Compare commits


220 Commits

Author SHA1 Message Date
Will Miao
0cc640cfaa fix(recipe): support ComfyUI-Easy-Use nodes in runtime metadata extraction (#920)
- Add EasyComfyLoaderExtractor for comfyLoader (easy comfyLoader):
  extracts checkpoint, optional_lora_stack as LoRA apply node,
  prompt text, clip_skip, and latent dimensions
- Add EasyPreSamplingExtractor for samplerSettings (easy preSampling):
  extracts steps, cfg, sampler_name, scheduler, denoise, seed
- Add EasySeedExtractor for easySeed
- Fix clip_skip hardcoded to '1' — now searched from SAMPLING metadata
- Lora Stacker nodes intentionally excluded from extraction to
  prevent double-counting; LoRAs only recorded at apply nodes
2026-05-02 23:21:51 +08:00
Will Miao
2ac0eb0f9d fix(wanvideo): resolve lora path resolution and name truncation for extra folder paths
- Use get_lora_info_absolute to obtain correct absolute paths for loras
  in LM extra folder paths, instead of folder_paths.get_full_path which
  only searches ComfyUI's standard loras directories (returned None)
- Fix name field truncation: str.split('.')[0] stopped at the first dot,
  replaced with os.path.splitext to only strip the file extension
- Add _relpath_within_loras helper to preserve subdirectory info in the
  name field, matching WanVideoWrapper's os.path.splitext(lora)[0] format
2026-05-02 14:55:12 +08:00
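A minimal sketch of the name-handling difference described in the commit above; the file name and paths are hypothetical examples, not values from the repository.

    import os

    filename = "style.v2.1.safetensors"    # hypothetical LoRA file name containing dots

    # str.split('.')[0] stops at the first dot and loses part of the name:
    print(filename.split('.')[0])           # -> style

    # os.path.splitext only strips the final extension:
    print(os.path.splitext(filename)[0])    # -> style.v2.1

    # Keeping subdirectory info relative to a loras root (hypothetical POSIX paths):
    loras_root = "/data/loras"
    full_path = "/data/loras/wanvideo/style.v2.1.safetensors"
    relative = os.path.relpath(full_path, loras_root)
    print(os.path.splitext(relative)[0])    # -> wanvideo/style.v2.1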
Will Miao
f028625ce9 feat(check-models-exist): add batch endpoint for checking multiple model IDs
New endpoint: GET /api/lm/check-models-exist?modelIds=1,2,3,...

Accepts comma-separated modelIds, returns a results array with one
entry per modelId. Uses a single scanner lookup batch - three
service-registry calls total, regardless of model count. Skips
history checks entirely (same rationale as the singleton endpoint:
when models exist locally, history is redundant).

Expected: reduces 231 HTTP round-trips to 1 for the browser
extension's model-card indicator flow. Combined with the prior
SQLite-connection and history-skip fixes, total wall-clock time
for a 175K-lora user's page load drops from ~9.4s to <10ms.
2026-05-02 13:43:53 +08:00
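A hedged sketch of the batching idea from the commit above: parse the comma-separated modelIds parameter and answer every ID from one pre-fetched lookup. The function name, the local_index structure, and the response fields are illustrative assumptions, not the handler's real code.

    def check_models_exist_batch(model_ids_param, local_index):
        """Illustrative only: one batched lookup instead of one request per model."""
        ids = [i.strip() for i in model_ids_param.split(",") if i.strip()]
        results = []
        for model_id in ids:
            # local_index stands in for a single pre-fetched scanner snapshot,
            # so no additional service calls happen per ID.
            results.append({"modelId": model_id, "exists": model_id in local_index})
        return results

    # Simulating GET /api/lm/check-models-exist?modelIds=1,2,3
    print(check_models_exist_batch("1,2,3", local_index={"1": {}, "3": {}}))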
Will Miao
06acc7f576 fix(trigger-word-toggle): default group children to active regardless of default_active 2026-05-02 13:33:42 +08:00
Will Miao
d324b57274 perf(check-model-exists): eliminate SQLite connection-per-query overhead and skip redundant history checks
Root cause: 231 concurrent /check-model-exists requests on 175K-lora library
caused ~9.4s wall clock time. The bottleneck was two-fold:

1. DownloadedVersionHistoryService opened a new sqlite3.connect() for every
   query under asyncio.Lock. With a large WAL from 175K entries, each
   connect() took ~8ms. Serialized by the lock across 231 requests, the
   230th request waited ~1848ms just for lock acquisition.

2. check_model_exists always queried download history even when the model
   was found locally. The history result (hasBeenDownloaded /
   downloadedVersionIds) is only used by the UI when the model is NOT
   found locally; when found, the 'in library' indicator takes priority.

Changes:
- downloaded_version_history_service.py: added persistent _get_conn() that
  creates the SQLite connection once and reuses it across all queries
- misc_handlers.py: early-return from check_model_exists when the model
  exists locally, bypassing the history service entirely (lock skipped)

Expected: per-request wait time drops from ~1912ms to <3ms, wall clock
from ~9.4s to <0.3s for the 175K-lora user's 231-card page.
2026-05-02 13:31:20 +08:00
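An illustrative sketch of the two changes in the commit above: a lazily created SQLite connection that is reused across queries, and an early return when the model is found locally. Class, table, and field names are assumptions for illustration, not the real service code.

    import sqlite3

    class HistoryService:
        """Sketch: create the SQLite connection once and reuse it across queries."""

        def __init__(self, db_path):
            self._db_path = db_path
            self._conn = None

        def _get_conn(self):
            if self._conn is None:
                # Connecting is the expensive step under a large WAL, so do it once.
                self._conn = sqlite3.connect(self._db_path)
            return self._conn

    def check_model_exists(model_id, found_locally, history):
        if found_locally:
            # Early return: the 'in library' indicator takes priority, so the
            # history query (and the lock protecting it) is skipped entirely.
            return {"exists": True}
        conn = history._get_conn()
        # Hypothetical table/column names, for illustration only.
        row = conn.execute(
            "SELECT 1 FROM download_history WHERE model_id = ?", (model_id,)
        ).fetchone()
        return {"exists": False, "hasBeenDownloaded": row is not None}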
Will Miao
502b7eab31 fix(layout): correct breadcrumb sticky behavior and controls wrapping overflow
- Extract breadcrumb from controls template into sibling component
- Fix breadcrumb sticky positioning (top: 0, z-index: calc(var(--z-header) - 1))
- Add 1500px breakpoint to wrap controls-right and prevent overflow
- Adjust breadcrumb padding-bottom to cover controls-right area when sticky
2026-05-01 22:53:40 +08:00
Will Miao
be75ad930e feat(layout): implement responsive edge-to-edge card grid with density-aware column calculation
- Add dynamic column calculation based on container width and min card width
- Prevent tiny cards on narrow windows by respecting density-based minimums:
  - Default: 240px, Medium: 200px, Compact: 170px
- Fix edge-to-edge layout with proper CSS selector (.virtual-scroll-item.model-card)
- Add hamburger menu for mobile/small screens with proper translations
- Update all locale files with 'common.actions.menu' key

Fixes: Cards becoming too small/overlapping on narrow window widths (e.g., 1156px)
Changes: 15 files, +569/-114 lines
2026-05-01 21:34:31 +08:00
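The column calculation in the commit above can be illustrated with simple arithmetic; the minimum widths are the density values from the commit message, while the gap size and function shape are assumptions.

    MIN_CARD_WIDTH = {"default": 240, "medium": 200, "compact": 170}

    def column_count(container_width, density="default", gap=16):
        """Sketch: columns that fit without cards shrinking below the density minimum
        (the 16px gap is an assumption; the minimum widths come from the commit)."""
        min_width = MIN_CARD_WIDTH[density]
        return max(1, (container_width + gap) // (min_width + gap))

    # The 1156px window width mentioned in the commit, at default density:
    print(column_count(1156))  # -> 4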
Will Miao
763c4f4dad feat(usage-control): add support for Civitai usageControl field
Handle models that are only available for on-site generation (usageControl:
"Generation" or "InternalGeneration") rather than downloadable.

Backend changes:
- Add usage_control field to ModelVersionRecord dataclass
- Extract usageControl from Civitai API responses
- Filter non-downloadable versions from update availability checks
- Add database schema migration for usage_control column
- Include usageControl in version response JSON

Frontend changes:
- Add isDownloadAllowed() helper function
- Show disabled download button for non-downloadable versions
- Add "On-Site Only" badge for restricted versions
- Update resolveUpdateAvailability() to filter non-downloadable versions
- Add CSS styling for disabled action button

Internationalization:
- Add translations for onSiteOnly badge and downloadNotAllowedTooltip
- Complete translations for all 10 supported languages
2026-05-01 13:10:15 +08:00
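A hedged sketch of the download-allowed check implied by the commit above; the usageControl values come from the message, while the function shape and field names are assumptions.

    ON_SITE_ONLY = {"Generation", "InternalGeneration"}

    def is_download_allowed(version):
        """Sketch: a version is downloadable unless usageControl marks it on-site only."""
        return version.get("usageControl") not in ON_SITE_ONLY

    # Filtering non-downloadable versions out of update-availability checks:
    versions = [
        {"id": 1, "usageControl": None},
        {"id": 2, "usageControl": "Generation"},
    ]
    print([v["id"] for v in versions if is_download_allowed(v)])  # -> [1]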
Will Miao
d32c492bdb feat(scripts): add legacy metadata migration tool
Add script to migrate metadata from legacy sidecar JSON files to
LoRA Manager's metadata.json format.

Features:
- Auto-discovers model folders from settings.json
- Supports LoRA and Checkpoint model types
- Migrates activation text, preferred weight (LoRA only), and notes
- Dry-run mode for safe preview
- Idempotent migration (won't duplicate existing data)
2026-05-01 08:56:00 +08:00
Will Miao
5dcfde36ea feat(doctor): add duplicate filename conflict detection and one-click resolution
Detects when multiple model files share the same basename (causing
ambiguity in LoRA resolution), logs warnings during scanning, and
provides a "Resolve Conflicts" button in the Doctor panel. Resolution
renames duplicates with hash-prefixed unique filenames, migrates all
sidecar and preview files, and updates the cache and frontend scroller
in-place so the model modal immediately reflects the new filename.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-30 15:21:26 +08:00
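A minimal sketch of hash-prefixed renaming for basename conflicts as described in the commit above; the prefix length, naming pattern, and example paths are assumptions for illustration.

    import os

    def unique_name(path, file_hash, prefix_len=8):
        """Sketch: disambiguate duplicate basenames by prefixing a short hash."""
        directory, filename = os.path.split(path)
        return os.path.join(directory, f"{file_hash[:prefix_len]}_{filename}")

    # Two files sharing the same basename in different folders (hypothetical):
    print(unique_name("loras/a/detail.safetensors", "3f9c2b7e1a94"))
    print(unique_name("loras/b/detail.safetensors", "91d04acc57e2"))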
Will Miao
1d035361a4 fix(download): accept Diffusion Model file type when selecting primary file from CivitAI metadata
CivitAI returns file type "Diffusion Model" for checkpoint files (e.g., Anima
models), but the file selection logic only accepted "Model" and "Negative",
causing "No suitable file found in metadata" errors.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-30 11:54:14 +08:00
Will Miao
25605c5e78 feat(ui): add setting to toggle version name display on model cards (#916) 2026-04-29 20:04:40 +08:00
Will Miao
f3268a6179 fix(autocomplete): prevent migrateWidgetsValues from dropping text widget values (#915)
shouldBypassAutocompleteWidgetMigration only matched inputs by widget name,
but ComfyUI's migrateWidgetsValues also matches forceInput inputs (like "seed").
This discrepancy meant the bypass never triggered for TextLM/PromptLM nodes,
causing migrateWidgetsValues to filter out real widget values by incorrectly
mapping forceInput flags onto saved autocomplete values.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-29 16:44:08 +08:00
Will Miao
055e94d77b fix(updates): chunk bulk queries to avoid SQLite variable limit (#914)
Split _get_records_bulk into 500-id batches so the WHERE IN clause
never exceeds SQLite's 999-parameter ceiling.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-28 19:15:44 +08:00
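A sketch of the batching pattern from the commit above, chunking IDs so a WHERE ... IN clause never carries more than SQLite's parameter ceiling; the table and column names are hypothetical.

    import sqlite3

    BATCH_SIZE = 500  # stays safely under SQLite's default 999-variable ceiling

    def get_records_bulk(conn, ids):
        """Sketch: chunk the IN clause so it never exceeds the parameter limit."""
        rows = []
        for start in range(0, len(ids), BATCH_SIZE):
            chunk = ids[start:start + BATCH_SIZE]
            placeholders = ",".join("?" * len(chunk))
            # Table and column names are illustrative only.
            rows.extend(conn.execute(
                f"SELECT * FROM versions WHERE id IN ({placeholders})", chunk
            ))
        return rows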
Will Miao
47fcd530a0 feat(settings): add aria2 wiki help link to download backend setting 2026-04-28 18:37:59 +08:00
Will Miao
3c32b9e088 feat(example-images): add wiki help link and i18n keys for remote open mode
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-27 19:45:16 +08:00
Will Miao
ffe0670a27 feat(example-images): add remote open mode support 2026-04-27 14:05:21 +08:00
Will Miao
cc147a1795 fix(metadata): preserve workflow when recipe images convert to webp 2026-04-25 07:50:51 +08:00
Will Miao
e81409bea4 fix(i18n): shorten bulk delete labels 2026-04-25 07:21:42 +08:00
Will Miao
b31fae4e51 fix(widgets): isolate autocomplete text cleanup 2026-04-23 20:07:11 +08:00
Will Miao
c6e5467907 fix(metadata): add MyOriginalWaifu prompt extractors 2026-04-23 16:05:40 +08:00
Will Miao
df0e5797d0 fix(nodes): save recipes synchronously from save image 2026-04-23 15:46:57 +08:00
Will Miao
ebdbb36271 fix(metadata): trace conditioning provenance for prompts 2026-04-23 14:41:54 +08:00
Will Miao
2eef629821 fix(checkpoints): singleflight pending hash calculation 2026-04-23 11:36:32 +08:00
Will Miao
658a04736d fix(recipes): save widget checkpoint metadata as dict 2026-04-23 11:20:20 +08:00
Will Miao
ef7f677933 chore(skills): add lora manager runtime context 2026-04-23 09:42:47 +08:00
Will Miao
63f0942452 fix(models): classify Anima as diffusion model 2026-04-23 07:35:34 +08:00
Will Miao
a1dff6dd47 fix(download): auto fetch example images after model download 2026-04-21 22:48:06 +08:00
Will Miao
7fa40023b0 fix(trigger-words): edit tag on double click 2026-04-21 22:31:56 +08:00
Will Miao
3c8acdb65e fix(trigger-words): support stable inline editing 2026-04-21 22:18:35 +08:00
Will Miao
1e9a7812d6 fix(model-modal): allow resizing notes editor 2026-04-21 21:42:06 +08:00
Will Miao
37f0e8f213 fix(trigger-words): raise group word limit 2026-04-21 16:35:25 +08:00
Will Miao
ecf7ea21e4 fix(duplicates): clear stale hash mismatch state (#900) 2026-04-21 16:22:04 +08:00
Will Miao
79dd9a1b29 fix(trigger-word-toggle): compact group editing for #907 2026-04-21 10:44:05 +08:00
Will Miao
ef4923fd94 fix(settings): normalize default root path comparisons 2026-04-21 09:43:37 +08:00
Will Miao
1eeba666f5 fix(network): restore destination-scoped memory download guard 2026-04-20 18:27:38 +08:00
pixelpaws
89e26d9292 Merge pull request #906 from willmiao/codex/github-mention-fixnetwork-add-connectivityguard-to-short
fix(network): return friendly offline message for memory downloads
2026-04-20 16:07:06 +08:00
pixelpaws
fc19a145ff Merge branch 'main' into codex/github-mention-fixnetwork-add-connectivityguard-to-short 2026-04-20 15:54:30 +08:00
Will Miao
34f03d6495 fix(settings): preserve extra default roots in comfyui sync 2026-04-20 15:48:30 +08:00
pixelpaws
9443175abc fix(network): return friendly offline message for memory downloads 2026-04-20 15:42:03 +08:00
pixelpaws
dc5072628f Merge pull request #905 from willmiao/codex/task-title
fix(network): add ConnectivityGuard to short‑circuit offline requests and reduce log spam
2026-04-20 15:41:38 +08:00
pixelpaws
ff4b8ec849 test(network): align cooldown short-circuit test with per-host guard 2026-04-20 15:30:50 +08:00
pixelpaws
7ab271c752 fix(network): scope connectivity cooldown by destination 2026-04-20 15:20:57 +08:00
pixelpaws
5a7f4dc88b fix(network): add offline cooldown guard for remote metadata requests 2026-04-20 15:04:04 +08:00
Will Miao
761108bfd1 fix(download): restore aria2 resume lifecycle 2026-04-20 09:52:48 +08:00
Will Miao
24dd3a777c fix(settings): align modal form control widths 2026-04-19 21:59:33 +08:00
Will Miao
1c530ea013 feat(download): add experimental aria2 backend 2026-04-19 21:46:09 +08:00
mudknight
0ced53c059 Use flex gap for header spacing (#901)
* Use flex gap for header spacing

* Remove extra margin
2026-04-18 19:33:39 +08:00
Will Miao
67ad68a23f fix(filters): apply preset base models from full list 2026-04-18 07:00:24 +08:00
pixelpaws
d9ec9c512e Merge pull request #899 from Phinease/fix/resumable-download-retries
fix: preserve resumable downloads across retries
2026-04-17 20:46:22 +08:00
Will Miao
0bcd8e09a9 fix(filters): improve base model filtering UX 2026-04-17 20:27:48 +08:00
Shuangrui CHEN
fa049a28c8 fix: preserve resumable downloads across retries 2026-04-17 03:35:41 +08:00
Will Miao
89fd2b43d6 chore(release): bump version to v1.0.5 and add release notes 2026-04-16 21:52:34 +08:00
Will Miao
c53f44e7ef feat(excluded-models): add excluded management view 2026-04-16 21:40:59 +08:00
Will Miao
ae7bfdb517 fix(download): normalize civitai.red download URLs (#898) 2026-04-16 18:25:16 +08:00
Will Miao
68bf8442eb chore(release): bump version to v1.0.4 and add release notes 2026-04-16 14:26:28 +08:00
Will Miao
605fbf4117 feat(civitai): add host preference for view links 2026-04-16 13:28:51 +08:00
Will Miao
406d5fea6a fix(civitai): use red-only api host (#897) 2026-04-16 12:08:07 +08:00
Will Miao
af2146f96c fix(civitai): fallback image info hosts on request failure 2026-04-16 09:29:03 +08:00
Will Miao
bdc8dec860 fix(civitai): support civitai.red URLs (#897) 2026-04-16 08:54:12 +08:00
Will Miao
c4fa1631ee chore: bump version to v1.0.3 2026-04-15 23:10:43 +08:00
Will Miao
506d763dc2 chore: add pyyaml dependency 2026-04-15 23:07:36 +08:00
Will Miao
a2cd09b619 docs: add v1.0.3 release notes 2026-04-15 22:52:04 +08:00
Will Miao
cdd77029b6 fix(autocomplete): improve wildcard onboarding UX 2026-04-15 22:25:25 +08:00
Will Miao
439679e15f fix(autocomplete): preserve manual accept-key selection 2026-04-15 21:19:00 +08:00
Will Miao
2640258902 fix(prompt): invalidate dynamic wildcard cache without seed (#895) 2026-04-15 20:43:21 +08:00
Will Miao
b910388d54 fix(autocomplete): remove short prompt command aliases (#895) 2026-04-15 20:43:03 +08:00
Will Miao
083de395b1 chore(logging): remove autocomplete debug logs (#895) 2026-04-15 20:42:55 +08:00
Will Miao
4514ca94b7 fix(autocomplete): reduce tag search overhead (#895) 2026-04-15 20:42:33 +08:00
Will Miao
62247bdd87 feat(prompt): expand wildcards at runtime (#895) 2026-04-15 20:42:27 +08:00
Will Miao
6d0d9600a7 fix(versions): clarify tab hover states and copy 2026-04-13 21:12:13 +08:00
Will Miao
70cd3f4e1b fix(download-history): use title for downloaded tooltip 2026-04-13 20:26:40 +08:00
Will Miao
a95c518b30 feat(download-history): add downloaded status UX 2026-04-13 19:51:04 +08:00
Will Miao
ba1800095e fix(recipes): preserve scroll on in-place reloads 2026-04-13 10:30:50 +08:00
Will Miao
39c083db79 fix(recipes): preserve legacy gen params in modal flows 2026-04-12 21:25:54 +08:00
Will Miao
55e9e4bb6f fix(recipes): sanitize remote import gen params 2026-04-12 20:29:01 +08:00
Will Miao
0253d001e6 fix(recipe): hydrate stale modal data from recipe json 2026-04-12 19:22:58 +08:00
Will Miao
9998da3241 fix(ui): refresh stale model page versions 2026-04-11 20:11:21 +08:00
Will Miao
6666a72775 fix(doctor): center status badge 2026-04-11 16:28:14 +08:00
Will Miao
5f1bd894b9 fix(settings): prevent library modal focus jump 2026-04-11 16:20:37 +08:00
Will Miao
1817142a7b feat(doctor): add system diagnostics feature 2026-04-11 16:03:38 +08:00
Will Miao
25fa175aa2 fix(usage): resolve checkpoint hashes from disk 2026-04-10 22:28:04 +08:00
Will Miao
39643eb2bc fix(metadata): recover prompts through scheduled guidance 2026-04-10 21:36:42 +08:00
Will Miao
4ac78f8aa8 fix(settings): reserve scrollbar space in settings content 2026-04-10 21:13:48 +08:00
Will Miao
0bcca0ba68 fix(settings): clarify backup scope in UI 2026-04-10 21:04:11 +08:00
Will Miao
72f8e0d1be fix(backup): add user-state backup UI and storage 2026-04-10 20:49:30 +08:00
Will Miao
85b6c91192 fix(download): add ZImageBase to diffusion model routing (#892) 2026-04-10 08:55:28 +08:00
Will Miao
908016cbd6 fix(recipe modal): compact layout on short viewports (#891) 2026-04-09 22:46:25 +08:00
Will Miao
a5ac9cf81b Revert "fix(recipes): make recipe modal viewport-safe (#891)"
This reverts commit 51fe7aa07e.
2026-04-09 22:28:29 +08:00
Will Miao
32875042bd feat(metadata): support PromptAttention CLIP encoder 2026-04-09 19:21:25 +08:00
Will Miao
51fe7aa07e fix(recipes): make recipe modal viewport-safe (#891) 2026-04-09 19:14:12 +08:00
Will Miao
db4726a961 feat(recipes): add configurable storage path migration 2026-04-09 15:57:37 +08:00
Will Miao
e13d70248a fix(usage-stats): resolve pending checkpoint hashes 2026-04-08 09:40:20 +08:00
pixelpaws
1c4919a3e8 Merge pull request #887 from NubeBuster/feat/usage-extractors
feat(usage-stats): add extractors for rgthree Power LoRA Loader and TensorRT loaders
2026-04-08 09:32:08 +08:00
Will Miao
18ddadc9ec feat(autocomplete): auto-format textarea on blur (#884) 2026-04-08 07:57:28 +08:00
Will Miao
b6dd6938b0 docs: add v1.0.2 release notes, bump version to 1.0.2 2026-04-06 20:14:26 +08:00
NubeBuster
b711ac468a feat(usage-stats): add extractors for rgthree Power LoRA Loader and TensorRT Loader
Fixes #394 — LoRAs loaded via rgthree Power Lora Loader were not
tracked in usage statistics because no extractor existed for that node.

New extractors:
- RgthreePowerLoraLoaderExtractor: parses LORA_* kwargs, respects
  the per-LoRA 'on' toggle
- TensorRTLoaderExtractor: parses engine filename (strips _$profile
  suffix) as best-effort for vanilla TRT. If the output MODEL has
  attachments["source_model"] (set by NubeBuster fork), overrides
  with the real checkpoint name.

TensorRTRefitLoader and TensorRTLoaderAuto take a MODEL input whose
upstream checkpoint loader is already tracked — no extractor needed.

Also adds a name:<filename> fallback and warning log in both
_process_checkpoints and _process_loras when hash lookup fails.
2026-04-05 16:45:21 +02:00
Will Miao
727d0ef043 feat(misc): add model download status aggregation 2026-04-03 22:17:09 +08:00
Will Miao
9344d86332 test(misc): cover model existence download status 2026-04-03 22:16:09 +08:00
Will Miao
d36b16c213 feat(settings): skip previously downloaded model versions 2026-04-03 19:01:19 +08:00
Will Miao
33a7f07558 feat(download-history): track downloaded model versions 2026-04-03 16:13:14 +08:00
Will Miao
4f599aeced fix(trigger-words): propagate LORA_STACK updates through combiners (#881) 2026-04-03 15:01:02 +08:00
Will Miao
30db8c3d1d fix(csp): support CivitAI CDN subdomains for example images (#822)
- Update CSP whitelist to use wildcard *.civitai.com for all CDN subdomains
- Fix hostname parsing to use parsed.hostname instead of parsed.netloc (handles ports)
- Update rewrite_preview_url() to support all CivitAI CDN subdomains
- Update rewriteCivitaiUrl() frontend function to support subdomains
- Add comprehensive tests for edge cases (ports, subdomains, invalid URLs)
- Add security note explaining wildcard CSP design decision

Fixes CSP blocking of images from image-b2.civitai.com and other CDN subdomains
2026-04-03 09:40:15 +08:00
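A sketch of the hostname check described in the commit above: urlparse().hostname drops any port, while netloc keeps it, which is why hostname is the safer field for subdomain matching. The helper name is an assumption.

    from urllib.parse import urlparse

    def is_civitai_cdn(url):
        """Sketch: accept civitai.com and any of its subdomains."""
        host = urlparse(url).hostname  # hostname strips ports, unlike netloc
        if not host:
            return False
        return host == "civitai.com" or host.endswith(".civitai.com")

    print(is_civitai_cdn("https://image-b2.civitai.com/img.webp"))   # True
    print(is_civitai_cdn("https://image.civitai.com:443/img.webp"))  # True (port ignored)
    print(is_civitai_cdn("https://evilcivitai.com/img.webp"))        # False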
Will Miao
05636712f0 docs: fix formatting in v1.0.1 release notes 2026-04-02 11:59:29 +08:00
Will Miao
d8e5fe1247 docs: add v1.0.1 release notes, bump version to 1.0.1 2026-04-02 11:54:04 +08:00
Will Miao
3e9210394a feat(settings): Improve Extra Folder Paths UX with restart indicators
- Replace tooltip with restart-required icon for better visibility
- Update descriptions to accurately reflect feature purpose
- Fix toast message to show correct restart notification
- Sync i18n keys across all supported languages
2026-04-02 08:57:04 +08:00
Will Miao
4dd2c0526f chore(supporters): Update supporters 2026-04-01 22:56:20 +08:00
Will Miao
9bdb337962 fix(settings): enforce valid default model roots 2026-04-01 20:36:37 +08:00
Will Miao
f93baf5fc0 chore(workflow): Update example workflows 2026-04-01 15:39:20 +08:00
Will Miao
14cb7fec47 feat(cycler): add preset strength scale (#865) 2026-04-01 11:05:38 +08:00
Will Miao
f3b3e0adad fix(randomizer): defer UI updates until workflow completion (fixes #824) 2026-04-01 10:29:27 +08:00
Will Miao
ba3f15dbc6 feat(checkpoints): add 'Send to Workflow' option in context menu
- Add 'Send to Workflow' menu item to checkpoint context menu (templates/checkpoints.html)
- Implement sendCheckpointToWorkflow() method in CheckpointContextMenu.js
- Use unified 'Model' terminology for toast messages instead of differentiating checkpoint/diffusion model
- Add translation keys: checkpoints.contextMenu.sendToWorkflow, uiHelpers.workflow.modelUpdated, modelFailed
- Complete translations for all 10 locales (en, zh-CN, zh-TW, ja, ko, de, fr, es, ru, he)
2026-03-31 19:52:20 +08:00
Will Miao
8dc2a2f76b fix(recipe): show checkpoint-linked recipes in model modal (#851) 2026-03-31 16:45:01 +08:00
Will Miao
316f17dd46 fix(recipe): Import LoRAs from Civitai image URLs using modelVersionIds (#868)
When importing recipes from Civitai image URLs, the API returns modelVersionIds
at the root level instead of inside the meta object. This caused LoRA information
to not be recognized and imported.

Changes:
- analysis_service.py: Merge modelVersionIds from image_info into metadata
- civitai_image.py: Add modelVersionIds field recognition and processing logic
- test_civitai_image_parser.py: Add test for modelVersionIds handling
2026-03-31 14:34:13 +08:00
Will Miao
3dc10b1404 feat(recipe): add editable prompts in recipe modal (#869) 2026-03-31 14:11:56 +08:00
Will Miao
331889d872 chore(i18n): improve recursive toggle button labels for clarity (#875)
Update translations for sidebar recursive toggle from 'Search subfolders'
to 'Include subfolders' / 'Current folder only' across all 10 languages.

This better describes the actual functionality - controlling whether
models/recipes from subfolders are included in the current view.

Related to #875
2026-03-30 15:26:15 +08:00
Will Miao
06f1a82d4c fix(tests): add missing MODEL_TYPES mock in ModelModal tests
Add mock for apiConfig.js MODEL_TYPES constant in test files to fix
'Cannot read properties of undefined' errors when running npm test.

- tests/frontend/components/modelMetadata.renamePath.test.js
- tests/frontend/components/modelModal.licenseIcons.test.js
2026-03-30 08:37:12 +08:00
Will Miao
267082c712 feat: add 'Send to ComfyUI' button to ModelModal and RecipeModal
- Add send button to ModelModal header for all model types (LoRA, Checkpoint, Embedding)
- Add send button to RecipeModal header for sending entire recipes
- Style buttons to match existing modal action buttons
- Add translations for all supported languages
2026-03-29 20:35:08 +08:00
Will Miao
a4cb51e96c fix(nodes): preserve autocomplete widget values across workflow restore 2026-03-29 19:25:30 +08:00
Will Miao
ca44c367b3 fix(recipe): improve Civitai URL generation for missing LoRAs
Use model-versions endpoint (https://civitai.com/model-versions/{id}) which
auto-redirects to the correct model page when only versionId is available.

This fixes the UX issue where clicking on 'Not in Library' LoRA entries in
Recipe Modal would open a search page instead of the actual model page.

Changes:
- uiHelpers.js: Prioritize versionId over modelId for Civitai URLs
- RecipeModal.js: Include versionId in navigation condition checks
2026-03-29 15:33:30 +08:00
Will Miao
301ab14781 fix(nodes): restore autocomplete widget sync after metadata insertion (#879) 2026-03-29 10:09:39 +08:00
Will Miao
2626dbab8e feat: add lora stack combiner node 2026-03-29 08:28:00 +08:00
Will Miao
12bbb0572d fix: Add missing mock for getMappableBaseModelsDynamic in tests (#854)
- Add getMappableBaseModelsDynamic to constants.js mocks in test files
- Remove refs/enums.json temporary file from repository

Fixes test failures introduced in previous commit.
2026-03-29 00:24:20 +08:00
Will Miao
00f5c1e887 feat: Dynamic base model fetching from Civitai API (#854)
Implement automatic fetching of base models from Civitai API to keep
data up-to-date without manual updates.

Backend:
- Add CivitaiBaseModelService with 7-day TTL caching
- Add /api/lm/base-models endpoints for fetching and refreshing
- Merge hardcoded and remote models for backward compatibility
- Smart abbreviation generation for unknown models

Frontend:
- Add civitaiBaseModelApi client for API communication
- Dynamic base model loading on app initialization
- Update SettingsManager to use merged model lists
- Add support for 8 new models: Anima, CogVideoX, LTXV 2.3, Mochi,
  Pony V7, Wan Video 2.5 T2V/I2V

API Endpoints:
- GET /api/lm/base-models - Get merged models
- POST /api/lm/base-models/refresh - Force refresh
- GET /api/lm/base-models/categories - Get categories
- GET /api/lm/base-models/cache-status - Check cache status

Closes #854
2026-03-29 00:18:15 +08:00
Will Miao
89b1675ec7 fix: wheel zoom behavior for LoRA Manager widgets
- Add forwardWheelToCanvas() utility for vanilla JS widgets
- Implement wheel event handling in Vue widgets (LoraCyclerWidget, LoraRandomizerWidget, LoraPoolWidget)
- Update SingleSlider and DualRangeSlider to stop event propagation after value adjustment
- Ensure consistent behavior: slider adjusts value only, other areas trigger canvas zoom
- Support pinch-to-zoom (Ctrl+wheel) and horizontal scroll forwarding
2026-03-28 22:42:26 +08:00
Will Miao
dcc7bd33b5 fix(autocomplete): make accept key behavior configurable (#863) 2026-03-28 20:21:23 +08:00
Will Miao
e5152108ba fix(autocomplete): treat newline as a hard boundary 2026-03-28 19:29:30 +08:00
Will Miao
1ed5eef985 feat(autocomplete): support Tab accept and configurable suffix behavior (#863) 2026-03-28 19:18:23 +08:00
Will Miao
a82f89d14a fix(nodes): expose save image outputs to generated assets 2026-03-28 14:28:48 +08:00
Will Miao
16e30ea689 fix(nodes): add save_with_metadata toggle to save image 2026-03-28 11:17:36 +08:00
pixelpaws
ad3bdddb72 Merge pull request #876 from willmiao/codex/analyze-issue-869-on-github
Handle Enter on tag input to add tags and add unit tests
2026-03-27 19:55:12 +08:00
pixelpaws
9121306b06 Guard Enter tag add during IME composition 2026-03-27 19:52:53 +08:00
Will Miao
ca0baf9462 fix(nodes): lazy load qwen lora helper 2026-03-27 19:44:05 +08:00
pixelpaws
20e50156a2 fix(recipes): allow Enter to add import tags 2026-03-27 19:28:58 +08:00
Will Miao
0b66bf5479 chore: update AGENTS commit guidance 2026-03-27 19:26:13 +08:00
Will Miao
1e8aca4787 Add experimental Nunchaku Qwen LoRA support (#873) 2026-03-27 19:24:43 +08:00
Will Miao
76ee59cdb9 fix(paths): deduplicate LoRA path overlap (#871) 2026-03-27 17:35:24 +08:00
Will Miao
a5191414cc feat(download): add configurable base model download exclusions 2026-03-26 23:07:12 +08:00
Will Miao
5b065b47d4 feat(i18n): complete translations for mature blur threshold setting
Add translations for the new mature_blur_level setting across all
supported languages:
- zh-CN: 成人内容模糊阈值
- zh-TW: 成人內容模糊閾值
- ja: 成人コンテンツぼかし閾値
- ko: 성인 콘텐츠 블러 임계값
- de: Schwelle für Unschärfe bei jugendgefährdenden Inhalten
- fr: Seuil de floutage pour contenu adulte
- es: Umbral de difuminado para contenido adulto
- ru: Порог размытия взрослого контента
- he: סף טשטוש תוכן מבוגרים

Completes TODOs from previous commit.
2026-03-26 18:40:33 +08:00
Will Miao
ceeab0c998 feat: add configurable mature blur threshold setting
Add new setting 'mature_blur_level' with options PG13/R/X/XXX to control
which NSFW rating level triggers blur filtering when NSFW blur is enabled.

- Backend: update preview selection logic to respect threshold
- Frontend: update UI components to use configurable threshold
- Settings: add validation and normalization for mature_blur_level
- Tests: add coverage for new threshold behavior
- Translations: add keys for all supported languages

Fixes #867
2026-03-26 18:24:47 +08:00
Will Miao
3b001a6cd8 fix(tests): update tests to match current download implementation
- Remove calculate_sha256 mocking from download_manager tests since
  SHA256 now comes from API metadata (not recalculated during download)
- Update chunk_size assertion from 4MB to 16MB in downloader config test
2026-03-26 18:00:04 +08:00
Will Miao
95e5bc26d1 feat: Add bulk download missing LoRAs feature for recipes
- Add BulkMissingLoraDownloadManager.js for handling bulk LoRA downloads
- Add context menu item to bulk mode for downloading missing LoRAs
- Add confirmation modal with deduplicated LoRA list preview
- Implement sequential downloading with WebSocket progress updates
- Fix CSS class naming conflicts to avoid import-modal.css collision
- Update translations for 9 languages (en, zh-CN, zh-TW, ja, ko, ru, de, fr, es, he)
- Style modal without internal scrolling for better UX
2026-03-26 17:46:53 +08:00
Will Miao
de3d0571f8 fix: verify returned image ID matches requested ID in CivitAI API
Fix issue #870 where importing recipes from CivitAI image URLs would
return the wrong image when the API response did not contain the
requested image ID.

The get_image_info() method now:
- Iterates through all returned items to find matching ID
- Returns None when no match is found and logs warning with returned IDs
- Handles invalid (non-numeric) ID formats

New test cases:
- test_get_image_info_returns_matching_item
- test_get_image_info_returns_none_when_id_mismatch
- test_get_image_info_handles_invalid_id

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-25 20:37:51 +08:00
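A sketch of the matching logic described in the commit above: scan the returned items for the requested ID instead of trusting the first result. The field names and function signature are assumptions.

    import logging

    logger = logging.getLogger(__name__)

    def pick_matching_image(items, requested_id):
        """Sketch: return the item whose id matches, or None with a warning."""
        try:
            wanted = int(requested_id)
        except (TypeError, ValueError):
            logger.warning("Invalid image id %r", requested_id)
            return None
        for item in items:
            if item.get("id") == wanted:
                return item
        logger.warning("No item matched id %s; got %s",
                       wanted, [i.get("id") for i in items])
        return None

    print(pick_matching_image([{"id": 10}, {"id": 42}], "42"))  # -> {'id': 42}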
Will Miao
6f2a01dc86 Optimize download performance: remove SHA256 calculation and use 16MB chunks
- Remove the post-download SHA256 calculation and use the hash value returned by the API directly
- Increase chunk size from 4MB to 16MB, reducing I/O operations by 75%
- This helps alleviate stuttering during ComfyUI execution
2026-03-25 19:29:48 +08:00
Will Miao
c5c1b8fd2a Fix: border corner clipping in duplicate recipe warning
Fix the bottom corners of duplicate warning border being clipped
due to parent container overflow:hidden and mismatched border-radius.

- Changed border-radius from top-only to all corners
- Ensures yellow border displays fully without being cut off
2026-03-25 13:57:38 +08:00
Will Miao
e97648c70b feat(import): add import-only option for recipes without downloading missing LoRAs
Add dual-button design in recipe import flow:
- Details step: [Import Recipe Only] [Import & Download]
- Location step: [Back] [Import & Download] (removed redundant Import Only)

Changes:
- templates/components/import_modal.html: Add secondary button for import-only
- static/js/managers/ImportManager.js: Add saveRecipeOnlyFromDetails() method
- static/js/managers/import/RecipeDataManager.js: Update button state management
- static/js/managers/import/DownloadManager.js: Support skipDownload flag
- locales/*.json: Complete all translation TODOs

Closes #868
2026-03-25 11:56:34 +08:00
Will Miao
8b85e083e2 feat(recipe-parser): add SuiImage metadata format support
- Add SuiImageParamsParser for sui_image_params JSON format
- Register new parser in RecipeParserFactory
- Fix metadata_provider auto-initialization when not ready
- Add 10 test cases for SuiImageParamsParser

Fixes batch import failure for images with sui_image_params metadata.
2026-03-25 08:43:33 +08:00
Will Miao
9112cd3b62 chore: Add .claude/ to gitignore
Exclude Claude Code personal configuration directory containing:
- settings.local.json (personal permissions and local paths)
- skills/ (personal skills)

These contain machine-specific paths and personal preferences
that should not be shared across the team.
2026-03-22 14:17:15 +08:00
Will Miao
7df4e8d037 fix(metadata_hook): correct function signature to fix bound method error
Fix issue #866 where the metadata hook's async wrapper used *args/**kwargs
which caused AttributeError when ComfyUI's make_locked_method_func tried
to access __func__ on the func parameter.

The async_map_node_over_list_with_metadata wrapper now uses the exact
same signature as ComfyUI's _async_map_node_over_list:
- Removed: *args, **kwargs
- Added: explicit v3_data=None parameter

This ensures the func parameter (always a string like obj.FUNCTION) is
passed correctly to make_locked_method_func without any type conversion.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-22 13:25:04 +08:00
Will Miao
4000b7f7e7 feat: Add configurable LoRA strength adjustment step setting
Implements issue #808 - Allow users to customize the strength
variation range for LoRA widget arrow buttons.

Changes:
- Add 'Strength Adjustment Step' setting (0.01-0.1) in settings.js
- Replace hardcoded 0.05 increments with configurable step value
- Apply to both LoRA strength and CLIP strength controls

Fixes #808
2026-03-19 17:33:18 +08:00
Will Miao
76c15105e6 feat(lora-pool): add regex include/exclude name pattern filtering (#839)
Add name pattern filtering to LoRA Pool node allowing users to filter
LoRAs by filename or model name using either plain text or regex patterns.

Features:
- Include patterns: only show LoRAs matching at least one pattern
- Exclude patterns: exclude LoRAs matching any pattern
- Regex toggle: switch between substring and regex matching
- Case-insensitive matching for both modes
- Invalid regex automatically falls back to substring matching
- Filters apply to both file_name and model_name fields

Backend:
- Update LoraPoolLM._default_config() with namePatterns structure
- Add name pattern filtering to _apply_pool_filters() and _apply_specific_filters()
- Add API parameter parsing for name_pattern_include/exclude/use_regex
- Update LoraPoolConfig type with namePatterns field

Frontend:
- Add NamePatternsSection.vue component with pattern input UI
- Update useLoraPoolState to manage pattern state and API integration
- Update LoraPoolSummaryView to display NamePatternsSection
- Increase LORA_POOL_WIDGET_MIN_HEIGHT to accommodate new UI

Tests:
- Add 7 test cases covering text/regex include, exclude, combined
  filtering, model name fallback, and invalid regex handling

Closes #839
2026-03-19 17:15:05 +08:00
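A hedged sketch of the filtering behavior listed in the commit above: case-insensitive matching, regex mode with fallback to substring matching when the pattern is invalid, and checks against both the file name and the model name. Function and field names are illustrative.

    import re

    def _matches(value, pattern, use_regex):
        if use_regex:
            try:
                return re.search(pattern, value, re.IGNORECASE) is not None
            except re.error:
                pass  # invalid regex falls back to substring matching
        return pattern.lower() in value.lower()

    def passes_name_filters(lora, include, exclude, use_regex):
        fields = [lora.get("file_name", ""), lora.get("model_name", "")]
        if include and not any(_matches(f, p, use_regex) for p in include for f in fields):
            return False
        if any(_matches(f, p, use_regex) for p in exclude for f in fields):
            return False
        return True

    lora = {"file_name": "styleXL_v2.safetensors", "model_name": "Style XL"}
    # "(v1" is an invalid regex, so it is treated as a plain substring here:
    print(passes_name_filters(lora, include=["style"], exclude=["(v1"], use_regex=True))  # True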
Will Miao
b11c90e19b feat: add type ignore comments and remove unused imports
- Add `# type: ignore` comments to comfy.sd and folder_paths imports
- Remove unused imports: os, random, and extract_lora_name
- Clean up import statements across checkpoint_loader, lora_randomizer, and unet_loader nodes
2026-03-19 15:54:49 +08:00
pixelpaws
9f5d2d0c18 Merge pull request #862 from EnragedAntelope/claude/add-webp-image-support-t8kG9
Improve webp image support
2026-03-19 15:35:16 +08:00
Will Miao
a0dc5229f4 feat(unet_loader): move torch import inside methods for lazy loading
- Delay torch import until needed in load_unet and load_unet_gguf methods
- This improves module loading performance by avoiding unnecessary imports
- Maintains functionality while reducing initial import overhead
2026-03-19 15:29:41 +08:00
Will Miao
61c31ecbd0 fix: exclude __init__.py from pytest collection to prevent CI import errors 2026-03-19 14:43:45 +08:00
Will Miao
1ae1b0d607 refactor: move No LoRA feature from LoRA Pool to Lora Cycler widget
Move the 'empty/no LoRA' cycling functionality from the LoRA Pool node
to the Lora Cycler widget for cleaner architecture:

Frontend changes:
- Add include_no_lora field to CyclerConfig interface
- Add includeNoLora state and logic to useLoraCyclerState composable
- Add toggle UI in LoraCyclerSettingsView with special styling
- Show 'No LoRA' entry in LoraListModal when enabled
- Update LoraCyclerWidget to integrate new logic

Backend changes:
- lora_cycler.py reads include_no_lora from config
- Calculate effective_total_count (actual count + 1 when enabled)
- Return empty lora_stack when on No LoRA position
- Return actual LoRA count in total_count (not effective count)

Reverted files to pre-PR state:
- lora_loader.py, lora_pool.py, lora_randomizer.py, lora_stacker.py
- lora_routes.py, lora_service.py
- LoraPoolWidget.vue and related files

Related to PR #861

Co-authored-by: dogatech <dogatech@dogatech.home>
2026-03-19 14:19:49 +08:00
dogatech
8dd849892d Allow for empty lora (no loras option) in Lora Pool 2026-03-19 09:23:03 +08:00
Will Miao
03e1fa75c5 feat: auto-focus URL input when batch import modal opens 2026-03-18 22:33:45 +08:00
Will Miao
fefcaa4a45 fix: improve Civitai recipe import by extracting EXIF when API metadata is empty
- Add validation to check if Civitai API metadata contains recipe fields
- Fall back to EXIF extraction when API returns empty metadata (meta.meta=null)
- Improve error messages to distinguish between missing metadata and unsupported format
- Add _has_recipe_fields() helper method to validate metadata content

This fixes import failures for Civitai images where the API returns
metadata wrapper but no actual generation parameters (e.g., images
edited in Photoshop that lost their original generation metadata)
2026-03-18 22:30:36 +08:00
Will Miao
701a6a6c44 refactor: remove GGUF loading logic from CheckpointLoaderLM
GGUF models are pure Unet models and should be handled by UNETLoaderLM.
2026-03-18 21:36:07 +08:00
Will Miao
0ef414d17e feat: standardize Checkpoint/Unet loader names and use OS-native path separators
- Rename nodes to 'Checkpoint Loader (LoraManager)' and 'Unet Loader (LoraManager)'
- Use os.sep for relative path formatting in model COMBO inputs
- Update path matching to be robust across OS separators
- Update docstrings and comments
2026-03-18 21:33:19 +08:00
Will Miao
75dccaef87 test: fix cache validator tests to account for new hash_status field and side effects 2026-03-18 21:10:56 +08:00
Will Miao
7e87ec9521 fix: persist hash_status in model cache to support lazy hashing on restart 2026-03-18 21:07:40 +08:00
Will Miao
46522edb1b refactor: simplify GGUF import helper with dynamic path detection
- Add _get_gguf_path() to dynamically derive ComfyUI-GGUF path from current file location
- Remove Strategy 2 and 3, keeping only Strategy 1 (sys.modules path-based lookup)
- Remove hard-coded absolute paths
- Streamline logging output
- Code cleanup: reduced from 235 to 154 lines
2026-03-18 19:55:54 +08:00
Will Miao
2dae4c1291 fix: isolate extra unet paths from checkpoints to prevent type misclassification
Refactor _prepare_checkpoint_paths() to return a tuple instead of having
side effects on instance variables. This prevents extra unet paths from
being incorrectly classified as checkpoints when processing extra paths.

- Changed return type from List[str] to Tuple[List[str], List[str], List[str]]
  (all_paths, checkpoint_roots, unet_roots)
- Updated _init_checkpoint_paths() and _apply_library_paths() callers
- Fixed extra paths processing to properly isolate main and extra roots
- Updated test_checkpoint_path_overlap.py tests for new API

This ensures models in extra unet paths are correctly identified as
diffusion_model type and don't appear in checkpoints list.
2026-03-17 22:03:57 +08:00
EnragedAntelope
a32325402e Merge branch 'willmiao:main' into claude/add-webp-image-support-t8kG9 2026-03-17 08:37:46 -04:00
Will Miao
70c150bd80 fix(services): implement stable sorting for model and recipe caches
Add file_path as a tie-breaker for all sort modes in ModelCache, BaseModelService, LoraService, and RecipeCache to ensure deterministic ordering when primary keys are identical. Resolves issue #859.
2026-03-17 14:20:23 +08:00
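The tie-breaker from the commit above can be illustrated in one line: sort on the primary key first and on file_path second, so identical names get a deterministic order. The dictionaries below are made-up examples, not cache entries.

    models = [
        {"model_name": "detail", "file_path": "/loras/b/detail.safetensors"},
        {"model_name": "detail", "file_path": "/loras/a/detail.safetensors"},
    ]

    # Sketch: file_path breaks ties when the primary sort key (name) is identical.
    ordered = sorted(models, key=lambda m: (m["model_name"], m["file_path"]))
    print([m["file_path"] for m in ordered])
    # -> ['/loras/a/detail.safetensors', '/loras/b/detail.safetensors']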
Will Miao
9e81c33f8a fix(utils): make sanitize_folder_name idempotent by combining strip/rstrip calls 2026-03-17 11:24:59 +08:00
Will Miao
22c0dbd734 feat(recipes): persist 'Skip images without metadata' choice in batch import 2026-03-17 11:01:41 +08:00
Will Miao
d0c58472be fix(i18n): add missing common.actions.close translation key 2026-03-17 09:57:27 +08:00
Will Miao
b3c530bf36 fix(autocomplete): handle multi-word tag matching with normalized spaces
- Replace multiple consecutive spaces with single underscore for tag matching
  (e.g., 'looking  to   the side' → 'looking_to_the_side')
- Support prefix/suffix matching for flexible multi-word autocomplete
  (e.g., 'looking to the' → 'looking_to_the_side')
- Add comprehensive test coverage for multi-word scenarios

Test coverage:
- Multi-word exact match (Danbooru convention)
- Partial match with last token replacement
- Command mode with multi-word phrases
- Multiple consecutive spaces handling
- Backend LOG10 popularity weight validation

Fixes: 'looking to the side' input now correctly replaces with
'looking_to_the_side, ' (or 'looking to the side, ' with space replacement)
2026-03-17 09:34:01 +08:00
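A sketch of the space normalization described in the commit above, collapsing runs of whitespace into a single underscore before matching; the regex-based approach and helper name are assumptions.

    import re

    def normalize_tag_query(query):
        """Sketch: 'looking  to   the side' -> 'looking_to_the_side'."""
        return re.sub(r"\s+", "_", query.strip())

    print(normalize_tag_query("looking  to   the side"))  # -> looking_to_the_side

    # Prefix matching against Danbooru-style tags then becomes a plain startswith check:
    tags = ["looking_to_the_side", "looking_at_viewer"]
    prefix = normalize_tag_query("looking to the")
    print([t for t in tags if t.startswith(prefix)])       # -> ['looking_to_the_side']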
Claude
05ebd7493d chore: update package-lock.json after npm install
https://claude.ai/code/session_01SgT2pkisi27bEQELX5EeXZ
2026-03-17 01:33:34 +00:00
Claude
90986bd795 feat: add case-insensitive webp support for lora cover photos
Make preview file discovery case-insensitive so files with uppercase
extensions like .WEBP are found on case-sensitive filesystems. Also
explicitly list image/webp in the file picker accept attribute for
broader browser compatibility.

https://claude.ai/code/session_01SgT2pkisi27bEQELX5EeXZ
2026-03-17 01:32:48 +00:00
Will Miao
b5a0725d2c fix(autocomplete): improve tag search ranking with popularity-based sorting
- Add LOG10(post_count) weighting to BM25 score for better relevance ranking
- Prioritize tag_name prefix matches above alias matches using CASE statement
- Remove frontend re-scoring logic to trust backend sorting results
- Fix pagination consistency: page N+1 scores <= page N minimum score

Key improvements:
- '1girl' (6M posts) now ranks #1 instead of #149 for search '1'
- tag_name prefix matches always appear before alias matches
- Popular tags rank higher than obscure ones with same prefix
- Consistent ordering across pagination boundaries

Test coverage:
- Add test_search_tag_name_prefix_match_priority
- Add test_search_ranks_popular_tags_higher
- Add test_search_pagination_ordering_consistency
- Add test_search_rank_score_includes_popularity_weight
- Update test data with 15 tags starting with '1'

Fixes issues with autocomplete dropdown showing inconsistent results
when scrolling through paginated search results.
2026-03-16 19:09:07 +08:00
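The ranking change in the commit above can be sketched as a simple scoring function: the base relevance score gets a log10(post_count) boost, so very popular tags outrank obscure ones that match equally well. The exact combination used in the SQL query is not reproduced here; this only shows the arithmetic idea.

    import math

    def ranked_score(relevance_score, post_count):
        """Sketch: add a popularity term so multi-million-post tags beat rare ones."""
        return relevance_score + math.log10(max(post_count, 1))

    print(ranked_score(2.0, 6_000_000))  # 1girl-like tag: 2.0 + ~6.78
    print(ranked_score(2.0, 12))         # obscure tag:    2.0 + ~1.08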
Will Miao
ef38bda04f docs: remove redundant example metadata files (#856)
- Delete examples/metadata/ directory and all example files
  - Real metadata.json files in model roots are better examples
  - Examples were artificial and could become outdated
  - Maintenance burden outweighs benefit

- Remove 'Complete Examples' section from docs/metadata-json-schema.md
- Remove reference to example files in 'See Also' section

Rationale:
Users have access to real-world metadata.json files in their actual
model directories, which contain complete Civitai API responses with
authentic data structures (images arrays with prompts, files with hashes,
creator information, etc.). These are more valuable than simplified
artificial examples.
2026-03-16 09:41:58 +08:00
Will Miao
58713ea6e0 fix(top-menu): use dynamic imports to eliminate deprecation warnings
- Replace static imports of deprecated ComfyButton and ComfyButtonGroup with dynamic imports
- Only loads legacy API files when frontend version < 1.33.9 (backward compatibility path)
- Frontend >= 1.33.9 users no longer see deprecation warnings since legacy code is never loaded
- Preserves full backward compatibility for older ComfyUI frontend versions
- All existing tests pass (159 JS + 65 Vue tests)
2026-03-16 09:41:58 +08:00
Will Miao
8b91920058 docs: add comprehensive metadata.json schema documentation (#856)
- Create docs/metadata-json-schema.md with complete field reference
  - All base fields for LoRA, Checkpoint, and Embedding models
  - Complete civitai object structure with Used vs Stored field classification
  - Model-level fields (allowCommercialUse, allowDerivatives, etc.)
  - Creator fields (username, image)
  - customImages structure with actual field names and types
  - Field behavior categories (Auto-Updated, Set Once, User-Editable)

- Add .specs/metadata.schema.json for programmatic validation
  - JSON Schema draft-07 format
  - oneOf schemas for each model type
  - Definitions for civitaiObject and usageTips

- Add example metadata files for each model type
  - lora-civitai.json: LoRA with full Civitai data
  - lora-custom.json: User-defined LoRA with trigger words
  - lora-no-triggerwords.json: LoRA without trigger words
  - checkpoint-civitai.json: Checkpoint from Civitai
  - embedding-custom.json: Custom embedding

Key clarifications:
  - modified: Import timestamp (Set Once, never changes after import)
  - size: File size at import time (Set Once)
  - base_model: Optional with actual values (SDXL 1.0, Flux.1 D, etc.)
  - model_type: Used in metadata.json (not sub_type which is internal)
  - allowCommercialUse: ["Image", "Video", "RentCivit", "Rent"]
  - civitai.files/images: Marked as Used by Lora Manager
  - User-editable fields clearly documented (model_name, tags, etc.)
2026-03-16 09:41:58 +08:00
Will Miao
ee466113d5 feat: implement batch import recipe functionality (frontend + backend fixes)
Backend fixes:
- Add missing API route for /api/lm/recipes/batch-import/progress (GET)
- Add missing API route for /api/lm/recipes/batch-import/directory (POST)
- Add missing API route for /api/lm/recipes/browse-directory (POST)
- Register WebSocket endpoint for batch import progress
- Fix skip_no_metadata default value (True -> False) to allow no-LoRA imports
- Add items array to BatchImportProgress.to_dict() for detailed results

Frontend implementation:
- Create BatchImportManager.js with complete batch import workflow
- Add directory browser UI for selecting folders
- Add batch import modal with URL list and directory input modes
- Implement real-time progress tracking (WebSocket + HTTP polling)
- Add results summary with success/failed/skipped statistics
- Add expandable details view showing individual item status
- Auto-refresh recipe list after import completion

UI improvements:
- Add spinner animation for importing status
- Simplify results summary UI to match progress stats styling
- Fix current item text alignment
- Fix dark theme styling for directory browser button
- Fix batch import button styling consistency

Translations:
- Add batch import related i18n keys to all locale files
- Run sync_translation_keys.py to sync all translations

Fixes:
- Batch import now allows images without LoRAs (matches single import behavior)
- Progress endpoint now returns complete items array with status details
- Results view correctly displays skipped items with error messages
2026-03-16 09:41:58 +08:00
Will Miao
f86651652c feat(batch-import): implement backend batch import service with adaptive concurrency
- Add BatchImportService with concurrent execution using asyncio.gather
- Implement AdaptiveConcurrencyController with dynamic adjustment
- Add input validation for URLs and local paths
- Support duplicate detection via skip_duplicates parameter
- Add WebSocket progress broadcasting for real-time updates
- Create comprehensive unit tests for batch import functionality
- Update API handlers and route registrations
- Add i18n translation keys for batch import UI
2026-03-16 09:41:58 +08:00
Will Miao
c89d4dae85 fix(extra-paths): support trigger words for LoRAs in extra folder paths, fixes #860
- Update get_lora_info() to check both loras_roots and extra_loras_roots
- Add fallback logic to return trigger words even if path not in recognized roots
- Ensure Trigger Word Toggle node displays trigger words for LoRAs from extra folder paths

Fixes issue where LoRAs added from extra folder paths would not show their trigger words in connected Trigger Word Toggle nodes.
2026-03-16 09:38:21 +08:00
pixelpaws
55a18d401b Merge pull request #858 from botchedchuckle/patch-1
Fix: Escape HTML in Prompt/NegativePrompt for MetadataPanel
2026-03-14 14:43:46 +08:00
botchedchuckle
7570936c75 Fix: Escape HTML in Prompt/NegativePrompt for MetadataPanel
* Fixed a bug where `prompt` and `negativePrompt` were both being
  added directly to HTML without escaping them. Since prompts may
  contain HTML-like characters (e.g. `<lora:something:0.75>`),
  skipping the escaping caused some tags to disappear from the
  metadata views for example images using those characters.
2026-03-13 01:29:04 -07:00
Will Miao
4fcf641d57 fix(bulk-context-menu): escape special characters in data-filepath selector to support double quotes in filenames (#845) 2026-03-12 08:49:10 +08:00
Will Miao
5c29e26c4e fix(top-menu): add backward compatibility for actionBarButtons API (#853)
- Implement version detection using __COMFYUI_FRONTEND_VERSION__ and /system_stats API
- Add version parsing and comparison utilities
- Dynamically register extension based on frontend version
- Use actionBarButtons API for frontend >= 1.33.9
- Fallback to legacy ComfyButton approach for older versions
- Add comprehensive version detection tests
2026-03-12 07:41:29 +08:00
Will Miao
ee765a6d22 fix(sidebar): escape folder names and paths to support double quotes
- Import and use escapeHtml and escapeAttribute in SidebarManager.js
- Escape data-path and title attributes in folder tree and breadcrumbs
- Use CSS.escape() for attribute selectors in updateTreeSelection
- Fixes issue #843 where folders with double quotes broke navigation
2026-03-11 23:33:11 +08:00
Will Miao
c02f603ed2 fix(autocomplete): add wheel event handler for canvas zoom support
Add @wheel event listener to AutocompleteTextWidget textarea to enable canvas zoom when textarea has no scrollbar.

The onWheel handler:
- Forwards pinch-to-zoom (ctrl+wheel) to canvas
- Passes horizontal scroll to canvas
- When textarea has vertical scrollbar: lets textarea scroll
- When textarea has NO scrollbar: forwards to canvas for zoom

Behavior now matches ComfyUI built-in multiline widget.

Fixes #850
2026-03-11 20:58:01 +08:00
Will Miao
ee84b30023 Fix node selector z-index issue in recipe modal
Change node-selector z-index from 1000 to var(--z-overlay) (2000)
to ensure the model selector UI appears above the recipe modal
when sending checkpoints to workflow with multiple targets.
2026-03-09 19:29:13 +08:00
Will Miao
97979d9e7c fix(send-to-workflow): strip file extension before searching relative paths
Backend _relative_path_matches_tokens() removes extensions from paths
before matching (commit 43f6bfab). This fix ensures frontend also
removes extensions from search terms to avoid matching failures.

Fixes issue where send model to workflow would receive absolute
paths instead of relative paths because the API returned empty
results when searching with file extension.
2026-03-09 15:49:37 +08:00
Will Miao
cda271890a feat(workflow-template): add new tab template workflow with auto-zoom
- Add GET /api/lm/example-workflows endpoint to list available templates
- Add GET /api/lm/example-workflows/{filename} to retrieve specific workflow
- Add 'New Tab Template Workflow' setting in LoRA Manager settings
- Automatically apply 80% zoom level when loading template workflows
- Override workflow's saved view settings to prevent visual zoom flicker

The feature allows users to select a template workflow from example_workflows/
directory to load when creating new workflow tabs, with a hardcoded 0.8 zoom
level for better initial view experience.
2026-03-08 21:03:14 +08:00
Will Miao
2fbe6c8843 fix(autocomplete): fix dropdown width calculation bug
Temporarily remove width constraints when measuring content to prevent
scrollWidth from being limited by narrow container. This fixes the issue
where dropdown width was incorrectly calculated as ~120px.

Also update test to match maxItems default value (100).
2026-03-07 23:23:26 +08:00
Will Miao
4fb07370dd fix(tests): add offset parameter to MockTagFTSIndex.search()
Add missing offset parameter to MockTagFTSIndex to support
pagination changes from commit a802a89.

- Update search() signature to include offset=0
- Implement pagination logic with offset/limit slicing
2026-03-07 23:10:00 +08:00
Will Miao
43f6bfab36 fix(autocomplete): strip file extensions from model names in search suggestions
Remove .safetensors/.ckpt/.pt/.bin extensions from model names in autocomplete
suggestions to improve UX and search relevance:

Frontend (web/comfyui/autocomplete.js):
- Add _getDisplayText() helper to strip extensions from model paths
- Update _matchItem() to match against filename without extension
- Update render() and createItemElement() to display clean names

Backend (py/services/base_model_service.py):
- Add _remove_model_extension() helper method
- Update _relative_path_matches_tokens() to ignore extensions in matching
- Update _relative_path_sort_key() to sort based on names without extensions

Tests (tests/services/test_relative_path_search.py):
- Add tests to verify 's' and 'safe' queries don't match all .safetensors files

Fixes issue where typing 's' would match all .safetensors files and cluttered
suggestions with redundant extension names.
2026-03-07 23:07:10 +08:00
Will Miao
a802a89ff9 feat(autocomplete): implement virtual scrolling and pagination
- Add virtual scrolling with configurable visible items (default: 15)
- Implement pagination with offset/limit for backend APIs
- Support loading more items on scroll
- Fix width calculation for suggestions dropdown
- Update backend services to support offset parameter

Files modified:
- web/comfyui/autocomplete.js (virtual scroll, pagination)
- py/services/base_model_service.py (offset support)
- py/services/custom_words_service.py (offset support)
- py/services/tag_fts_index.py (offset support)
- py/routes/handlers/model_handlers.py (offset param)
- py/routes/handlers/misc_handlers.py (offset param)
2026-03-07 22:17:26 +08:00
Will Miao
343dd91e4b feat(ui): improve clear button UX in autocomplete text widget
Move clear button from top-right to bottom-right to avoid obscuring
text content. Add hover visibility for cleaner UI. Reserve bottom
padding in textarea for button placement.
2026-03-07 21:09:59 +08:00
Will Miao
3756f88368 feat(autocomplete): improve multi-word tag search with query normalization
Implement search query variation generation to improve matching for multi-word tags:
- Generate multiple query forms: original, underscore (spaces->_), no-space, last token
- Execute up to 4 parallel queries with result merging and deduplication
- Add smart matching with symbol-insensitive comparison (blue hair matches blue_hair)
- Sort results with exact matches prioritized over partial matches

This allows users to type natural language queries like 'looking to the side' and
find tags like 'Looking_to_the_side' while maintaining backward compatibility
with continuous typing workflows.
2026-03-07 20:24:35 +08:00
Will Miao
acc625ead3 feat(recipes): add sync changes dropdown menu for recipe refresh
- Add syncChanges() function to recipeApi.js for quick refresh without cache rebuild
- Implement dropdown menu UI in recipes page with quick refresh and full rebuild options
- Add initDropdowns() method to RecipeManager for dropdown interaction handling
- Update AGENTS.md with more precise instruction about running sync_translation_keys.py
- Integrate sync changes functionality as default refresh behavior
2026-03-04 20:31:58 +08:00
Will Miao
f402505f97 i18n: complete TODO translations in locale files
- Add missing translations for modelTypes, recipe refresh, and sync notifications
- Translate for all supported languages (zh-CN, zh-TW, ja, ko, fr, de, es, ru, he)
- Run sync_translation_keys.py to ensure key consistency
2026-03-04 20:27:21 +08:00
Will Miao
4d8113464c perf(recipe_scanner): eliminate event loop blocking during cache rebuild
Refactor force_refresh path to use thread pool execution instead of blocking
the event loop shared with ComfyUI. Key changes:

- Fix 1: Route force_refresh through _initialize_recipe_cache_sync() in thread pool
- Fix 2: Add GIL release points (time.sleep(0)) every 100 files in sync loops
- Fix 3: Move RecipeCache.resort() to thread pool via run_in_executor
- Fix 4: Persist cache automatically after force_refresh
- Fix 5: Increase yield frequency in _enrich_cache_metadata (every recipe)

This eliminates the ~5 minute freeze when rebuilding 30K recipe cache.

Fixes performance issue where ComfyUI became unresponsive during recipe
scanning due to shared Python event loop blocking.
2026-03-04 15:10:46 +08:00
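A hedged sketch of the offloading pattern from the commit above: run the blocking rebuild in a thread pool via run_in_executor and yield the GIL periodically inside the synchronous loop. Names are illustrative, not the scanner's real methods.

    import asyncio
    import time

    def rebuild_cache_sync(recipe_files):
        """Sketch of the synchronous rebuild loop running off the event loop."""
        processed = 0
        for i, path in enumerate(recipe_files):
            processed += 1                 # stand-in for real per-file parsing work
            if i % 100 == 0:
                time.sleep(0)              # release the GIL so other threads can run
        return processed

    async def force_refresh(recipe_files):
        loop = asyncio.get_running_loop()
        # The event loop shared with ComfyUI stays responsive while the pool thread works.
        return await loop.run_in_executor(None, rebuild_cache_sync, recipe_files)

    print(asyncio.run(force_refresh([f"recipe_{i}.json" for i in range(250)])))  # -> 250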
Will Miao
1ed503a6b5 docs: add lazy hash computation to v1.0.0 release notes 2026-03-04 07:41:19 +08:00
Will Miao
d67914e095 docs: update portable package download link to v1.0.0 2026-03-03 22:06:29 +08:00
Will Miao
2c810306fb feat: implement automated supporter recognition in README
- Add scripts/update_supporters.py to generate supporter list from JSON
- Set up GitHub Action to auto-update README.md on supporters.json change
- Update README.md with placeholders and personalized gratitude message
2026-03-03 21:52:08 +08:00
Will Miao
dd94c6b31a chore: add v1.0.0 release notes and update version in pyproject.toml 2026-03-03 21:19:50 +08:00
Will Miao
1a0edec712 feat: enhance supporters modal with auto-scrolling and visual improvements
- Add auto-scrolling functionality to supporters list with user interaction controls (pause on hover, manual scroll)
- Implement gradient overlays at top/bottom for credits-like appearance
- Style custom scrollbar with subtle hover effects for better UX
- Adjust padding and positioning to ensure all supporters remain visible during scroll
2026-03-03 21:18:12 +08:00
Will Miao
7ba9b998d3 fix(stats): resolve dashboard initialization race condition and test failure
- Refactor StatisticsManager to return promises from initializeVisualizations and initializeLists
- Update fetchAndRenderList to use the fetchData wrapper for consistent mocking
- Update statistics dashboard test to include mock data for paginated model-usage-list endpoint
2026-03-03 15:08:33 +08:00
Will Miao
8c5d5a8ca0 feat(stats): implement infinite scrolling and paginated model usage lists (fixes #812)
- Add get_model_usage_list API endpoint for paginated stats
- Replace static rendering with client-side infinite scroll logic
- Add scrollbars and max-height to model usage lists
2026-03-03 15:00:01 +08:00
Will Miao
672e4cff90 fix(move): reset manual folder selection when using default path (fixes #836) 2026-03-02 23:29:16 +08:00
Will Miao
c2716e3c39 fix(i18n): resolve missing translation keys and complete multi-language support
- Add missing keys 'common.cancel', 'common.confirm', and 'sidebar.dragDrop.noDragState' to en.json
- Synchronize all locale files using sync_translation_keys.py
- Complete translations for zh-CN, zh-TW, ja, ru, de, fr, es, ko, and he
- Implement sidebar drag-and-drop folder creation with visual feedback and input validation
- Optimize MoveManager to use resetAndReload for consistent UI state after moving models
- Fix recursive visibility check for root folder in MoveManager
2026-03-02 22:02:47 +08:00
Will Miao
b72cf7ba98 feat(showcase): optimize CivitAI media URLs for better performance
- Add CivitAI URL utility with optimization strategies for showcase and thumbnail modes
- Replace /original=true with /optimized=true for showcase videos to reduce bandwidth
- Remove redundant crossorigin and referrerpolicy attributes from video elements
- Use media type detection to apply appropriate optimization (image vs video)
- Integrate URL optimization into showcase rendering for improved loading times
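
A minimal sketch of the rewrite rule described above (the real utility is a frontend helper with more strategies; the example URL is made up):

```python
# Sketch only: prefer the optimized rendition over the original full-size media.
def optimize_showcase_url(url: str) -> str:
    return url.replace("/original=true", "/optimized=true")

print(optimize_showcase_url("https://image.civitai.com/abc/original=true/clip.mp4"))
# https://image.civitai.com/abc/optimized=true/clip.mp4
```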
2026-03-02 14:05:44 +08:00
Will Miao
bde11b153f fix(preview): resolve CORS error when setting CivitAI remote media as preview
- Add new endpoint POST /api/lm/{prefix}/set-preview-from-url to handle
  remote image downloads server-side, avoiding CORS issues
- Use rewrite_preview_url() to download optimized smaller images (450px width)
- Use Downloader service for reliable downloads with retry logic and proxy support
- Update frontend to call new endpoint instead of fetching images in browser

fixes #837
2026-03-02 13:21:18 +08:00
Will Miao
8b924b1551 feat: add draggable attribute to recipe card elements
- Set draggable=true on recipe card div elements to enable drag-and-drop functionality
- This allows users to drag recipe cards for reordering or other interactions
2026-03-02 10:28:36 +08:00
Will Miao
ce08935b1e fix(showcase): support middle-click and left-click to expand showcase
Fix showcase expansion to work with both left-click and middle-click (drag scroll).

Problem: The scroll-indicator click events were only bound when the carousel
was in expanded state. Initial collapsed state meant no click handlers were
attached, so clicking did nothing.

Solution:
- Extract scroll-indicator event binding into separate bindScrollIndicatorEvents()
- Call bindScrollIndicatorEvents() immediately when showcase loads, regardless
  of collapsed state
- Separate handlers for left-click (click event) and middle-click (mousedown
  event) to avoid double-triggering

Changes:
- Add bindScrollIndicatorEvents() function for early event binding
- Use click event for left mouse button (button 0)
- Use mousedown event for middle mouse button (button 1)
- Update loadExampleImages() to bind events immediately
- Update initShowcaseContent() to use the new function
2026-03-02 08:44:15 +08:00
Will Miao
24fcbeaf76 Skip performance tests by default
- Add 'performance' marker to pytest.ini
- Add pytestmark to test_cache_performance.py
- Use -m 'not performance' by default in addopts
- Allows manual execution with 'pytest -m performance'
2026-02-28 21:46:20 +08:00
Will Miao
c9e5ea42cb Fix null-safety issues and apply code formatting
Bug fixes:
- Add null guards for base_models_roots/embeddings_roots in backup cleanup
- Fix null-safety initialization of extra_unet_roots

Formatting:
- Apply consistent code style across Python files
- Fix line wrapping, quote consistency, and trailing commas
- Add type ignore comments for dynamic/platform-specific code
2026-02-28 21:38:41 +08:00
Will Miao
b005961ee5 feat(ui): improve changelog styling and spacing
- Remove left padding from changelog content container
- Add consistent padding to all changelog items
- Simplify latest changelog item styling by removing redundant padding
- Maintain visual distinction for latest items with background and border
2026-02-28 20:47:44 +08:00
Will Miao
ce03bbbc4e fix(frontend): defer LoadingManager DOM initialization to resolve i18n warning
Delay DOM creation in the LoadingManager constructor until first use,
ensuring window.i18n is ready before translate() is called.

This eliminates the 'i18n not available' console warning during
module initialization while maintaining correct translations
for cancel button and loading status text.
2026-02-28 20:30:16 +08:00
Will Miao
78b55d10ba refactor: move supporters loading to separate API endpoint
- Add SupportersHandler in misc_handlers.py to serve /api/lm/supporters
- Register new endpoint in misc_route_registrar.py
- Remove supporters from page load template context in model_handlers.py
- Create supportersService.js for frontend data fetching
- Update Header.js to fetch supporters when support modal opens
- Modify support_modal.html to use client-side rendering

This change improves page load performance by loading supporters data
on-demand instead of during initial page render.
2026-02-28 20:14:20 +08:00
Will Miao
77a2215e62 Fix lazy hash calculation for checkpoints in extra paths
- Allow empty sha256 when hash_status is 'pending' in cache entry validator
- Add on-demand hash calculation during bulk metadata refresh for checkpoints
  with pending hash status
- Add comprehensive tests for both fixes

Fixes issue where checkpoints in extra paths were not visible in UI and
not processed during bulk metadata refresh due to empty sha256.
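
A hedged sketch of the relaxed validation rule (the actual cache entry validator lives elsewhere and checks more fields):

```python
# Sketch only: an empty sha256 is acceptable while lazy hashing is still pending.
def sha256_field_is_valid(entry: dict) -> bool:
    sha256 = entry.get("sha256", "")
    if not sha256:
        return entry.get("hash_status") == "pending"
    return len(sha256) == 64

print(sha256_field_is_valid({"sha256": "", "hash_status": "pending"}))    # True
print(sha256_field_is_valid({"sha256": "", "hash_status": "completed"}))  # False
```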
2026-02-27 19:19:16 +08:00
pixelpaws
31901f1f0e Merge pull request #829 from willmiao/feature/lazy-hash-checkpoints
feat: lazy hash calculation for checkpoints
2026-02-27 11:02:39 +08:00
Will Miao
12a789ef96 fix(extra-folder-paths): fix extra folder paths support for checkpoint and unet roots
- Fix config.py: save and restore main paths when processing extra folder paths to prevent
  _prepare_checkpoint_paths from overwriting checkpoints_roots and unet_roots
- Fix lora_manager.py: apply library settings during initialization to load extra folder paths
  in ComfyUI plugin mode
- Fix checkpoint_routes.py: merge checkpoints/unet roots with extra paths in API endpoints
- Add logging for extra folder paths

Fixes issue where extra folder paths were not recognized for checkpoints and unet models.
2026-02-27 10:37:15 +08:00
Will Miao
d50bbe71c2 fix(extra-folder-paths): fix extra folder paths support for checkpoint and unet roots
- Fix config.py: save and restore main paths when processing extra folder paths to prevent
  _prepare_checkpoint_paths from overwriting checkpoints_roots and unet_roots
- Fix lora_manager.py: apply library settings during initialization to load extra folder paths
  in ComfyUI plugin mode
- Fix checkpoint_routes.py: merge checkpoints/unet roots with extra paths in API endpoints
- Add logging for extra folder paths

Fixes issue where extra folder paths were not recognized for checkpoints and unet models.
2026-02-27 10:27:29 +08:00
343 changed files with 55988 additions and 6061 deletions

View File

@@ -0,0 +1,69 @@
---
name: lora-manager-runtime-context
description: Inspect ComfyUI LoRA Manager runtime configuration and local diagnostic state. Use when debugging LoRA Manager issues that require locating or reading settings.json, active library paths, model metadata JSON sidecars, recipe metadata JSON files, example image folders, SQLite caches, symlink maps, download history, aria2 state, or other cache files under the LoRA Manager user config directory.
---
# LoRA Manager Runtime Context
## Core Rules
- Treat runtime state as local user data. Prefer read-only inspection unless the user explicitly asks for mutation.
- Never print secret-like settings values. Redact keys containing `key`, `token`, `secret`, `password`, `auth`, or `credential`, including `civitai_api_key`.
- Resolve paths from the runtime configuration before guessing. In this environment the settings file is normally `/home/miao/.config/ComfyUI-LoRA-Manager/settings.json`, but portable settings can override this through the repository `settings.json`.
- Use the active library when selecting per-library caches and paths. Read `active_library` from settings; fall back to `default` if missing.
- Normalize and expand `~` before comparing paths. Symlinks are common in this repo.
## Quick Start
Use the bundled helper for a safe first pass:
```bash
python .agents/skills/lora-manager-runtime-context/scripts/inspect_runtime_context.py summary
python .agents/skills/lora-manager-runtime-context/scripts/inspect_runtime_context.py caches
```
The script redacts sensitive settings, opens SQLite databases read-only, and reports inaccessible or locked databases as warnings.
For focused checks:
```bash
python .agents/skills/lora-manager-runtime-context/scripts/inspect_runtime_context.py recipes
python .agents/skills/lora-manager-runtime-context/scripts/inspect_runtime_context.py model --path /path/to/model.safetensors
python .agents/skills/lora-manager-runtime-context/scripts/inspect_runtime_context.py sqlite --db /path/to/cache.sqlite --limit 3
```
## Runtime Path Rules
- Settings directory: use `py/utils/settings_paths.py`. Default platform path is `platformdirs.user_config_dir("ComfyUI-LoRA-Manager", appauthor=False)`.
- Settings file: `<settings_dir>/settings.json`.
- Cache root: `<settings_dir>/cache`.
- Canonical cache files:
- Model cache: `cache/model/<active_library>.sqlite`.
- Recipe cache: `cache/recipe/<active_library>.sqlite`.
- Model update cache: `cache/model_update/<active_library>.sqlite`.
- Recipe FTS: `cache/fts/recipe_fts.sqlite`.
- Tag FTS: `cache/fts/tag_fts.sqlite`.
- Symlink map: `cache/symlink/symlink_map.json`.
- Download history: `cache/download_history/downloaded_versions.sqlite`.
- aria2 state: `cache/aria2/downloads.json`.
- Legacy cache locations may exist; prefer canonical paths unless diagnosing migrations.
## Data Location Rules
- Model roots come from `settings.folder_paths` and the active library payload under `settings.libraries[active_library]`.
- Model metadata JSON sidecars live next to the model file as `<model basename>.metadata.json`.
- Recipes root is `settings.recipes_path` when it is a non-empty string. If empty, use the first configured LoRA root plus `/recipes`.
- Recipe JSON files are named `*.recipe.json` under the recipes root and may be nested in folders.
- Example image root is `settings.example_images_path`.
- If multiple libraries are configured, example images are stored under `<example_images_path>/<sanitized_library>/<sha256>/`; otherwise they are under `<example_images_path>/<sha256>/`.
## Useful Cache Tables
- Model cache: `models`, `model_tags`, `hash_index`, `excluded_models`.
- Recipe cache: `recipes`, `cache_metadata`.
- Model update cache: `model_update_status`, `model_update_versions`.
- Tag FTS cache: `tags`, `fts_metadata`, plus FTS internal tables.
- Recipe FTS cache: `recipe_rowid`, `fts_metadata`, plus FTS internal tables.
- Download history: `downloaded_model_versions`.
Prefer querying only counts, schema, and a few sample rows unless the user asks for full output.
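
A minimal read-only peek that follows this guidance, assuming the canonical model cache path and the `models` table listed above (adjust the settings directory and library name for the environment at hand):

```python
# Sketch only: count rows and list columns without dumping full table contents.
import sqlite3
from pathlib import Path

db = Path.home() / ".config/ComfyUI-LoRA-Manager/cache/model/default.sqlite"
conn = sqlite3.connect(f"file:{db}?mode=ro", uri=True)  # read-only, as above
try:
    count = conn.execute("SELECT COUNT(*) FROM models").fetchone()[0]
    columns = [row[1] for row in conn.execute("PRAGMA table_info(models)")]
    print(count, columns)
finally:
    conn.close()
```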

View File

@@ -0,0 +1,4 @@
interface:
  display_name: "LoRA Manager Runtime Context"
  short_description: "Inspect LoRA Manager runtime state"
  default_prompt: "Use $lora-manager-runtime-context to inspect LoRA Manager settings, metadata paths, and caches for debugging."

View File

@@ -0,0 +1,381 @@
#!/usr/bin/env python3
from __future__ import annotations
import argparse
import json
import os
import re
import shutil
import sqlite3
import sys
import tempfile
from pathlib import Path
from typing import Any
SECRET_PATTERN = re.compile(r"(key|token|secret|password|auth|credential)", re.IGNORECASE)
APP_NAME = "ComfyUI-LoRA-Manager"
CACHE_SQLITE = {
"model": ("model", "{library}.sqlite"),
"recipe": ("recipe", "{library}.sqlite"),
"model_update": ("model_update", "{library}.sqlite"),
"recipe_fts": ("fts", "recipe_fts.sqlite"),
"tag_fts": ("fts", "tag_fts.sqlite"),
"download_history": ("download_history", "downloaded_versions.sqlite"),
}
CACHE_JSON = {
"symlink": ("symlink", "symlink_map.json"),
"aria2": ("aria2", "downloads.json"),
}
def main() -> int:
parser = argparse.ArgumentParser(description="Inspect LoRA Manager runtime state read-only.")
subparsers = parser.add_subparsers(dest="command", required=True)
subparsers.add_parser("summary", help="Print redacted settings and resolved paths.")
subparsers.add_parser("caches", help="Print cache paths and SQLite table summaries.")
subparsers.add_parser("recipes", help="Print resolved recipes root and recipe JSON count.")
model_parser = subparsers.add_parser("model", help="Inspect a model metadata sidecar path.")
model_parser.add_argument("--path", required=True, help="Path to a model file or metadata JSON file.")
sqlite_parser = subparsers.add_parser("sqlite", help="Inspect a SQLite database read-only.")
sqlite_parser.add_argument("--db", required=True, help="Path to the SQLite database.")
sqlite_parser.add_argument("--limit", type=int, default=3, help="Rows to sample from each user table.")
args = parser.parse_args()
context = build_context()
if args.command == "summary":
print_json(summary_payload(context))
elif args.command == "caches":
print_json(caches_payload(context))
elif args.command == "recipes":
print_json(recipes_payload(context))
elif args.command == "model":
print_json(model_payload(args.path))
elif args.command == "sqlite":
print_json(sqlite_payload(Path(args.db).expanduser(), args.limit))
return 0
def build_context() -> dict[str, Any]:
settings_path = resolve_settings_path()
settings = load_json(settings_path)
settings_dir = settings_path.parent
active_library = settings.get("active_library") or "default"
safe_library = sanitize_library_name(str(active_library))
cache_root = settings_dir / "cache"
return {
"settings_path": str(settings_path),
"settings_dir": str(settings_dir),
"settings": settings,
"active_library": active_library,
"safe_library": safe_library,
"cache_root": str(cache_root),
"cache_paths": resolve_cache_paths(cache_root, safe_library),
}
def resolve_settings_path() -> Path:
repo_root = find_repo_root()
portable = repo_root / "settings.json"
if portable.exists():
payload = load_json(portable)
if isinstance(payload, dict) and payload.get("use_portable_settings") is True:
return portable
config_home = os.environ.get("XDG_CONFIG_HOME")
if config_home:
return Path(config_home).expanduser() / APP_NAME / "settings.json"
return Path.home() / ".config" / APP_NAME / "settings.json"
def find_repo_root() -> Path:
current = Path(__file__).resolve()
for parent in current.parents:
if (parent / "py").is_dir() and (parent / "standalone.py").exists():
return parent
return Path.cwd()
def load_json(path: Path) -> dict[str, Any]:
try:
with path.open("r", encoding="utf-8") as handle:
payload = json.load(handle)
except FileNotFoundError:
return {}
except json.JSONDecodeError as exc:
return {"_error": f"invalid JSON: {exc}"}
except OSError as exc:
return {"_error": f"unreadable: {exc}"}
return payload if isinstance(payload, dict) else {"_error": "JSON root is not an object"}
def resolve_cache_paths(cache_root: Path, library: str) -> dict[str, str]:
paths: dict[str, str] = {}
for name, (subdir, filename) in CACHE_SQLITE.items():
paths[name] = str(cache_root / subdir / filename.format(library=library))
for name, (subdir, filename) in CACHE_JSON.items():
paths[name] = str(cache_root / subdir / filename)
return paths
def summary_payload(context: dict[str, Any]) -> dict[str, Any]:
settings = context["settings"]
return {
"settings_path": context["settings_path"],
"settings_dir": context["settings_dir"],
"active_library": context["active_library"],
"settings": redact(settings),
"model_roots": model_roots(settings, context["active_library"]),
"recipes_root": str(resolve_recipes_root(settings, context["active_library"]) or ""),
"example_images": example_images_payload(settings, context["active_library"]),
"cache_root": context["cache_root"],
"cache_paths": context["cache_paths"],
}
def caches_payload(context: dict[str, Any]) -> dict[str, Any]:
caches: dict[str, Any] = {}
for name, path_string in context["cache_paths"].items():
path = Path(path_string)
item: dict[str, Any] = {
"path": str(path),
"exists": path.exists(),
"size": path.stat().st_size if path.exists() else None,
}
if path.suffix == ".sqlite":
item["sqlite"] = sqlite_payload(path, limit=0)
elif path.suffix == ".json":
item["json"] = json_file_summary(path)
caches[name] = item
return {"active_library": context["active_library"], "caches": caches}
def recipes_payload(context: dict[str, Any]) -> dict[str, Any]:
root = resolve_recipes_root(context["settings"], context["active_library"])
files: list[str] = []
if root and root.exists():
files = [str(path) for path in sorted(root.rglob("*.recipe.json"))[:20]]
return {
"recipes_root": str(root or ""),
"exists": bool(root and root.exists()),
"recipe_json_count": count_recipe_files(root),
"sample_recipe_json": files,
"recipe_cache": context["cache_paths"].get("recipe"),
}
def model_payload(raw_path: str) -> dict[str, Any]:
path = Path(raw_path).expanduser()
metadata_path = path if path.name.endswith(".metadata.json") else path.with_suffix(".metadata.json")
payload = {
"input_path": str(path),
"metadata_path": str(metadata_path),
"model_exists": path.exists(),
"metadata_exists": metadata_path.exists(),
}
if metadata_path.exists():
data = load_json(metadata_path)
payload["metadata_summary"] = redact(summarize_value(data))
return payload
def sqlite_payload(path: Path, limit: int = 3, allow_copy: bool = True) -> dict[str, Any]:
result: dict[str, Any] = {"path": str(path), "exists": path.exists(), "tables": {}}
if not path.exists():
return result
try:
conn = connect_sqlite_readonly(path)
except sqlite3.Error as exc:
result["error"] = str(exc)
return result
try:
table_rows = conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
).fetchall()
for table_row in table_rows:
table = table_row["name"]
columns = [
row["name"]
for row in conn.execute(f"PRAGMA table_info({quote_identifier(table)})").fetchall()
]
table_info: dict[str, Any] = {"columns": columns}
try:
table_info["count"] = conn.execute(
f"SELECT COUNT(*) FROM {quote_identifier(table)}"
).fetchone()[0]
except sqlite3.Error as exc:
table_info["count_error"] = str(exc)
if limit > 0 and columns and not is_internal_sqlite_table(table):
try:
rows = conn.execute(
f"SELECT * FROM {quote_identifier(table)} LIMIT ?", (limit,)
).fetchall()
table_info["sample"] = [redact(dict(row)) for row in rows]
except sqlite3.Error as exc:
table_info["sample_error"] = str(exc)
result["tables"][table] = table_info
except sqlite3.Error as exc:
fallback = sqlite_copy_payload(path, limit, str(exc)) if allow_copy else None
if fallback is not None:
result.update(fallback)
else:
result["error"] = str(exc)
finally:
conn.close()
return result
def connect_sqlite_readonly(path: Path) -> sqlite3.Connection:
errors: list[str] = []
for query in ("mode=ro", "mode=ro&immutable=1"):
try:
conn = sqlite3.connect(f"file:{path}?{query}", uri=True)
conn.row_factory = sqlite3.Row
return conn
except sqlite3.Error as exc:
errors.append(f"{query}: {exc}")
raise sqlite3.OperationalError("; ".join(errors))
def sqlite_copy_payload(path: Path, limit: int, original_error: str) -> dict[str, Any] | None:
try:
with tempfile.TemporaryDirectory(prefix="lm-cache-inspect-") as temp_dir:
copy_path = Path(temp_dir) / path.name
shutil.copy2(path, copy_path)
payload = sqlite_payload(copy_path, limit, allow_copy=False)
payload["path"] = str(path)
payload["inspected_copy"] = True
payload["original_error"] = original_error
return payload
except Exception:
return None
def json_file_summary(path: Path) -> dict[str, Any]:
if not path.exists():
return {"exists": False}
data = load_json(path)
return {"exists": True, "summary": redact(summarize_value(data))}
def model_roots(settings: dict[str, Any], active_library: str) -> dict[str, list[str]]:
roots: dict[str, list[str]] = {}
sources = [settings]
library = settings.get("libraries", {}).get(active_library)
if isinstance(library, dict):
sources.insert(0, library)
for source in sources:
folder_paths = source.get("folder_paths")
if isinstance(folder_paths, dict):
for key, value in folder_paths.items():
roots.setdefault(key, []).extend(normalize_path_list(value))
for default_key, folder_key in (
("default_lora_root", "loras"),
("default_checkpoint_root", "checkpoints"),
("default_embedding_root", "embeddings"),
("default_unet_root", "unet"),
):
value = settings.get(default_key)
if isinstance(value, str) and value:
roots.setdefault(folder_key, []).append(expand_path(value))
return {key: dedupe(values) for key, values in roots.items()}
def resolve_recipes_root(settings: dict[str, Any], active_library: str) -> Path | None:
recipes_path = settings.get("recipes_path")
library = settings.get("libraries", {}).get(active_library)
if isinstance(library, dict) and isinstance(library.get("recipes_path"), str):
recipes_path = library["recipes_path"] or recipes_path
if isinstance(recipes_path, str) and recipes_path.strip():
return Path(expand_path(recipes_path.strip()))
lora_roots = model_roots(settings, active_library).get("loras") or []
return Path(lora_roots[0]) / "recipes" if lora_roots else None
def example_images_payload(settings: dict[str, Any], active_library: str) -> dict[str, Any]:
root = settings.get("example_images_path") or ""
libraries = settings.get("libraries")
library_count = len(libraries) if isinstance(libraries, dict) else 0
scoped = library_count > 1
root_path = Path(expand_path(root)) if isinstance(root, str) and root else None
library_root = root_path / sanitize_library_name(active_library) if root_path and scoped else root_path
return {
"root": str(root_path or ""),
"uses_library_scoped_folders": scoped,
"library_root": str(library_root or ""),
}
def count_recipe_files(root: Path | None) -> int:
if not root or not root.exists():
return 0
return sum(1 for _ in root.rglob("*.recipe.json"))
def normalize_path_list(value: Any) -> list[str]:
if isinstance(value, str):
return [expand_path(value)] if value else []
if isinstance(value, list):
return [expand_path(item) for item in value if isinstance(item, str) and item]
return []
def expand_path(value: str) -> str:
return str(Path(value).expanduser().resolve(strict=False))
def sanitize_library_name(name: str) -> str:
safe = re.sub(r"[^A-Za-z0-9_.-]", "_", name or "default")
return safe or "default"
def dedupe(values: list[str]) -> list[str]:
seen: set[str] = set()
result: list[str] = []
for value in values:
if value not in seen:
result.append(value)
seen.add(value)
return result
def redact(value: Any, key: str = "") -> Any:
if key and SECRET_PATTERN.search(key):
return "<redacted>"
if isinstance(value, dict):
return {str(k): redact(v, str(k)) for k, v in value.items()}
if isinstance(value, list):
return [redact(item) for item in value]
return value
def summarize_value(value: Any) -> Any:
if isinstance(value, dict):
return {key: summarize_value(item) for key, item in value.items()}
if isinstance(value, list):
return {
"type": "array",
"length": len(value),
"first": summarize_value(value[0]) if value else None,
}
return value
def quote_identifier(identifier: str) -> str:
return '"' + identifier.replace('"', '""') + '"'
def is_internal_sqlite_table(table: str) -> bool:
return table.startswith("sqlite_") or table.endswith(("_data", "_idx", "_docsize", "_config", "_content"))
def print_json(payload: Any) -> None:
json.dump(payload, sys.stdout, indent=2, ensure_ascii=False)
sys.stdout.write("\n")
if __name__ == "__main__":
raise SystemExit(main())

View File

@@ -0,0 +1,153 @@
# Recipe Batch Import Feature Design
## Overview
Enable users to import multiple images as recipes in a single operation, rather than processing them individually. This feature addresses the need for efficient bulk recipe creation from existing image collections.
## Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ Frontend │
├─────────────────────────────────────────────────────────────────┤
│ BatchImportManager.js │
│ ├── InputCollector (collects URL lists / directory paths) │
│ ├── ConcurrencyController (adaptive concurrency control) │
│ ├── ProgressTracker (progress tracking) │
│ └── ResultAggregator (result aggregation) │
├─────────────────────────────────────────────────────────────────┤
│ batch_import_modal.html │
│ └── Batch import UI components │
├─────────────────────────────────────────────────────────────────┤
│ batch_import_progress.css │
│ └── Progress display styles │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Backend │
├─────────────────────────────────────────────────────────────────┤
│ py/routes/handlers/recipe_handlers.py │
│ ├── start_batch_import() - start a batch import │
│ ├── get_batch_import_progress() - query progress │
│ └── cancel_batch_import() - cancel the import │
├─────────────────────────────────────────────────────────────────┤
│ py/services/batch_import_service.py │
│ ├── Adaptive concurrent execution │
│ ├── Result aggregation │
│ └── WebSocket progress broadcasting │
└─────────────────────────────────────────────────────────────────┘
```
## API Endpoints
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/lm/recipes/batch-import/start` | POST | Start a batch import and return an operation_id |
| `/api/lm/recipes/batch-import/progress` | GET | Query progress status |
| `/api/lm/recipes/batch-import/cancel` | POST | Cancel the import |
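A hedged usage sketch: the endpoint paths come from the table above, but the request body keys other than `operation_id` (e.g. `input_type`, `urls`) and the host/port are illustrative assumptions.

```python
# Sketch only: start a batch import from a URL list, then poll its progress.
import requests

BASE = "http://127.0.0.1:8188"  # assumed ComfyUI host/port

start = requests.post(
    f"{BASE}/api/lm/recipes/batch-import/start",
    json={"input_type": "urls", "urls": ["https://example.com/image_001.png"]},
)
operation_id = start.json().get("operation_id")

progress = requests.get(
    f"{BASE}/api/lm/recipes/batch-import/progress",
    params={"operation_id": operation_id},
)
print(progress.json())
```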
## Backend Implementation Details
### BatchImportService
Location: `py/services/batch_import_service.py`
Key classes:
- `BatchImportItem`: Dataclass for individual import item
- `BatchImportProgress`: Dataclass for tracking progress
- `BatchImportService`: Main service class
Features:
- Adaptive concurrency control (adjusts based on success/failure rate)
- WebSocket progress broadcasting
- Graceful error handling (individual failures don't stop the batch)
- Result aggregation
### WebSocket Message Format
```json
{
  "type": "batch_import_progress",
  "operation_id": "xxx",
  "total": 50,
  "completed": 23,
  "success": 21,
  "failed": 2,
  "skipped": 0,
  "current_item": "image_024.png",
  "status": "running"
}
```
### Input Types
1. **URL List**: Array of URLs (http/https)
2. **Local Paths**: Array of local file paths
3. **Directory**: Path to directory with optional recursive flag
### Error Handling
- Invalid URLs/paths: Skip and record error
- Download failures: Record error, continue
- Metadata extraction failures: Mark as "no metadata"
- Duplicate detection: Option to skip duplicates
## Frontend Implementation Details (TODO)
### UI Components
1. **BatchImportModal**: Main modal with tabs for URLs/Directory input
2. **ProgressDisplay**: Real-time progress bar and status
3. **ResultsSummary**: Final results with success/failure breakdown
### Adaptive Concurrency Controller
```javascript
class AdaptiveConcurrencyController {
  constructor(options = {}) {
    this.minConcurrency = options.minConcurrency || 1;
    this.maxConcurrency = options.maxConcurrency || 5;
    this.currentConcurrency = options.initialConcurrency || 3;
  }
  adjustConcurrency(taskDuration, success) {
    if (success && taskDuration < 1000 && this.currentConcurrency < this.maxConcurrency) {
      this.currentConcurrency = Math.min(this.currentConcurrency + 1, this.maxConcurrency);
    }
    if (!success || taskDuration > 10000) {
      this.currentConcurrency = Math.max(this.currentConcurrency - 1, this.minConcurrency);
    }
    return this.currentConcurrency;
  }
}
```
## File Structure
```
Backend (implemented):
├── py/services/batch_import_service.py           # Backend service
├── py/routes/handlers/batch_import_handler.py    # API handlers (added to recipe_handlers.py)
├── tests/services/test_batch_import_service.py   # Unit tests
└── tests/routes/test_batch_import_routes.py      # API integration tests
Frontend (TODO):
├── static/js/managers/BatchImportManager.js      # Main manager
├── static/js/managers/batch/                     # Submodules
│   ├── ConcurrencyController.js                  # Concurrency control
│   ├── ProgressTracker.js                        # Progress tracking
│   └── ResultAggregator.js                       # Result aggregation
├── static/css/components/batch-import-modal.css  # Styles
└── templates/components/batch_import_modal.html  # Modal template
```
## Implementation Status
- [x] Backend BatchImportService
- [x] Backend API handlers
- [x] WebSocket progress broadcasting
- [x] Unit tests
- [x] Integration tests
- [ ] Frontend BatchImportManager
- [ ] Frontend UI components
- [ ] E2E tests

31
.github/workflows/update-supporters.yml
View File

@@ -0,0 +1,31 @@
name: Update Supporters in README
on:
  push:
    paths:
      - 'data/supporters.json'
    branches:
      - main
  workflow_dispatch: # Allow manual trigger
jobs:
  update-readme:
    runs-on: ubuntu-latest
    permissions:
      contents: write
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'
      - name: Update README
        run: python scripts/update_supporters.py
      - name: Commit and push changes
        uses: stefanzweifel/git-auto-commit-action@v5
        with:
          commit_message: "docs: auto-update supporters list in README"
          file_pattern: "README.md"

2
.gitignore
View File

@@ -14,6 +14,8 @@ model_cache/
# agent
.opencode/
.claude/
.codex
# Vue widgets development cache (but keep build output)
vue-widgets/node_modules/

464
.specs/metadata.schema.json
View File

@@ -0,0 +1,464 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://github.com/willmiao/ComfyUI-Lora-Manager/.specs/metadata.schema.json",
"title": "ComfyUI LoRa Manager Model Metadata",
"description": "Schema for .metadata.json sidecar files used by ComfyUI LoRa Manager",
"type": "object",
"oneOf": [
{
"title": "LoRA Model Metadata",
"properties": {
"file_name": {
"type": "string",
"description": "Filename without extension"
},
"model_name": {
"type": "string",
"description": "Display name of the model"
},
"file_path": {
"type": "string",
"description": "Full absolute path to the model file"
},
"size": {
"type": "integer",
"minimum": 0,
"description": "File size in bytes at time of import/download"
},
"modified": {
"type": "number",
"description": "Unix timestamp when model was imported/added (Date Added)"
},
"sha256": {
"type": "string",
"pattern": "^[a-f0-9]{64}$",
"description": "SHA256 hash of the model file (lowercase)"
},
"base_model": {
"type": "string",
"description": "Base model type (SD1.5, SD2.1, SDXL, SD3, Flux, Unknown, etc.)"
},
"preview_url": {
"type": "string",
"description": "Path to preview image file"
},
"preview_nsfw_level": {
"type": "integer",
"minimum": 0,
"default": 0,
"description": "NSFW level using bitmask values: 0 (none), 1 (PG), 2 (PG13), 4 (R), 8 (X), 16 (XXX), 32 (Blocked)"
},
"notes": {
"type": "string",
"default": "",
"description": "User-defined notes"
},
"from_civitai": {
"type": "boolean",
"default": true,
"description": "Whether the model originated from Civitai"
},
"civitai": {
"$ref": "#/definitions/civitaiObject"
},
"tags": {
"type": "array",
"items": {
"type": "string"
},
"default": [],
"description": "Model tags"
},
"modelDescription": {
"type": "string",
"default": "",
"description": "Full model description"
},
"civitai_deleted": {
"type": "boolean",
"default": false,
"description": "Whether the model was deleted from Civitai"
},
"favorite": {
"type": "boolean",
"default": false,
"description": "Whether the model is marked as favorite"
},
"exclude": {
"type": "boolean",
"default": false,
"description": "Whether to exclude from cache/scanning"
},
"db_checked": {
"type": "boolean",
"default": false,
"description": "Whether checked against archive database"
},
"skip_metadata_refresh": {
"type": "boolean",
"default": false,
"description": "Skip this model during bulk metadata refresh"
},
"metadata_source": {
"type": ["string", "null"],
"enum": ["civitai_api", "civarchive", "archive_db", null],
"default": null,
"description": "Last provider that supplied metadata"
},
"last_checked_at": {
"type": "number",
"default": 0,
"description": "Unix timestamp of last metadata check"
},
"hash_status": {
"type": "string",
"enum": ["pending", "calculating", "completed", "failed"],
"default": "completed",
"description": "Hash calculation status"
},
"usage_tips": {
"type": "string",
"default": "{}",
"description": "JSON string containing recommended usage parameters (LoRA only)"
}
},
"required": [
"file_name",
"model_name",
"file_path",
"size",
"modified",
"sha256",
"base_model"
],
"additionalProperties": true
},
{
"title": "Checkpoint Model Metadata",
"properties": {
"file_name": {
"type": "string"
},
"model_name": {
"type": "string"
},
"file_path": {
"type": "string"
},
"size": {
"type": "integer",
"minimum": 0
},
"modified": {
"type": "number"
},
"sha256": {
"type": "string",
"pattern": "^[a-f0-9]{64}$"
},
"base_model": {
"type": "string"
},
"preview_url": {
"type": "string"
},
"preview_nsfw_level": {
"type": "integer",
"minimum": 0,
"maximum": 3,
"default": 0
},
"notes": {
"type": "string",
"default": ""
},
"from_civitai": {
"type": "boolean",
"default": true
},
"civitai": {
"$ref": "#/definitions/civitaiObject"
},
"tags": {
"type": "array",
"items": {
"type": "string"
},
"default": []
},
"modelDescription": {
"type": "string",
"default": ""
},
"civitai_deleted": {
"type": "boolean",
"default": false
},
"favorite": {
"type": "boolean",
"default": false
},
"exclude": {
"type": "boolean",
"default": false
},
"db_checked": {
"type": "boolean",
"default": false
},
"skip_metadata_refresh": {
"type": "boolean",
"default": false
},
"metadata_source": {
"type": ["string", "null"],
"enum": ["civitai_api", "civarchive", "archive_db", null],
"default": null
},
"last_checked_at": {
"type": "number",
"default": 0
},
"hash_status": {
"type": "string",
"enum": ["pending", "calculating", "completed", "failed"],
"default": "completed"
},
"sub_type": {
"type": "string",
"default": "checkpoint",
"description": "Model sub-type (checkpoint, diffusion_model, etc.)"
}
},
"required": [
"file_name",
"model_name",
"file_path",
"size",
"modified",
"sha256",
"base_model"
],
"additionalProperties": true
},
{
"title": "Embedding Model Metadata",
"properties": {
"file_name": {
"type": "string"
},
"model_name": {
"type": "string"
},
"file_path": {
"type": "string"
},
"size": {
"type": "integer",
"minimum": 0
},
"modified": {
"type": "number"
},
"sha256": {
"type": "string",
"pattern": "^[a-f0-9]{64}$"
},
"base_model": {
"type": "string"
},
"preview_url": {
"type": "string"
},
"preview_nsfw_level": {
"type": "integer",
"minimum": 0,
"maximum": 3,
"default": 0
},
"notes": {
"type": "string",
"default": ""
},
"from_civitai": {
"type": "boolean",
"default": true
},
"civitai": {
"$ref": "#/definitions/civitaiObject"
},
"tags": {
"type": "array",
"items": {
"type": "string"
},
"default": []
},
"modelDescription": {
"type": "string",
"default": ""
},
"civitai_deleted": {
"type": "boolean",
"default": false
},
"favorite": {
"type": "boolean",
"default": false
},
"exclude": {
"type": "boolean",
"default": false
},
"db_checked": {
"type": "boolean",
"default": false
},
"skip_metadata_refresh": {
"type": "boolean",
"default": false
},
"metadata_source": {
"type": ["string", "null"],
"enum": ["civitai_api", "civarchive", "archive_db", null],
"default": null
},
"last_checked_at": {
"type": "number",
"default": 0
},
"hash_status": {
"type": "string",
"enum": ["pending", "calculating", "completed", "failed"],
"default": "completed"
},
"sub_type": {
"type": "string",
"default": "embedding",
"description": "Model sub-type"
}
},
"required": [
"file_name",
"model_name",
"file_path",
"size",
"modified",
"sha256",
"base_model"
],
"additionalProperties": true
}
],
"definitions": {
"civitaiObject": {
"type": "object",
"default": {},
"description": "Civitai/CivArchive API data and user-defined fields",
"properties": {
"id": {
"type": "integer",
"description": "Version ID from Civitai"
},
"modelId": {
"type": "integer",
"description": "Model ID from Civitai"
},
"name": {
"type": "string",
"description": "Version name"
},
"description": {
"type": "string",
"description": "Version description"
},
"baseModel": {
"type": "string",
"description": "Base model type from Civitai"
},
"type": {
"type": "string",
"description": "Model type (checkpoint, embedding, etc.)"
},
"trainedWords": {
"type": "array",
"items": {
"type": "string"
},
"description": "Trigger words for the model (from API or user-defined)"
},
"customImages": {
"type": "array",
"items": {
"type": "object"
},
"description": "Custom example images added by user"
},
"model": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"description": {
"type": "string"
},
"tags": {
"type": "array",
"items": {
"type": "string"
}
}
}
},
"files": {
"type": "array",
"items": {
"type": "object"
}
},
"images": {
"type": "array",
"items": {
"type": "object"
}
},
"creator": {
"type": "object"
}
},
"additionalProperties": true
},
"usageTips": {
"type": "object",
"description": "Structure for usage_tips JSON string (LoRA models)",
"properties": {
"strength_min": {
"type": "number",
"description": "Minimum recommended model strength"
},
"strength_max": {
"type": "number",
"description": "Maximum recommended model strength"
},
"strength_range": {
"type": "string",
"description": "Human-readable strength range"
},
"strength": {
"type": "number",
"description": "Single recommended strength value"
},
"clip_strength": {
"type": "number",
"description": "Recommended CLIP/embedding strength"
},
"clip_skip": {
"type": "integer",
"description": "Recommended CLIP skip value"
}
},
"additionalProperties": true
}
}
}

View File

@@ -135,9 +135,16 @@ npm run test:coverage # Generate coverage report
- ALWAYS use English for comments (per copilot-instructions.md)
- Dual mode: ComfyUI plugin (folder_paths) vs standalone (settings.json)
- Detection: `os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1"`
- Run `python scripts/sync_translation_keys.py` after UI string updates
- Run `python scripts/sync_translation_keys.py` after adding UI strings to `locales/en.json`
- Symlinks require normalized paths
## Git / Commit Messages
- Follow the style of recent repository commits when writing commit messages
- Prefer the repo's existing `feat(...)`, `fix(...)`, `chore:` style where applicable
- If the user has provided a GitHub issue link or issue ID for the task, mention that issue in the commit message, for example `(#871)`
- When unrelated local changes exist, stage and commit only the files relevant to the requested task
## Frontend UI Architecture
### 1. Standalone Web UI

122
README.md

File diff suppressed because one or more lines are too long

View File

@@ -1,10 +1,13 @@
try: # pragma: no cover - import fallback for pytest collection
from .py.lora_manager import LoraManager
from .py.nodes.lora_loader import LoraLoaderLM, LoraTextLoaderLM
from .py.nodes.checkpoint_loader import CheckpointLoaderLM
from .py.nodes.unet_loader import UNETLoaderLM
from .py.nodes.trigger_word_toggle import TriggerWordToggleLM
from .py.nodes.prompt import PromptLM
from .py.nodes.text import TextLM
from .py.nodes.lora_stacker import LoraStackerLM
from .py.nodes.lora_stack_combiner import LoraStackCombinerLM
from .py.nodes.save_image import SaveImageLM
from .py.nodes.debug_metadata import DebugMetadataLM
from .py.nodes.wanvideo_lora_select import WanVideoLoraSelectLM
@@ -27,16 +30,19 @@ except (
PromptLM = importlib.import_module("py.nodes.prompt").PromptLM
TextLM = importlib.import_module("py.nodes.text").TextLM
LoraManager = importlib.import_module("py.lora_manager").LoraManager
LoraLoaderLM = importlib.import_module(
"py.nodes.lora_loader"
).LoraLoaderLM
LoraTextLoaderLM = importlib.import_module(
"py.nodes.lora_loader"
).LoraTextLoaderLM
LoraLoaderLM = importlib.import_module("py.nodes.lora_loader").LoraLoaderLM
LoraTextLoaderLM = importlib.import_module("py.nodes.lora_loader").LoraTextLoaderLM
CheckpointLoaderLM = importlib.import_module(
"py.nodes.checkpoint_loader"
).CheckpointLoaderLM
UNETLoaderLM = importlib.import_module("py.nodes.unet_loader").UNETLoaderLM
TriggerWordToggleLM = importlib.import_module(
"py.nodes.trigger_word_toggle"
).TriggerWordToggleLM
LoraStackerLM = importlib.import_module("py.nodes.lora_stacker").LoraStackerLM
LoraStackCombinerLM = importlib.import_module(
"py.nodes.lora_stack_combiner"
).LoraStackCombinerLM
SaveImageLM = importlib.import_module("py.nodes.save_image").SaveImageLM
DebugMetadataLM = importlib.import_module("py.nodes.debug_metadata").DebugMetadataLM
WanVideoLoraSelectLM = importlib.import_module(
@@ -49,9 +55,7 @@ except (
LoraRandomizerLM = importlib.import_module(
"py.nodes.lora_randomizer"
).LoraRandomizerLM
LoraCyclerLM = importlib.import_module(
"py.nodes.lora_cycler"
).LoraCyclerLM
LoraCyclerLM = importlib.import_module("py.nodes.lora_cycler").LoraCyclerLM
init_metadata_collector = importlib.import_module("py.metadata_collector").init
NODE_CLASS_MAPPINGS = {
@@ -59,8 +63,11 @@ NODE_CLASS_MAPPINGS = {
TextLM.NAME: TextLM,
LoraLoaderLM.NAME: LoraLoaderLM,
LoraTextLoaderLM.NAME: LoraTextLoaderLM,
CheckpointLoaderLM.NAME: CheckpointLoaderLM,
UNETLoaderLM.NAME: UNETLoaderLM,
TriggerWordToggleLM.NAME: TriggerWordToggleLM,
LoraStackerLM.NAME: LoraStackerLM,
LoraStackCombinerLM.NAME: LoraStackCombinerLM,
SaveImageLM.NAME: SaveImageLM,
DebugMetadataLM.NAME: DebugMetadataLM,
WanVideoLoraSelectLM.NAME: WanVideoLoraSelectLM,

673
data/supporters.json
View File

@@ -0,0 +1,673 @@
{
"specialThanks": [
"dispenser",
"EbonEagle",
"DanielMagPizza",
"Scott R"
],
"allSupporters": [
"Insomnia Art Designs",
"megakirbs",
"Brennok",
"2018cfh",
"W+K+White",
"wackop",
"Takkan",
"Carl G.",
"$MetaSamsara",
"itismyelement",
"onesecondinosaur",
"stone9k",
"Rosenthal",
"Francisco Tatis",
"Andrew Wilson",
"Greybush",
"Gooohokrbe",
"Ricky Carter",
"JongWon Han",
"OldBones",
"VantAI",
"runte3221",
"FreelancerZ",
"Edgar Tejeda",
"Liam MacDougal",
"Fraser Cross",
"Polymorphic Indeterminate",
"Birdy",
"Marc Whiffen",
"Jorge Hussni",
"Kiba",
"Skalabananen",
"Reno Lam",
"sig",
"Christian Byrne",
"DM",
"Sen314",
"Estragon",
"J\\B/ 8r0wns0n",
"Snaggwort",
"Arlecchino Shion",
"Charles Blakemore",
"Rob Williams",
"ClockDaemon",
"KD",
"Omnidex",
"Tyler Trebuchon",
"Release Cabrakan",
"Tobi_Swagg",
"SG",
"carozzz",
"James Dooley",
"zenbound",
"Buzzard",
"jmack",
"Mark Corneglio",
"SarcasticHashtag",
"Cosmosis",
"iamresist",
"RedrockVP",
"Wolffen",
"FloPro4Sho",
"James Todd",
"Steven Pfeiffer",
"Tim",
"Lisster",
"Michael Wong",
"Illrigger",
"Tom Corrigan",
"JackieWang",
"fnkylove",
"Julian V",
"Steven Owens",
"Yushio",
"Vik71it",
"Echo",
"Lilleman",
"Robert Stacey",
"PM",
"Todd Keck",
"Mozzel",
"Gingko Biloba",
"Sterilized",
"BadassArabianMofo",
"Pascal Dahle",
"quarz",
"Greg",
"Penfore",
"JSST",
"esthe",
"lmsupporter",
"IamAyam",
"wfpearl",
"Baekdoosixt",
"Jonathan Ross",
"Jack B Nimble",
"Nazono_hito",
"Melville Parrish",
"daniel dove",
"Lustre",
"JW Sin",
"contrite831",
"Alex",
"bh",
"confiscated Zyra",
"Marlon Daniels",
"Starkselle",
"Aaron Bleuer",
"LacesOut!",
"greebles",
"Adam Shaw",
"Tee Gee",
"Anthony Rizzo",
"tarek helmi",
"M Postkasse",
"ASLPro3D",
"Jacob Hoehler",
"FinalyFree",
"Weasyl",
"Timmy",
"Johnny",
"Cory Paza",
"Tak",
"Gonzalo Andre Allendes Lopez",
"Zach Gonser",
"Big Red",
"whudunit",
"Luc Job",
"dl0901dm",
"Philip Hempel",
"corde",
"Nick Walker",
"lh qwe",
"Bishoujoker",
"conner",
"aai",
"Briton Heilbrun",
"Tori",
"wildnut",
"Princess Bright Eyes",
"AbstractAss",
"Felipe dos Santos",
"ViperC",
"jean jahren",
"Aleksander Wujczyk",
"AM Kuro",
"Markus",
"S Sang",
"Karl P.",
"Akira_HentAI",
"MagnaInsomnia",
"Gordon Cole",
"yuxz69",
"Douglas Gaspar",
"AlexDuKaNa",
"George",
"andrew.tappan",
"dw",
"N/A",
"The Spawn",
"Phil",
"graysock",
"Greenmoustache",
"zounic",
"fancypants",
"Digital",
"JaxMax",
"takyamtom",
"奚明 刘",
"Jwk0205",
"Bro Xie",
"준희 김",
"batblue",
"carey6409",
"Olive",
"太郎 ゲーム",
"Some Guy Named Barry",
"Max Marklund",
"Tomohiro Baba",
"David Ortega",
"AELOX",
"Nicfit23",
"Noora",
"wamekukyouzin",
"drum matthieu",
"Dogmaster",
"Matt Wenzel",
"Mattssn",
"Lex Song",
"John Saveas",
"Christopher Michel",
"Serge Bekenkamp",
"Jimmy Ledbetter",
"LeoZero",
"Antonio Pontes",
"ApathyJones",
"nahinahi9",
"Dustin Chen",
"dan",
"Yaboi",
"Mouthlessman",
"Steam Steam",
"Damon Cunliffe",
"CryptoTraderJK",
"Davaitamin",
"otaku fra",
"Ran C",
"tedcor",
"Fotek Design",
"Adam Taylor",
"Weird_With_A_Beard",
"MadSpin",
"Pozadine1",
"Qarob",
"AIGooner",
"inbijiburu",
"Luc",
"ProtonPrince",
"DiffDuck",
"elu3199",
"Nick “Loadstone” D",
"Hasturkun",
"Jon Sandman",
"Ubivis",
"CloudValley",
"thesoftwaredruid",
"wundershark",
"mr_dinosaur",
"linnfrey",
"Gamalonia",
"Vir",
"Pkrsky",
"Joboshy",
"Bohemian Corporal",
"Dan",
"Josef Lanzl",
"Seth Christensen",
"Griffin Dahlberg",
"Draven T",
"yer fey",
"Error_Rule34_Not_found",
"Gerald Welly",
"Roslynd",
"Geolog",
"jinxedx",
"Neco28",
"Aquatic Coffee",
"Dankin",
"ethanfel",
"Cristian Vazquez",
"Frank Nitty",
"Magic Noob",
"Focuschannel",
"DougPeterson",
"Jeff",
"Bruce",
"Kevin John Duck",
"Anthony Faxlandez",
"Kevin Christopher",
"Ouro Boros",
"Blackfish95",
"dd",
"Paul Kroll",
"MiraiKuriyamaSy",
"semicolon drainpipe",
"Thesharingbrother",
"Bas Imagineer",
"Pat Hen",
"John Statham",
"ResidentDeviant",
"Nihongasuki",
"JC",
"Prompt Pirate",
"uwutismxd",
"decoy",
"Tyrswood",
"Ray Wing",
"Ranzitho",
"Gus",
"地獄の禄",
"MJG",
"David LaVallee",
"ae",
"Tr4shP4nda",
"WRL_SPR",
"capn",
"Joseph",
"Mirko Katzula",
"dan",
"Piccio08",
"kumakichi",
"cppbel",
"starbugx",
"Moon Knight",
"몽타주",
"Kland",
"zenobeus",
"Jackthemind",
"ryoma",
"Stryker",
"raf8osz",
"ElitaSSJ4",
"blikkies",
"Chris",
"Brian M",
"Nerezza",
"sanborondon",
"Taylor Funk",
"aezin",
"Thought2Form",
"jcay015",
"Kevin Picco",
"Erik Lopez",
"Shock Shockor",
"Mateo Curić",
"Goldwaters",
"Zude",
"Eris3D",
"m",
"Pierce McBride",
"Joshua Gray",
"Kyler",
"Mikko Hemilä",
"aRtFuL_DodGeR",
"Jamie Ogletree",
"a _",
"James Coleman",
"CrimsonDX",
"Martial",
"battu",
"Emil Andersson",
"Chad Idk",
"DarkSunset",
"Billy Gladky",
"Yuji Kaneko",
"Probis",
"Dušan Ryban",
"ItsGeneralButtNaked",
"Jordan Shaw",
"Rops Alot",
"Sam",
"sjon kreutz",
"Nimess",
"SRDB",
"Ace Ventura",
"g unit",
"Youguang",
"Metryman55",
"andrewzpong",
"FrxzenSnxw",
"BossGame",
"lrdchs",
"momokai",
"Hailshem",
"kudari",
"Naomi Hale Danchi",
"dc7431",
"ken",
"Inversity",
"AIVORY3D",
"epicgamer0020690",
"Joshua Porrata",
"keemun",
"SuBu",
"RedPIXel",
"Kevinj",
"Wind",
"Nexus",
"Ramneek“Guy”Ashok",
"squid_actually",
"Nat_20",
"Edward Weeks",
"kyoumei",
"RadStorm04",
"JohnDoe42054",
"BillyHill",
"emyth",
"chriphost",
"KitKatM",
"socrasteeze",
"ResidentDeviant",
"gzmzmvp",
"Welkor",
"John Martin",
"Richard",
"Andrew",
"Robert Wegemund",
"Littlehuggy",
"moranqianlong",
"Gregory Kozhemiak",
"mrjuan",
"Brian Buie",
"Sadlip",
"Haru Yotu",
"Eric Whitney",
"Joey Callahan",
"Ivan Tadic",
"Mike Simone",
"Morgandel",
"Kyron Mahan",
"Matura Arbeit",
"Noah",
"Jacob McDaniel",
"X",
"Sloan Steddy",
"TBitz33",
"Anonym dkjglfleeoeldldldlkf",
"Temikus",
"Artokun",
"Michael Taylor",
"SendingRavens",
"Derek Baker",
"Michael Anthony Scott",
"Atilla Berke Pekduyar",
"Michael Docherty",
"Nathan",
"Decx _",
"Paul Hartsuyker",
"elitassj",
"Jacob Winter",
"Distortik",
"David",
"Meilo",
"Pen Bouryoung",
"四糸凜音",
"shinonomeiro",
"Snille",
"MaartenAlbers",
"khanh duy",
"xybrightsummer",
"jreedatchison",
"PhilW",
"Tree Tagger",
"Janik",
"Crocket",
"Cruel",
"MRBlack",
"Mitchell Robson",
"Kiyoe",
"humptynutz",
"michael.isaza",
"Kalnei",
"Whitepinetrader",
"OrganicArtifact",
"Scott",
"MudkipMedkitz",
"deanbrian",
"POPPIN",
"Alex Wortman",
"Cody",
"Raku",
"smart.edge5178",
"emadsultan",
"InformedViewz",
"CHKeeho80",
"Bubbafett",
"leaf",
"Menard",
"Skyfire83",
"Adam Rinehart",
"D",
"Pitpe11",
"TheD1rtyD03",
"moonpetal",
"SomeDude",
"g9p0o",
"nanana",
"TheHolySheep",
"Monte Won",
"SpringBootisTrash",
"carsten",
"ikok",
"Buecyb99",
"4IXplr0r3r",
"dfklsjfkljslfjd",
"hayden",
"ahoystan",
"Leland Saunders",
"Wolfe7D1",
"Ink Temptation",
"Bob Barker",
"edk",
"Kalli Core",
"Aeternyx",
"elleshar666",
"YOU SINWOO",
"ja s",
"Doug Mason",
"Kauffy",
"Jeremy Townsend",
"EpicElric",
"Sean voets",
"Owen Gwosdz",
"John J Linehan",
"Elliot E",
"Thomas Wanner",
"Theerat Jiramate",
"Edward Kennedy",
"Justin Blaylock",
"Devil Lude",
"Nick Kage",
"kevin stoddard",
"Jack Dole",
"Vane Holzer",
"psytrax",
"Ezokewn",
"hexxish",
"CptNeo",
"notedfakes",
"Maso",
"Eric Ketchum",
"NICHOLAS BAXLEY",
"Michael Scott",
"Kevin Wallace",
"Matheus Couto",
"Saya",
"ChicRic",
"mercur",
"J C",
"Ed Wang",
"Ryan Presley Ng",
"Wes Sims",
"Donor4115",
"Yves Poezevara",
"Teriak47",
"Just me",
"Raf Stahelin",
"Вячеслав Маринин",
"Lyavph",
"Filippo Ferrari",
"Cola Matthew",
"OniNoKen",
"Iain Wisely",
"Zertens",
"NOHOW",
"Apo",
"nekotxt",
"choowkee",
"Clusters",
"ibrahim",
"Highlandrise",
"philcoraz",
"mztn",
"ImagineerNL",
"MrAcrtosSursus",
"al300680",
"pixl",
"Robin",
"chahknoir",
"Marcus thronico",
"nd",
"keno94d",
"James Melzer",
"Bartleby",
"Renvertere",
"Rahuy",
"Hermann003",
"D",
"Foolish",
"RevyHiep",
"Captain_Swag",
"obkircher",
"gwyar",
"D",
"edgecase",
"Neoxena",
"mrmhalo",
"dg",
"Maarten Harms",
"Israel",
"Muratoraccio",
"SelfishMedic",
"Ginnie",
"adderleighn",
"EnragedAntelope",
"Alan+Cano",
"FeralOpticsAI",
"Pavlaki",
"generic404",
"Mateusz+Kosela",
"Doug+Rintoul",
"Noor",
"Yorunai",
"Bula",
"quantenmecha",
"abattoirblues",
"Jason+Nash",
"BillyBoy84",
"DarkRoast",
"zounik",
"letzte",
"Nasty+Hobbit",
"SgtFluffles",
"lrdchs2",
"Duk3+Rand0m",
"KUJYAKU",
"NathenChoi",
"Thomas+Reck",
"Larses",
"cocona",
"Coeur+de+cochon",
"David Schenck",
"han b",
"Nico",
"Banana Joe",
"_ G3n",
"Donovan Jenkins",
"JBsuede",
"Michael Eid",
"beersandbacon",
"Maximilian Pyko",
"Invis",
"Justin Houston",
"Time Valentine",
"james",
"OrochiNights",
"Michael Zhu",
"ACTUALLY_the_Real_Willem_Dafoe",
"gonzalo",
"Seraphy",
"Михал Михалыч",
"雨の心 落",
"Matt",
"AllTimeNoobie",
"jumpd",
"John C",
"Rim",
"Dismem",
"Frogmilk",
"SPJ",
"Xan Dionysus",
"Nathan lee",
"Mewtora",
"Middo",
"Forbidden Atelier",
"Bryan Rutkowski",
"Adictedtohumping",
"Towelie",
"Cyrus Fett",
"Jean-françois SEMA",
"Kurt",
"max blo",
"Xenon Xue",
"JackJohnnyJim",
"Edward Ten Eyck",
"Chase Kwon",
"Inyoshu",
"Goober719",
"Chad Barnes",
"James Ming",
"vanditking",
"kripitonga",
"Rizzi",
"nimin",
"OMAR LUCIANO",
"hannibal",
"Jo+Example",
"BrentBertram",
"eumelzocker",
"dxjaymz",
"L C",
"Dude"
],
"totalCount": 666
}

View File

@@ -0,0 +1,363 @@
# metadata.json Schema Documentation
This document defines the complete schema for `.metadata.json` files used by Lora Manager. These sidecar files store model metadata alongside model files (LoRA, Checkpoint, Embedding).
## Overview
- **File naming**: `<model_name>.metadata.json` (e.g., `my_lora.safetensors` → `my_lora.metadata.json`)
- **Format**: JSON with UTF-8 encoding
- **Purpose**: Store model metadata, tags, descriptions, preview images, and Civitai/CivArchive integration data
- **Extensibility**: Unknown fields are preserved via `_unknown_fields` mechanism for forward compatibility
---
## Base Fields (All Model Types)
These fields are present in all model metadata files.
| Field | Type | Required | Auto-Updated | Description |
|-------|------|----------|--------------|-------------|
| `file_name` | string | ✅ Yes | ✅ Yes | Filename without extension (e.g., `"my_lora"`) |
| `model_name` | string | ✅ Yes | ❌ No | Display name of the model. **Default**: `file_name` if no other source |
| `file_path` | string | ✅ Yes | ✅ Yes | Full absolute path to the model file (normalized with `/` separators) |
| `size` | integer | ✅ Yes | ❌ No | File size in bytes. **Set at**: Initial scan or download completion. Does not change thereafter. |
| `modified` | float | ✅ Yes | ❌ No | **Import timestamp** — Unix timestamp when the model was first imported/added to the system. Used for "Date Added" sorting. Does not change after initial creation. |
| `sha256` | string | ⚠️ Conditional | ✅ Yes | SHA256 hash of the model file (lowercase). **LoRA**: Required. **Checkpoint**: May be empty when `hash_status="pending"` (lazy hash calculation) |
| `base_model` | string | ❌ No | ❌ No | Base model type. **Examples**: `"SD 1.5"`, `"SDXL 1.0"`, `"SDXL Lightning"`, `"Flux.1 D"`, `"Flux.1 S"`, `"Flux.1 Krea"`, `"Illustrious"`, `"Pony"`, `"AuraFlow"`, `"Kolors"`, `"ZImageTurbo"`, `"Wan Video"`, etc. **Default**: `"Unknown"` or `""` |
| `preview_url` | string | ❌ No | ✅ Yes | Path to preview image file |
| `preview_nsfw_level` | integer | ❌ No | ❌ No | NSFW level using **bitmask values** from Civitai: `1` (PG), `2` (PG13), `4` (R), `8` (X), `16` (XXX), `32` (Blocked). **Default**: `0` (none) |
| `notes` | string | ❌ No | ❌ No | User-defined notes |
| `from_civitai` | boolean | ❌ No (default: `true`) | ❌ No | Whether the model originated from Civitai |
| `civitai` | object | ❌ No | ⚠️ Partial | Civitai/CivArchive API data and user-defined fields |
| `tags` | array[string] | ❌ No | ⚠️ Partial | Model tags (merged from API and user input) |
| `modelDescription` | string | ❌ No | ⚠️ Partial | Full model description (from API or user) |
| `civitai_deleted` | boolean | ❌ No (default: `false`) | ❌ No | Whether the model was deleted from Civitai |
| `favorite` | boolean | ❌ No (default: `false`) | ❌ No | Whether the model is marked as favorite |
| `exclude` | boolean | ❌ No (default: `false`) | ❌ No | Whether to exclude from cache/scanning. User can set from `false` to `true` (currently no UI to revert) |
| `db_checked` | boolean | ❌ No (default: `false`) | ❌ No | Whether checked against archive database |
| `skip_metadata_refresh` | boolean | ❌ No (default: `false`) | ❌ No | Skip this model during bulk metadata refresh |
| `metadata_source` | string\|null | ❌ No | ✅ Yes | Last provider that supplied metadata (see below) |
| `last_checked_at` | float | ❌ No (default: `0`) | ✅ Yes | Unix timestamp of last metadata check |
| `hash_status` | string | ❌ No (default: `"completed"`) | ✅ Yes | Hash calculation status: `"pending"`, `"calculating"`, `"completed"`, `"failed"` |
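For orientation, a minimal sidecar built from only the required base fields in the table above; every value is a placeholder rather than real model data:

```python
# Sketch only: emit a minimal metadata sidecar with the required base fields.
import json
import time

sidecar = {
    "file_name": "my_lora",                             # filename without extension
    "model_name": "my_lora",                            # defaults to file_name
    "file_path": "/path/to/loras/my_lora.safetensors",  # absolute, "/" separators
    "size": 0,                                          # bytes at import time
    "modified": time.time(),                            # import timestamp
    "sha256": "0" * 64,                                 # placeholder 64-char lowercase hash
    "base_model": "Unknown",
}
print(json.dumps(sidecar, indent=2))
```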
---
## Model-Specific Fields
### LoRA Models
LoRA models do not have a `model_type` field in metadata.json. The type is inferred from context or `civitai.type` (e.g., `"LoRA"`, `"LoCon"`, `"DoRA"`).
| Field | Type | Required | Auto-Updated | Description |
|-------|------|----------|--------------|-------------|
| `usage_tips` | string (JSON) | ❌ No (default: `"{}"`) | ❌ No | JSON string containing recommended usage parameters |
**`usage_tips` JSON structure:**
```json
{
  "strength_min": 0.3,
  "strength_max": 0.8,
  "strength_range": "0.3-0.8",
  "strength": 0.6,
  "clip_strength": 0.5,
  "clip_skip": 2
}
```
| Key | Type | Description |
|-----|------|-------------|
| `strength_min` | number | Minimum recommended model strength |
| `strength_max` | number | Maximum recommended model strength |
| `strength_range` | string | Human-readable strength range |
| `strength` | number | Single recommended strength value |
| `clip_strength` | number | Recommended CLIP/embedding strength |
| `clip_skip` | integer | Recommended CLIP skip value |
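Because `usage_tips` is stored as a JSON string rather than an object, consumers have to decode it before reading individual keys; a tolerant sketch (the helper name is illustrative):

```python
# Sketch only: decode the usage_tips JSON string, falling back to an empty dict.
import json

def read_usage_tips(metadata: dict) -> dict:
    raw = metadata.get("usage_tips") or "{}"
    try:
        tips = json.loads(raw)
    except json.JSONDecodeError:
        return {}
    return tips if isinstance(tips, dict) else {}

example = {"usage_tips": '{"strength": 0.6, "clip_skip": 2}'}
print(read_usage_tips(example).get("strength"))  # 0.6
```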
---
### Checkpoint Models
| Field | Type | Required | Auto-Updated | Description |
|-------|------|----------|--------------|-------------|
| `model_type` | string | ❌ No (default: `"checkpoint"`) | ❌ No | Model type: `"checkpoint"`, `"diffusion_model"` |
---
### Embedding Models
| Field | Type | Required | Auto-Updated | Description |
|-------|------|----------|--------------|-------------|
| `model_type` | string | ❌ No (default: `"embedding"`) | ❌ No | Model type: `"embedding"` |
---
## The `civitai` Field Structure
The `civitai` object stores the complete Civitai/CivArchive API response. Lora Manager preserves all fields from the API for future compatibility and extracts specific fields for use in the application.
### Version-Level Fields (Civitai API)
**Fields Used by Lora Manager:**
| Field | Type | Description |
|-------|------|-------------|
| `id` | integer | Version ID |
| `modelId` | integer | Parent model ID |
| `name` | string | Version name (e.g., `"v1.0"`, `"v2.0-pruned"`) |
| `nsfwLevel` | integer | NSFW level (bitmask: 1=PG, 2=PG13, 4=R, 8=X, 16=XXX, 32=Blocked) |
| `baseModel` | string | Base model (e.g., `"SDXL 1.0"`, `"Flux.1 D"`, `"Illustrious"`, `"Pony"`) |
| `trainedWords` | array[string] | **Trigger words** for the model |
| `type` | string | Model type (`"LoRA"`, `"Checkpoint"`, `"TextualInversion"`) |
| `earlyAccessEndsAt` | string\|null | Early access end date (used for update notifications) |
| `description` | string | Version description (HTML) |
| `model` | object | Parent model object (see Model-Level Fields below) |
| `creator` | object | Creator information (see Creator Fields below) |
| `files` | array[object] | File list with hashes, sizes, download URLs (used for metadata extraction) |
| `images` | array[object] | Image list with metadata, prompts, NSFW levels (used for preview/examples) |
**Fields Stored but Not Currently Used:**
| Field | Type | Description |
|-------|------|-------------|
| `createdAt` | string (ISO 8601) | Creation timestamp |
| `updatedAt` | string (ISO 8601) | Last update timestamp |
| `status` | string | Version status (e.g., `"Published"`, `"Draft"`) |
| `publishedAt` | string (ISO 8601) | Publication timestamp |
| `baseModelType` | string | Base model type (e.g., `"Standard"`, `"Inpaint"`, `"Refiner"`) |
| `earlyAccessConfig` | object | Early access configuration |
| `uploadType` | string | Upload type (`"Created"`, `"FineTuned"`, etc.) |
| `usageControl` | string | Usage control setting |
| `air` | string | AIR (AI Resource) identifier (URN format: `urn:air:sdxl:lora:civitai:122359@135867`) |
| `stats` | object | Download count, ratings, thumbs up count |
| `videos` | array[object] | Video list |
| `downloadUrl` | string | Direct download URL |
| `trainingStatus` | string\|null | Training status (for on-site training) |
| `trainingDetails` | object\|null | Training configuration |
### Model-Level Fields (`civitai.model.*`)
**Fields Used by Lora Manager:**
| Field | Type | Description |
|-------|------|-------------|
| `name` | string | Model name |
| `type` | string | Model type (`"LoRA"`, `"Checkpoint"`, `"TextualInversion"`) |
| `description` | string | Model description (HTML, used for `modelDescription`) |
| `tags` | array[string] | Model tags (used for `tags` field) |
| `allowNoCredit` | boolean | License: allow use without credit |
| `allowCommercialUse` | array[string] | License: allowed commercial uses. **Values**: `"Image"` (sell generated images), `"Video"` (sell generated videos), `"RentCivit"` (rent on Civitai), `"Rent"` (rent elsewhere) |
| `allowDerivatives` | boolean | License: allow derivatives |
| `allowDifferentLicense` | boolean | License: allow different license |
**Fields Stored but Not Currently Used:**
| Field | Type | Description |
|-------|------|-------------|
| `nsfw` | boolean | Model NSFW flag |
| `poi` | boolean | Person of Interest flag |
### Creator Fields (`civitai.creator.*`)
Both fields are used by Lora Manager:
| Field | Type | Description |
|-------|------|-------------|
| `username` | string | Creator username (used for author display and search) |
| `image` | string | Creator avatar URL (used for display) |
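Taken together, the version-, model-, and creator-level tables describe one nested object that can be read directly from the stored `civitai` field. A short illustrative sketch in Python (the helper name is hypothetical; `.get()` guards are used because any of these fields may be absent, especially for non-Civitai models):

```python
def summarize_civitai(civitai: dict) -> dict:
    """Pull the commonly used values out of a stored civitai object."""
    model = civitai.get("model", {}) or {}
    creator = civitai.get("creator", {}) or {}
    images = civitai.get("images", []) or []
    return {
        "version_id": civitai.get("id"),
        "version_name": civitai.get("name"),
        "base_model": civitai.get("baseModel"),
        "trigger_words": civitai.get("trainedWords", []),
        "model_name": model.get("name"),
        "tags": model.get("tags", []),
        "author": creator.get("username"),
        # The first image entry, if any, is a natural preview candidate.
        "first_image_url": images[0].get("url") if images else None,
    }
```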
### Model Type Field (Top-Level, Outside `civitai`)
| Field | Type | Values | Description |
|-------|------|--------|-------------|
| `model_type` | string | `"checkpoint"`, `"diffusion_model"`, `"embedding"` | Stored in metadata.json for Checkpoint and Embedding models. **Note**: LoRA models do not have this field; type is inferred from `civitai.type` or context. |
### User-Defined Fields (Within `civitai`)
For models not from Civitai or user-added data:
| Field | Type | Description |
|-------|------|-------------|
| `trainedWords` | array[string] | **Trigger words** — manually added by user |
| `customImages` | array[object] | Custom example images added by user |
### customImages Structure
Each custom image entry has the following structure:
```json
{
  "url": "",
  "id": "short_id",
  "nsfwLevel": 0,
  "width": 832,
  "height": 1216,
  "type": "image",
  "meta": {
    "prompt": "...",
    "negativePrompt": "...",
    "steps": 20,
    "cfgScale": 7,
    "seed": 123456
  },
  "hasMeta": true,
  "hasPositivePrompt": true
}
```
| Field | Type | Description |
|-------|------|-------------|
| `url` | string | Empty for local custom images |
| `id` | string | Short ID or filename |
| `nsfwLevel` | integer | NSFW level (bitmask) |
| `width` | integer | Image width in pixels |
| `height` | integer | Image height in pixels |
| `type` | string | `"image"` or `"video"` |
| `meta` | object\|null | Generation metadata (prompt, seed, etc.) extracted from image |
| `hasMeta` | boolean | Whether metadata is available |
| `hasPositivePrompt` | boolean | Whether a positive prompt is available |
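Note that `hasMeta` and `hasPositivePrompt` are derived from the `meta` object. A minimal sketch of building a new entry; the helper function and the short-ID scheme are illustrative assumptions, not LoRA Manager's internal helpers:

```python
import uuid

def make_custom_image_entry(width: int, height: int, meta: dict | None = None,
                            media_type: str = "image") -> dict:
    """Build a customImages entry following the documented structure."""
    meta = meta or {}
    return {
        "url": "",                   # empty for local custom images
        "id": uuid.uuid4().hex[:8],  # illustrative short ID
        "nsfwLevel": 0,
        "width": width,
        "height": height,
        "type": media_type,          # "image" or "video"
        "meta": meta or None,
        "hasMeta": bool(meta),
        "hasPositivePrompt": bool(meta.get("prompt")),
    }

entry = make_custom_image_entry(832, 1216, meta={"prompt": "example prompt", "seed": 12345})
```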
### Minimal Non-Civitai Example
```json
{
  "civitai": {
    "trainedWords": ["my_trigger_word"]
  }
}
```
### Non-Civitai Example Without Trigger Words
```json
{
  "civitai": {}
}
```
### Example: User-Added Custom Images
```json
{
  "civitai": {
    "trainedWords": ["custom_style"],
    "customImages": [
      {
        "url": "",
        "id": "example_1",
        "nsfwLevel": 0,
        "width": 832,
        "height": 1216,
        "type": "image",
        "meta": {
          "prompt": "example prompt",
          "seed": 12345
        },
        "hasMeta": true,
        "hasPositivePrompt": true
      }
    ]
  }
}
```
---
## Metadata Source Values
The `metadata_source` field indicates which provider last updated the metadata:
| Value | Source |
|-------|--------|
| `"civitai_api"` | Civitai API |
| `"civarchive"` | CivArchive API |
| `"archive_db"` | Metadata Archive Database |
| `null` | No external source (user-defined only) |
---
## Auto-Update Behavior
### Fields Updated During Scanning
These fields are automatically synchronized with the filesystem:
- `file_name` — Updated if actual filename differs
- `file_path` — Normalized and updated if path changes
- `preview_url` — Updated if preview file is moved/removed
- `sha256` — Updated during hash calculation (when `hash_status="pending"`)
- `hash_status` — Updated during hash calculation
- `last_checked_at` — Timestamp of scan
- `metadata_source` — Set based on metadata provider
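The following is a minimal sketch of how a scanner might apply the updates listed above. It is not LoRA Manager's actual implementation: `compute_sha256` and `find_preview` are hypothetical callbacks, and the `file_name`/path conventions shown are assumptions.

```python
import os
import time

def sync_with_filesystem(metadata: dict, actual_path: str,
                         compute_sha256=None, find_preview=None) -> dict:
    """Apply the scan-time updates listed above to a metadata dict."""
    # Path normalization; the forward-slash convention is an assumption here.
    metadata["file_path"] = os.path.normpath(actual_path).replace(os.sep, "/")
    # file_name is assumed to be the basename without the file extension.
    metadata["file_name"] = os.path.splitext(os.path.basename(actual_path))[0]

    if find_preview is not None:
        # Hypothetical callback that locates a preview file next to the model.
        metadata["preview_url"] = find_preview(actual_path) or ""

    if metadata.get("hash_status", "completed") == "pending" and compute_sha256:
        metadata["hash_status"] = "calculating"
        try:
            metadata["sha256"] = compute_sha256(actual_path)
            metadata["hash_status"] = "completed"
        except OSError:
            metadata["hash_status"] = "failed"

    metadata["last_checked_at"] = time.time()
    return metadata
```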
### Fields Set Once (Immutable After Import)
These fields are set when the model is first imported/scanned and **never change** thereafter:
- `modified` — Import timestamp (used for "Date Added" sorting)
- `size` — File size at time of import/download
### User-Editable Fields
These fields can be edited by users at any time through the Lora Manager UI or by manually editing the metadata.json file:
- `model_name` — Display name
- `tags` — Model tags
- `modelDescription` — Model description
- `notes` — User notes
- `favorite` — Favorite flag
- `exclude` — Exclude from scanning (user can set `false` → `true`; currently no UI to revert)
- `skip_metadata_refresh` — Skip during bulk refresh
- `civitai.trainedWords` — Trigger words
- `civitai.customImages` — Custom example images
- `usage_tips` — Usage recommendations (LoRA only)
---
## Field Reference by Behavior
### Required Fields (Must Always Exist)
- `file_name`
- `model_name` (defaults to `file_name` if not provided)
- `file_path`
- `size`
- `modified`
- `sha256` (LoRA: always required; Checkpoint: may be empty when `hash_status="pending"`)
### Optional Fields with Defaults
| Field | Default |
|-------|---------|
| `base_model` | `"Unknown"` or `""` |
| `preview_nsfw_level` | `0` |
| `from_civitai` | `true` |
| `civitai` | `{}` |
| `tags` | `[]` |
| `modelDescription` | `""` |
| `notes` | `""` |
| `civitai_deleted` | `false` |
| `favorite` | `false` |
| `exclude` | `false` |
| `db_checked` | `false` |
| `skip_metadata_refresh` | `false` |
| `metadata_source` | `null` |
| `last_checked_at` | `0` |
| `hash_status` | `"completed"` |
| `usage_tips` | `"{}"` (LoRA only) |
| `model_type` | `"checkpoint"` or `"embedding"` (not present in LoRA models) |
---
## Version History
| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2026-03 | Initial schema documentation |
---
## See Also
- [JSON Schema Definition](../.specs/metadata.schema.json) — Formal JSON Schema for validation
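If the formal schema file is available, a metadata.json file can be checked against it directly. A minimal sketch using the third-party `jsonschema` package; the schema path below mirrors the link above and may differ in your checkout:

```python
import json
from jsonschema import validate, ValidationError  # pip install jsonschema

def validate_metadata(metadata_path: str,
                      schema_path: str = ".specs/metadata.schema.json") -> bool:
    """Validate a metadata.json file against the formal JSON Schema."""
    with open(schema_path, "r", encoding="utf-8") as fh:
        schema = json.load(fh)
    with open(metadata_path, "r", encoding="utf-8") as fh:
        instance = json.load(fh)
    try:
        validate(instance=instance, schema=schema)
        return True
    except ValidationError as err:
        print(f"Invalid metadata: {err.message}")
        return False
```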


View File

@@ -1,8 +1,11 @@
{
"common": {
"cancel": "Abbrechen",
"confirm": "Bestätigen",
"actions": {
"save": "Speichern",
"cancel": "Abbrechen",
"confirm": "Bestätigen",
"delete": "Löschen",
"move": "Verschieben",
"refresh": "Aktualisieren",
@@ -11,7 +14,9 @@
"backToTop": "Nach oben",
"settings": "Einstellungen",
"help": "Hilfe",
"add": "Hinzufügen"
"add": "Hinzufügen",
"close": "Schließen",
"menu": "Menü"
},
"status": {
"loading": "Wird geladen...",
@@ -171,6 +176,9 @@
"success": "{count} Rezepte erfolgreich repariert.",
"cancelled": "Reparatur abgebrochen. {count} Rezepte wurden repariert.",
"error": "Recipe-Reparatur fehlgeschlagen: {message}"
},
"manageExcludedModels": {
"label": "Ausgeschlossene Modelle verwalten"
}
},
"header": {
@@ -218,12 +226,14 @@
"presetOverwriteConfirm": "Voreinstellung \"{name}\" existiert bereits. Überschreiben?",
"presetNamePlaceholder": "Voreinstellungsname...",
"baseModel": "Basis-Modell",
"baseModelSearchPlaceholder": "Basismodelle durchsuchen...",
"modelTags": "Tags (Top 20)",
"modelTypes": "Model Types",
"modelTypes": "Modelltypen",
"license": "Lizenz",
"noCreditRequired": "Kein Credit erforderlich",
"allowSellingGeneratedContent": "Verkauf erlaubt",
"noTags": "Keine Tags",
"noBaseModelMatches": "Keine Basismodelle entsprechen der aktuellen Suche.",
"clearAll": "Alle Filter löschen",
"any": "Beliebig",
"all": "Alle",
@@ -246,6 +256,33 @@
"civitaiApiKey": "Civitai API Key",
"civitaiApiKeyPlaceholder": "Geben Sie Ihren Civitai API Key ein",
"civitaiApiKeyHelp": "Wird für die Authentifizierung beim Herunterladen von Modellen von Civitai verwendet",
"civitaiHost": {
"label": "Civitai-Host",
"help": "Wählen Sie aus, welche Civitai-Seite geöffnet wird, wenn Sie „View on Civitai“-Links verwenden.",
"options": {
"com": "civitai.com (nur SFW)",
"red": "civitai.red (uneingeschränkt)"
}
},
"downloadBackend": {
"label": "Download-Backend",
"help": "Wähle aus, wie Modelldateien heruntergeladen werden. Python verwendet den eingebauten Downloader. aria2 verwendet den experimentellen externen Downloader-Prozess.",
"options": {
"python": "Python (integriert)",
"aria2": "aria2 (experimentell)"
}
},
"aria2cPath": {
"label": "aria2c-Pfad",
"help": "Optionaler Pfad zur ausführbaren aria2c-Datei. Leer lassen, um aria2c aus dem System-PATH zu verwenden.",
"placeholder": "Leer lassen, um aria2c aus dem PATH zu verwenden"
},
"aria2HelpLink": "Erfahren Sie, wie Sie das aria2-Download-Backend einrichten",
"civitaiHostBanner": {
"title": "Civitai-Host-Einstellung verfügbar",
"content": "Civitai verwendet jetzt civitai.com für SFW-Inhalte und civitai.red für uneingeschränkte Inhalte. In den Einstellungen können Sie ändern, welche Seite standardmäßig geöffnet wird.",
"openSettings": "Einstellungen öffnen"
},
"openSettingsFileLocation": {
"label": "Einstellungsordner öffnen",
"tooltip": "Den Ordner mit der settings.json öffnen",
@@ -256,10 +293,13 @@
},
"sections": {
"contentFiltering": "Inhaltsfilterung",
"downloads": "Downloads",
"videoSettings": "Video-Einstellungen",
"layoutSettings": "Layout-Einstellungen",
"misc": "Verschiedenes",
"backup": "Backups",
"folderSettings": "Standard-Roots",
"recipeSettings": "Rezepte",
"extraFolderPaths": "Zusätzliche Ordnerpfade",
"downloadPathTemplates": "Download-Pfad-Vorlagen",
"priorityTags": "Prioritäts-Tags",
@@ -287,7 +327,15 @@
"blurNsfwContent": "NSFW-Inhalte unscharf stellen",
"blurNsfwContentHelp": "Nicht jugendfreie (NSFW) Vorschaubilder unscharf stellen",
"showOnlySfw": "Nur SFW-Ergebnisse anzeigen",
"showOnlySfwHelp": "Alle NSFW-Inhalte beim Durchsuchen und Suchen herausfiltern"
"showOnlySfwHelp": "Alle NSFW-Inhalte beim Durchsuchen und Suchen herausfiltern",
"matureBlurThreshold": "Schwelle für Unschärfe bei jugendgefährdenden Inhalten",
"matureBlurThresholdHelp": "Legen Sie fest, ab welcher Altersfreigabe die Unschärfe beginnt, wenn NSFW-Unschärfe aktiviert ist.",
"matureBlurThresholdOptions": {
"pg13": "PG13 und höher",
"r": "R und höher (Standard)",
"x": "X und höher",
"xxx": "Nur XXX"
}
},
"videoSettings": {
"autoplayOnHover": "Videos bei Hover automatisch abspielen",
@@ -311,6 +359,54 @@
"saveFailed": "Übersprungene Pfade konnten nicht gespeichert werden: {message}"
}
},
"backup": {
"autoEnabled": "Automatische Backups",
"autoEnabledHelp": "Erstellt einmal täglich einen lokalen Schnappschuss und behält die neuesten Schnappschüsse gemäß der Aufbewahrungsrichtlinie.",
"retention": "Aufbewahrungsanzahl",
"retentionHelp": "Wie viele automatische Schnappschüsse behalten werden, bevor ältere entfernt werden.",
"management": "Backup-Verwaltung",
"managementHelp": "Exportiere deinen aktuellen Benutzerstatus oder stelle ihn aus einem Backup-Archiv wieder her.",
"scopeHelp": "Sichert deine Einstellungen, den Downloadverlauf und den Status der Modellaktualisierung. Modelldateien und neu erzeugbare Caches sind nicht enthalten.",
"locationSummary": "Aktueller Backup-Speicherort",
"openFolderButton": "Backup-Ordner öffnen",
"openFolderSuccess": "Backup-Ordner geöffnet",
"openFolderFailed": "Backup-Ordner konnte nicht geöffnet werden",
"locationCopied": "Backup-Pfad in die Zwischenablage kopiert: {{path}}",
"locationClipboardFallback": "Backup-Pfad: {{path}}",
"exportButton": "Backup exportieren",
"exportSuccess": "Backup erfolgreich exportiert.",
"exportFailed": "Backup konnte nicht exportiert werden: {message}",
"importButton": "Backup importieren",
"importConfirm": "Dieses Backup importieren und den lokalen Benutzerstatus überschreiben?",
"importSuccess": "Backup erfolgreich importiert.",
"importFailed": "Backup konnte nicht importiert werden: {message}",
"latestSnapshot": "Neuester Schnappschuss",
"latestAutoSnapshot": "Neuester automatischer Schnappschuss",
"snapshotCount": "Gespeicherte Schnappschüsse",
"noneAvailable": "Noch keine Schnappschüsse vorhanden"
},
"downloadSkipBaseModels": {
"label": "Downloads für Basismodelle überspringen",
"help": "Gilt für alle Download-Abläufe. Hier können nur unterstützte Basismodelle ausgewählt werden.",
"searchPlaceholder": "Basismodelle filtern...",
"empty": "Keine Basismodelle entsprechen der aktuellen Suche.",
"summary": {
"none": "Nichts ausgewählt",
"count": "{count} ausgewählt"
},
"actions": {
"edit": "Bearbeiten",
"collapse": "Einklappen",
"clear": "Löschen"
},
"validation": {
"saveFailed": "Ausgeschlossene Basismodelle konnten nicht gespeichert werden: {message}"
}
},
"skipPreviouslyDownloadedModelVersions": {
"label": "Bereits heruntergeladene Modellversionen überspringen",
"help": "Wenn aktiviert, überspringt LoRA Manager den Download einer Modellversion, wenn der Download-Verlaufsdienst diese spezifische Version als bereits heruntergeladen erfasst hat. Gilt für alle Download-Abläufe."
},
"layoutSettings": {
"displayDensity": "Anzeige-Dichte",
"displayDensityOptions": {
@@ -333,6 +429,8 @@
"hover": "Bei Hover anzeigen"
},
"cardInfoDisplayHelp": "Wählen Sie, wann Modellinformationen und Aktionsschaltflächen angezeigt werden sollen",
"showVersionOnCard": "Version auf Karte anzeigen",
"showVersionOnCardHelp": "Den Versionsnamen auf Modellkarten ein- oder ausblenden",
"modelCardFooterAction": "Aktion der Modellkarten-Schaltfläche",
"modelCardFooterActionOptions": {
"exampleImages": "Beispielbilder öffnen",
@@ -359,8 +457,29 @@
"defaultUnetRootHelp": "Legen Sie den Standard-Diffusion-Modell-(UNET)-Stammordner für Downloads, Importe und Verschiebungen fest",
"defaultEmbeddingRoot": "Embedding-Stammordner",
"defaultEmbeddingRootHelp": "Legen Sie den Standard-Embedding-Stammordner für Downloads, Importe und Verschiebungen fest",
"recipesPath": "Rezepte-Speicherpfad",
"recipesPathHelp": "Optionales benutzerdefiniertes Verzeichnis für gespeicherte Rezepte. Leer lassen, um den recipes-Ordner im ersten LoRA-Stammverzeichnis zu verwenden.",
"recipesPathPlaceholder": "/path/to/recipes",
"recipesPathMigrating": "Rezepte-Speicher wird verschoben...",
"noDefault": "Kein Standard"
},
"extraFolderPaths": {
"title": "Zusätzliche Ordnerpfade",
"description": "Zusätzliche Modellstammverzeichnisse, die ausschließlich für LoRA Manager gelten. Laden Sie Modelle von Speicherorten außerhalb der Standardordner von ComfyUI ideal für große Bibliotheken, die ComfyUI sonst verlangsamen würden.",
"restartRequired": "Requires restart to take effect",
"modelTypes": {
"lora": "LoRA-Pfade",
"checkpoint": "Checkpoint-Pfade",
"unet": "Diffusionsmodell-Pfade",
"embedding": "Embedding-Pfade"
},
"pathPlaceholder": "/pfad/zu/extra/modellen",
"saveSuccess": "Zusätzliche Ordnerpfade aktualisiert. Neustart erforderlich, um Änderungen anzuwenden.",
"saveError": "Fehler beim Aktualisieren der zusätzlichen Ordnerpfade: {message}",
"validation": {
"duplicatePath": "Dieser Pfad ist bereits konfiguriert"
}
},
"priorityTags": {
"title": "Prioritäts-Tags",
"description": "Passen Sie die Tag-Prioritätsreihenfolge für jeden Modelltyp an (z. B. character, concept, style(toon|toon_style))",
@@ -423,6 +542,21 @@
"downloadLocationHelp": "Geben Sie den Ordnerpfad ein, wo Beispielbilder von Civitai gespeichert werden",
"autoDownload": "Beispielbilder automatisch herunterladen",
"autoDownloadHelp": "Beispielbilder automatisch für Modelle herunterladen, die keine haben (erfordert gesetzten Download-Speicherort)",
"openMode": "Aktion für Beispielbilder öffnen",
"openModeHelp": "Wählen Sie, ob die Aktion auf dem Server geöffnet, ein zugeordneter lokaler Pfad kopiert oder eine benutzerdefinierte URI gestartet werden soll.",
"openModeOptions": {
"system": "Auf Server öffnen",
"clipboard": "Lokalen Pfad kopieren",
"uriTemplate": "Benutzerdefinierte URI öffnen"
},
"localRoot": "Lokales Stammverzeichnis für Beispielbilder",
"localRootHelp": "Optionales lokales oder eingebundenes Stammverzeichnis, das das Beispielbild-Verzeichnis des Servers widerspiegelt. Wenn leer, wird der Serverpfad wiederverwendet.",
"localRootPlaceholder": "Beispiel: /Volumes/ComfyUI/example_images",
"uriTemplate": "URI-Vorlage öffnen",
"uriTemplateHelp": "Verwenden Sie einen benutzerdefinierten Deeplink wie eine Datei-URI oder einen Shortcuts-Link.",
"uriTemplatePlaceholder": "Beispiel: shortcuts://run-shortcut?name=Open%20Finder&input=text&text={{encoded_local_path}}",
"uriTemplatePlaceholders": "Verfügbare Platzhalter: {{local_path}}, {{encoded_local_path}}, {{relative_path}}, {{encoded_relative_path}}, {{file_uri}}, {{encoded_file_uri}}",
"openModeWikiLink": "Mehr über Remote-Open-Modi erfahren",
"optimizeImages": "Heruntergeladene Bilder optimieren",
"optimizeImagesHelp": "Beispielbilder optimieren, um Dateigröße zu reduzieren und Ladegeschwindigkeit zu verbessern (Metadaten bleiben erhalten)",
"download": "Herunterladen",
@@ -485,23 +619,6 @@
"proxyPassword": "Passwort (optional)",
"proxyPasswordPlaceholder": "passwort",
"proxyPasswordHelp": "Passwort für die Proxy-Authentifizierung (falls erforderlich)"
},
"extraFolderPaths": {
"title": "Zusätzliche Ordnerpfade",
"help": "Fügen Sie zusätzliche Modellordner außerhalb der Standardpfade von ComfyUI hinzu. Diese Pfade werden separat gespeichert und zusammen mit den Standardordnern gescannt.",
"description": "Konfigurieren Sie zusätzliche Ordner zum Scannen von Modellen. Diese Pfade sind spezifisch für LoRA Manager und werden mit den Standardpfaden von ComfyUI zusammengeführt.",
"modelTypes": {
"lora": "LoRA-Pfade",
"checkpoint": "Checkpoint-Pfade",
"unet": "Diffusionsmodell-Pfade",
"embedding": "Embedding-Pfade"
},
"pathPlaceholder": "/pfad/zu/extra/modellen",
"saveSuccess": "Zusätzliche Ordnerpfade aktualisiert.",
"saveError": "Fehler beim Aktualisieren der zusätzlichen Ordnerpfade: {message}",
"validation": {
"duplicatePath": "Dieser Pfad ist bereits konfiguriert"
}
}
},
"loras": {
@@ -570,7 +687,8 @@
"autoOrganize": "Automatisch organisieren",
"skipMetadataRefresh": "Metadaten-Aktualisierung für ausgewählte Modelle überspringen",
"resumeMetadataRefresh": "Metadaten-Aktualisierung für ausgewählte Modelle fortsetzen",
"deleteAll": "Alle Modelle löschen",
"deleteAll": "Ausgewählte löschen",
"downloadMissingLoras": "Fehlende LoRAs herunterladen",
"clear": "Auswahl löschen",
"skipMetadataRefreshCount": "Überspringen{count} Modelle",
"resumeMetadataRefreshCount": "Fortsetzen{count} Modelle",
@@ -600,6 +718,7 @@
"moveToFolder": "In Ordner verschieben",
"repairMetadata": "Metadaten reparieren",
"excludeModel": "Modell ausschließen",
"restoreModel": "Modell wiederherstellen",
"deleteModel": "Modell löschen",
"shareRecipe": "Rezept teilen",
"viewAllLoras": "Alle LoRAs anzeigen",
@@ -641,6 +760,8 @@
"root": "Stammverzeichnis",
"browseFolders": "Ordner durchsuchen:",
"downloadAndSaveRecipe": "Herunterladen & Rezept speichern",
"importRecipeOnly": "Nur Rezept importieren",
"importAndDownload": "Importieren & Herunterladen",
"downloadMissingLoras": "Fehlende LoRAs herunterladen",
"saveRecipe": "Rezept speichern",
"loraCountInfo": "({existing}/{total} in Bibliothek)",
@@ -682,7 +803,11 @@
"lorasCountAsc": "Wenigste"
},
"refresh": {
"title": "Rezeptliste aktualisieren"
"title": "Rezeptliste aktualisieren",
"quick": "Änderungen synchronisieren",
"quickTooltip": "Änderungen synchronisieren - schnelle Aktualisierung ohne Cache-Neubau",
"full": "Cache neu aufbauen",
"fullTooltip": "Cache neu aufbauen - vollständiger Rescan aller Rezeptdateien"
},
"filteredByLora": "Gefiltert nach LoRA",
"favorites": {
@@ -722,6 +847,64 @@
"failed": "Rezept-Reparatur fehlgeschlagen: {message}",
"missingId": "Rezept kann nicht repariert werden: Fehlende Rezept-ID"
}
},
"batchImport": {
"title": "Batch Import Recipes",
"action": "Batch Import",
"urlList": "URL List",
"directory": "Directory",
"urlDescription": "Enter image URLs or local file paths (one per line). Each will be imported as a recipe.",
"directoryDescription": "Enter a directory path to import all images from that folder.",
"urlsLabel": "Image URLs or Local Paths",
"urlsPlaceholder": "https://civitai.com/images/...\nhttps://civitai.com/images/...\nC:/path/to/image.png\n...",
"urlsHint": "Enter one URL or path per line",
"directoryPath": "Directory Path",
"directoryPlaceholder": "/path/to/images/folder",
"browse": "Browse",
"recursive": "Include subdirectories",
"tagsOptional": "Tags (optional, applied to all recipes)",
"tagsPlaceholder": "Enter tags separated by commas",
"tagsHint": "Tags will be added to all imported recipes",
"skipNoMetadata": "Skip images without metadata",
"skipNoMetadataHelp": "Images without LoRA metadata will be skipped automatically.",
"start": "Start Import",
"startImport": "Start Import",
"importing": "Importing...",
"progress": "Progress",
"total": "Total",
"success": "Success",
"failed": "Failed",
"skipped": "Skipped",
"current": "Current",
"currentItem": "Current",
"preparing": "Preparing...",
"cancel": "Cancel",
"cancelImport": "Cancel",
"cancelled": "Import cancelled",
"completed": "Import completed",
"completedWithErrors": "Completed with errors",
"completedSuccess": "Successfully imported {count} recipe(s)",
"successCount": "Successful",
"failedCount": "Failed",
"skippedCount": "Skipped",
"totalProcessed": "Total processed",
"viewDetails": "View Details",
"newImport": "New Import",
"manualPathEntry": "Please enter the directory path manually. File browser is not available in this browser.",
"batchImportDirectorySelected": "Directory selected: {path}",
"batchImportManualEntryRequired": "File browser not available. Please enter the directory path manually.",
"backToParent": "Back to parent directory",
"folders": "Folders",
"folderCount": "{count} folders",
"imageFiles": "Image Files",
"images": "images",
"imageCount": "{count} images",
"selectFolder": "Select This Folder",
"errors": {
"enterUrls": "Please enter at least one URL or path",
"enterDirectory": "Please enter a directory path",
"startFailed": "Failed to start import: {message}"
}
}
},
"checkpoints": {
@@ -731,7 +914,8 @@
"diffusion_model": "Diffusion Model"
},
"contextMenu": {
"moveToOtherTypeFolder": "In {otherType}-Ordner verschieben"
"moveToOtherTypeFolder": "In {otherType}-Ordner verschieben",
"sendToWorkflow": "An Workflow senden"
}
},
"embeddings": {
@@ -744,13 +928,23 @@
"unpinSidebar": "Sidebar lösen",
"switchToListView": "Zur Listenansicht wechseln",
"switchToTreeView": "Zur Baumansicht wechseln",
"recursiveOn": "Unterordner durchsuchen",
"recursiveOff": "Nur aktuellen Ordner durchsuchen",
"recursiveOn": "Unterordner einbeziehen",
"recursiveOff": "Nur aktueller Ordner",
"recursiveUnavailable": "Rekursive Suche ist nur in der Baumansicht verfügbar",
"collapseAllDisabled": "Im Listenmodus nicht verfügbar",
"dragDrop": {
"unableToResolveRoot": "Zielpfad für das Verschieben konnte nicht ermittelt werden.",
"moveUnsupported": "Move is not supported for this item."
"moveUnsupported": "Verschieben wird für dieses Element nicht unterstützt.",
"createFolderHint": "Loslassen, um einen neuen Ordner zu erstellen",
"newFolderName": "Neuer Ordnername",
"folderNameHint": "Eingabetaste zum Bestätigen, Escape zum Abbrechen",
"emptyFolderName": "Bitte geben Sie einen Ordnernamen ein",
"invalidFolderName": "Ordnername enthält ungültige Zeichen",
"noDragState": "Kein ausstehender Ziehvorgang gefunden"
},
"empty": {
"noFolders": "Keine Ordner gefunden",
"dragHint": "Elemente hierher ziehen, um Ordner zu erstellen"
}
},
"statistics": {
@@ -815,6 +1009,8 @@
"earlyAccess": "Early Access",
"earlyAccessTooltip": "Early Access erforderlich",
"inLibrary": "In Bibliothek",
"downloaded": "Heruntergeladen",
"downloadedTooltip": "Zuvor heruntergeladen, aber derzeit nicht in Ihrer Bibliothek.",
"alreadyInLibrary": "Bereits in Bibliothek",
"autoOrganizedPath": "[Automatisch organisiert durch Pfadvorlage]",
"errors": {
@@ -905,6 +1101,14 @@
"save": "Basis-Modell aktualisieren",
"cancel": "Abbrechen"
},
"bulkDownloadMissingLoras": {
"title": "Fehlende LoRAs herunterladen",
"message": "{uniqueCount} einzigartige fehlende LoRAs gefunden (von insgesamt {totalCount} in ausgewählten Rezepten).",
"previewTitle": "Zu herunterladende LoRAs:",
"moreItems": "...und {count} weitere",
"note": "Dateien werden mit Standard-Pfad-Vorlagen heruntergeladen. Dies kann je nach Anzahl der LoRAs eine Weile dauern.",
"downloadButton": "{count} LoRA(s) herunterladen"
},
"exampleAccess": {
"title": "Lokale Beispielbilder",
"message": "Keine lokalen Beispielbilder für dieses Modell gefunden. Ansichtsoptionen:",
@@ -956,7 +1160,9 @@
"viewOnCivitai": "Auf Civitai anzeigen",
"viewOnCivitaiText": "Auf Civitai anzeigen",
"viewCreatorProfile": "Ersteller-Profil anzeigen",
"openFileLocation": "Dateispeicherort öffnen"
"openFileLocation": "Dateispeicherort öffnen",
"sendToWorkflow": "An ComfyUI senden",
"sendToWorkflowText": "An ComfyUI senden"
},
"openFileLocation": {
"success": "Dateispeicherort erfolgreich geöffnet",
@@ -964,6 +1170,9 @@
"copied": "Pfad in die Zwischenablage kopiert: {{path}}",
"clipboardFallback": "Pfad: {{path}}"
},
"sendToWorkflow": {
"noFilePath": "Kann nicht an ComfyUI senden: Kein Dateipfad verfügbar"
},
"metadata": {
"version": "Version",
"fileName": "Dateiname",
@@ -1000,6 +1209,8 @@
"cancel": "Bearbeitung abbrechen",
"save": "Änderungen speichern",
"addPlaceholder": "Tippen zum Hinzufügen oder klicken Sie auf Vorschläge unten",
"editWord": "Trigger Word bearbeiten",
"editPlaceholder": "Trigger Word bearbeiten",
"copyWord": "Trigger Word kopieren",
"deleteWord": "Trigger Word löschen",
"suggestions": {
@@ -1071,17 +1282,33 @@
"days": "in {count}d"
},
"badges": {
"current": "Aktuelle Version",
"current": "Geöffnete Version",
"currentTooltip": "Das ist die Version, mit der dieses Modal geöffnet wurde",
"inLibrary": "In der Bibliothek",
"inLibraryTooltip": "Diese Version befindet sich in Ihrer lokalen Bibliothek",
"downloaded": "Heruntergeladen",
"downloadedTooltip": "Diese Version wurde bereits heruntergeladen, befindet sich aber derzeit nicht in Ihrer Bibliothek",
"newer": "Neuere Version",
"newerTooltip": "Diese Version ist neuer als Ihre neueste lokale Version",
"earlyAccess": "Früher Zugriff",
"ignored": "Ignoriert"
"earlyAccessTooltip": "Für diese Version ist derzeit Civitai Early Access erforderlich",
"ignored": "Ignoriert",
"ignoredTooltip": "Für diese Version sind Update-Benachrichtigungen deaktiviert",
"onSiteOnly": "Nur On-Site",
"onSiteOnlyTooltip": "Diese Version ist nur für die On-Site-Generierung auf Civitai verfügbar"
},
"actions": {
"download": "Herunterladen",
"downloadTooltip": "Diese Version herunterladen",
"downloadEarlyAccessTooltip": "Diese Early-Access-Version von Civitai herunterladen",
"downloadNotAllowedTooltip": "Diese Version ist nur für die On-Site-Generierung auf Civitai verfügbar",
"delete": "Löschen",
"deleteTooltip": "Diese lokale Version löschen",
"ignore": "Ignorieren",
"unignore": "Ignorierung aufheben",
"ignoreTooltip": "Update-Benachrichtigungen für diese Version ignorieren",
"unignoreTooltip": "Update-Benachrichtigungen für diese Version fortsetzen",
"viewVersionOnCivitai": "Version auf Civitai anzeigen",
"earlyAccessTooltip": "Erfordert Early-Access-Kauf",
"resumeModelUpdates": "Aktualisierungen für dieses Modell fortsetzen",
"ignoreModelUpdates": "Aktualisierungen für dieses Modell ignorieren",
@@ -1221,7 +1448,9 @@
"recipeReplaced": "Rezept im Workflow ersetzt",
"recipeFailedToSend": "Fehler beim Senden des Rezepts an den Workflow",
"noMatchingNodes": "Keine kompatiblen Knoten im aktuellen Workflow verfügbar",
"noTargetNodeSelected": "Kein Zielknoten ausgewählt"
"noTargetNodeSelected": "Kein Zielknoten ausgewählt",
"modelUpdated": "Modell im Workflow aktualisiert",
"modelFailed": "Fehler beim Aktualisieren des Modellknotens"
},
"nodeSelector": {
"recipe": "Rezept",
@@ -1235,6 +1464,10 @@
"opened": "Beispielbilder-Ordner geöffnet",
"openingFolder": "Beispielbilder-Ordner wird geöffnet",
"failedToOpen": "Fehler beim Öffnen des Beispielbilder-Ordners",
"copiedPath": "Pfad in Zwischenablage kopiert: {{path}}",
"clipboardFallback": "Pfad: {{path}}",
"copiedUri": "Link in Zwischenablage kopiert: {{uri}}",
"uriClipboardFallback": "Link: {{uri}}",
"setupRequired": "Beispielbilder-Speicher",
"setupDescription": "Um benutzerdefinierte Beispielbilder hinzuzufügen, müssen Sie zuerst einen Download-Speicherort festlegen.",
"setupUsage": "Dieser Pfad wird sowohl für heruntergeladene als auch für benutzerdefinierte Beispielbilder verwendet.",
@@ -1342,7 +1575,14 @@
"showWechatQR": "WeChat QR-Code anzeigen",
"hideWechatQR": "WeChat QR-Code ausblenden"
},
"footer": "Vielen Dank, dass Sie LoRA Manager verwenden! ❤️"
"footer": "Vielen Dank, dass Sie LoRA Manager verwenden! ❤️",
"supporters": {
"title": "Danke an alle Unterstützer",
"subtitle": "Danke an {count} Unterstützer, die dieses Projekt möglich gemacht haben",
"specialThanks": "Besonderer Dank",
"allSupporters": "Alle Unterstützer",
"totalCount": "{count} Unterstützer insgesamt"
}
},
"toast": {
"general": {
@@ -1365,6 +1605,7 @@
"pleaseSelectVersion": "Bitte wählen Sie eine Version aus",
"versionExists": "Diese Version existiert bereits in Ihrer Bibliothek",
"downloadCompleted": "Download erfolgreich abgeschlossen",
"downloadSkippedByBaseModel": "Download übersprungen, weil das Basismodell {baseModel} ausgeschlossen ist",
"autoOrganizeSuccess": "Automatische Organisation für {count} {type} erfolgreich abgeschlossen",
"autoOrganizePartialSuccess": "Automatische Organisation abgeschlossen: {success} verschoben, {failures} fehlgeschlagen von insgesamt {total} Modellen",
"autoOrganizeFailed": "Automatische Organisation fehlgeschlagen: {error}",
@@ -1376,13 +1617,19 @@
"loadFailed": "Fehler beim Laden der {modelType}s: {message}",
"refreshComplete": "Aktualisierung abgeschlossen",
"refreshFailed": "Fehler beim Aktualisieren der Rezepte: {message}",
"syncComplete": "Synchronisation abgeschlossen",
"syncFailed": "Fehler beim Synchronisieren der Rezepte: {message}",
"updateFailed": "Fehler beim Aktualisieren des Rezepts: {error}",
"updateError": "Fehler beim Aktualisieren des Rezepts: {message}",
"nameSaved": "Rezept \"{name}\" erfolgreich gespeichert",
"nameUpdated": "Rezeptname erfolgreich aktualisiert",
"tagsUpdated": "Rezept-Tags erfolgreich aktualisiert",
"sourceUrlUpdated": "Quell-URL erfolgreich aktualisiert",
"promptUpdated": "Prompt erfolgreich aktualisiert",
"negativePromptUpdated": "Negativer Prompt erfolgreich aktualisiert",
"promptEditorHint": "Drücken Sie Enter zum Speichern, Shift+Enter für neue Zeile",
"noRecipeId": "Keine Rezept-ID verfügbar",
"sendToWorkflowFailed": "Fehler beim Senden des Rezepts an den Workflow: {message}",
"copyFailed": "Fehler beim Kopieren der Rezept-Syntax: {message}",
"noMissingLoras": "Keine fehlenden LoRAs zum Herunterladen",
"missingLorasInfoFailed": "Fehler beim Abrufen der Informationen für fehlende LoRAs",
@@ -1410,9 +1657,20 @@
"processingError": "Verarbeitungsfehler: {message}",
"folderBrowserError": "Fehler beim Laden des Ordner-Browsers: {message}",
"recipeSaveFailed": "Fehler beim Speichern des Rezepts: {error}",
"recipeSaved": "Recipe saved successfully",
"importFailed": "Import fehlgeschlagen: {message}",
"folderTreeFailed": "Fehler beim Laden des Ordnerbaums",
"folderTreeError": "Fehler beim Laden des Ordnerbaums"
"folderTreeError": "Fehler beim Laden des Ordnerbaums",
"batchImportFailed": "Failed to start batch import: {message}",
"batchImportCancelling": "Cancelling batch import...",
"batchImportCancelFailed": "Failed to cancel batch import: {message}",
"batchImportNoUrls": "Please enter at least one URL or file path",
"batchImportNoDirectory": "Please enter a directory path",
"batchImportBrowseFailed": "Failed to browse directory: {message}",
"batchImportDirectorySelected": "Directory selected: {path}",
"noRecipesSelected": "Keine Rezepte ausgewählt",
"noMissingLorasInSelection": "Keine fehlenden LoRAs in ausgewählten Rezepten gefunden",
"noLoraRootConfigured": "Kein LoRA-Stammverzeichnis konfiguriert. Bitte legen Sie ein Standard-LoRA-Stammverzeichnis in den Einstellungen fest."
},
"models": {
"noModelsSelected": "Keine Modelle ausgewählt",
@@ -1479,6 +1737,8 @@
"mappingSaveFailed": "Fehler beim Speichern der Basis-Modell-Zuordnungen: {message}",
"downloadTemplatesUpdated": "Download-Pfad-Vorlagen aktualisiert",
"downloadTemplatesFailed": "Fehler beim Speichern der Download-Pfad-Vorlagen: {message}",
"recipesPathUpdated": "Rezepte-Speicherpfad aktualisiert",
"recipesPathSaveFailed": "Fehler beim Aktualisieren des Rezepte-Speicherpfads: {message}",
"settingsUpdated": "Einstellungen aktualisiert: {setting}",
"compactModeToggled": "Kompakt-Modus {state}",
"settingSaveFailed": "Fehler beim Speichern der Einstellung: {message}",
@@ -1529,8 +1789,8 @@
},
"triggerWords": {
"loadFailed": "Konnte trainierte Wörter nicht laden",
"tooLong": "Trigger Word sollte 100 Wörter nicht überschreiten",
"tooMany": "Maximal 30 Trigger Words erlaubt",
"tooLong": "Trigger Word sollte 500 Wörter nicht überschreiten",
"tooMany": "Maximal 100 Trigger Words erlaubt",
"alreadyExists": "Dieses Trigger Word existiert bereits",
"updateSuccess": "Trigger Words erfolgreich aktualisiert",
"updateFailed": "Fehler beim Aktualisieren der Trigger Words",
@@ -1591,6 +1851,8 @@
"deleteFailed": "Fehler beim Löschen von {type}: {message}",
"excludeSuccess": "{type} erfolgreich ausgeschlossen",
"excludeFailed": "Fehler beim Ausschließen von {type}: {message}",
"restoreSuccess": "{type} erfolgreich wiederhergestellt",
"restoreFailed": "{type} konnte nicht wiederhergestellt werden: {message}",
"fileNameUpdated": "Dateiname erfolgreich aktualisiert",
"fileRenameFailed": "Fehler beim Umbenennen der Datei: {error}",
"previewUpdated": "Vorschau erfolgreich aktualisiert",
@@ -1622,6 +1884,37 @@
"moveFailed": "Failed to move item: {message}"
}
},
"doctor": {
"kicker": "Systemdiagnose",
"title": "Doktor",
"buttonTitle": "Diagnose und häufige Fehlerbehebungen ausführen",
"loading": "Umgebung wird geprüft...",
"footer": "Exportiere ein Diagnosepaket, falls das Problem nach der Reparatur weiterhin besteht.",
"summary": {
"idle": "Führe eine Überprüfung von Einstellungen, Cache-Integrität und UI-Konsistenz durch.",
"ok": "Keine aktiven Probleme wurden in der aktuellen Umgebung gefunden.",
"warning": "{count} Problem(e) wurden gefunden. Die meisten lassen sich direkt über dieses Panel beheben.",
"error": "Bevor die App vollständig fehlerfrei ist, müssen {count} Problem(e) behoben werden."
},
"status": {
"ok": "Gesund",
"warning": "Handlungsbedarf",
"error": "Aktion erforderlich"
},
"actions": {
"runAgain": "Erneut ausführen",
"exportBundle": "Paket exportieren"
},
"toast": {
"loadFailed": "Diagnose konnte nicht geladen werden: {message}",
"repairSuccess": "Cache-Neuaufbau abgeschlossen.",
"repairFailed": "Cache-Neuaufbau fehlgeschlagen: {message}",
"exportSuccess": "Diagnosepaket exportiert.",
"exportFailed": "Export des Diagnosepakets fehlgeschlagen: {message}",
"conflictsResolved": "{count} Dateinamenskonflikt(e) gelöst.",
"conflictsResolveFailed": "Auflösung der Dateinamenskonflikte fehlgeschlagen: {message}"
}
},
"banners": {
"versionMismatch": {
"title": "Anwendungs-Update erkannt",

View File

@@ -1,8 +1,11 @@
{
"common": {
"cancel": "Cancel",
"confirm": "Confirm",
"actions": {
"save": "Save",
"cancel": "Cancel",
"confirm": "Confirm",
"delete": "Delete",
"move": "Move",
"refresh": "Refresh",
@@ -11,7 +14,9 @@
"backToTop": "Back to top",
"settings": "Settings",
"help": "Help",
"add": "Add"
"add": "Add",
"close": "Close",
"menu": "Menu"
},
"status": {
"loading": "Loading...",
@@ -171,6 +176,9 @@
"success": "Successfully repaired {count} recipes.",
"cancelled": "Repair cancelled. {count} recipes were repaired.",
"error": "Recipe repair failed: {message}"
},
"manageExcludedModels": {
"label": "Manage Excluded Models"
}
},
"header": {
@@ -218,12 +226,14 @@
"presetOverwriteConfirm": "Preset \"{name}\" already exists. Overwrite?",
"presetNamePlaceholder": "Preset name...",
"baseModel": "Base Model",
"baseModelSearchPlaceholder": "Search base models...",
"modelTags": "Tags (Top 20)",
"modelTypes": "Model Types",
"license": "License",
"noCreditRequired": "No Credit Required",
"allowSellingGeneratedContent": "Allow Selling",
"noTags": "No tags",
"noBaseModelMatches": "No base models match the current search.",
"clearAll": "Clear All Filters",
"any": "Any",
"all": "All",
@@ -246,6 +256,33 @@
"civitaiApiKey": "Civitai API Key",
"civitaiApiKeyPlaceholder": "Enter your Civitai API key",
"civitaiApiKeyHelp": "Used for authentication when downloading models from Civitai",
"civitaiHost": {
"label": "Civitai host",
"help": "Choose which Civitai site opens when using View on Civitai links.",
"options": {
"com": "civitai.com (SFW)",
"red": "civitai.red (unrestricted)"
}
},
"downloadBackend": {
"label": "Download backend",
"help": "Choose how model files are downloaded. Python uses the built-in downloader. aria2 uses the experimental external downloader process.",
"options": {
"python": "Python (built-in)",
"aria2": "aria2 (experimental)"
}
},
"aria2cPath": {
"label": "aria2c path",
"help": "Optional path to the aria2c executable. Leave empty to use aria2c from your system PATH.",
"placeholder": "Leave empty to use aria2c from PATH"
},
"aria2HelpLink": "Learn how to set up the aria2 download backend",
"civitaiHostBanner": {
"title": "Civitai host preference available",
"content": "Civitai now uses civitai.com for SFW content and civitai.red for unrestricted content. You can change which site opens by default in Settings.",
"openSettings": "Open Settings"
},
"openSettingsFileLocation": {
"label": "Open settings folder",
"tooltip": "Open folder containing settings.json",
@@ -256,10 +293,13 @@
},
"sections": {
"contentFiltering": "Content Filtering",
"downloads": "Downloads",
"videoSettings": "Video Settings",
"layoutSettings": "Layout Settings",
"misc": "Miscellaneous",
"backup": "Backups",
"folderSettings": "Default Roots",
"recipeSettings": "Recipes",
"extraFolderPaths": "Extra Folder Paths",
"downloadPathTemplates": "Download Path Templates",
"priorityTags": "Priority Tags",
@@ -287,7 +327,15 @@
"blurNsfwContent": "Blur NSFW Content",
"blurNsfwContentHelp": "Blur mature (NSFW) content preview images",
"showOnlySfw": "Show Only SFW Results",
"showOnlySfwHelp": "Filter out all NSFW content when browsing and searching"
"showOnlySfwHelp": "Filter out all NSFW content when browsing and searching",
"matureBlurThreshold": "Mature Blur Threshold",
"matureBlurThresholdHelp": "Set which rating level starts blur filtering when NSFW blur is enabled.",
"matureBlurThresholdOptions": {
"pg13": "PG13 and above",
"r": "R and above (default)",
"x": "X and above",
"xxx": "XXX only"
}
},
"videoSettings": {
"autoplayOnHover": "Autoplay Videos on Hover",
@@ -311,6 +359,54 @@
"saveFailed": "Unable to save skip paths: {message}"
}
},
"backup": {
"autoEnabled": "Automatic backups",
"autoEnabledHelp": "Create a local snapshot once per day and keep the latest snapshots according to the retention policy.",
"retention": "Retention count",
"retentionHelp": "How many automatic snapshots to keep before older ones are pruned.",
"management": "Backup management",
"managementHelp": "Export your current user state or restore it from a backup archive.",
"scopeHelp": "Backs up your settings, download history, and model update state. It does not include model files or rebuildable caches.",
"locationSummary": "Current backup location",
"openFolderButton": "Open backup folder",
"openFolderSuccess": "Opened backup folder",
"openFolderFailed": "Failed to open backup folder",
"locationCopied": "Backup path copied to clipboard: {{path}}",
"locationClipboardFallback": "Backup path: {{path}}",
"exportButton": "Export backup",
"exportSuccess": "Backup exported successfully.",
"exportFailed": "Failed to export backup: {message}",
"importButton": "Import backup",
"importConfirm": "Import this backup and overwrite local user state?",
"importSuccess": "Backup imported successfully.",
"importFailed": "Failed to import backup: {message}",
"latestSnapshot": "Latest snapshot",
"latestAutoSnapshot": "Latest automatic snapshot",
"snapshotCount": "Saved snapshots",
"noneAvailable": "No snapshots yet"
},
"downloadSkipBaseModels": {
"label": "Skip downloads for base models",
"help": "When enabled, versions using the selected base models will be skipped.",
"searchPlaceholder": "Filter base models...",
"empty": "No base models match the current search.",
"summary": {
"none": "None selected",
"count": "{count} selected"
},
"actions": {
"edit": "Edit",
"collapse": "Collapse",
"clear": "Clear"
},
"validation": {
"saveFailed": "Unable to save excluded base models: {message}"
}
},
"skipPreviouslyDownloadedModelVersions": {
"label": "Skip previously downloaded model versions",
"help": "When enabled, versions downloaded before will be skipped."
},
"layoutSettings": {
"displayDensity": "Display Density",
"displayDensityOptions": {
@@ -333,6 +429,8 @@
"hover": "Reveal on Hover"
},
"cardInfoDisplayHelp": "Choose when to display model information and action buttons",
"showVersionOnCard": "Show Version on Card",
"showVersionOnCardHelp": "Show or hide the version name on model cards",
"modelCardFooterAction": "Model Card Button Action",
"modelCardFooterActionOptions": {
"exampleImages": "Open Example Images",
@@ -359,12 +457,16 @@
"defaultUnetRootHelp": "Set default diffusion model (UNET) root directory for downloads, imports and moves",
"defaultEmbeddingRoot": "Embedding Root",
"defaultEmbeddingRootHelp": "Set default embedding root directory for downloads, imports and moves",
"recipesPath": "Recipes Storage Path",
"recipesPathHelp": "Optional custom directory for stored recipes. Leave empty to use the first LoRA root's recipes folder.",
"recipesPathPlaceholder": "/path/to/recipes",
"recipesPathMigrating": "Migrating recipes storage...",
"noDefault": "No Default"
},
"extraFolderPaths": {
"title": "Extra Folder Paths",
"help": "Add additional model folders outside of ComfyUI's standard paths. These paths are stored separately and scanned alongside the default folders.",
"description": "Configure additional folders to scan for models. These paths are specific to LoRA Manager and will be merged with ComfyUI's default paths.",
"description": "Additional model root paths exclusive to LoRA Manager. Load models from locations outside ComfyUI's standard folders—ideal for large libraries that would otherwise slow down ComfyUI.",
"restartRequired": "Requires restart to take effect",
"modelTypes": {
"lora": "LoRA Paths",
"checkpoint": "Checkpoint Paths",
@@ -372,7 +474,7 @@
"embedding": "Embedding Paths"
},
"pathPlaceholder": "/path/to/extra/models",
"saveSuccess": "Extra folder paths updated.",
"saveSuccess": "Extra folder paths updated. Restart required to apply changes.",
"saveError": "Failed to update extra folder paths: {message}",
"validation": {
"duplicatePath": "This path is already configured"
@@ -440,6 +542,21 @@
"downloadLocationHelp": "Enter the folder path where example images from Civitai will be saved",
"autoDownload": "Auto Download Example Images",
"autoDownloadHelp": "Automatically download example images for models that don't have them (requires download location to be set)",
"openMode": "Open Example Images Action",
"openModeHelp": "Choose whether the action opens on the server, copies a mapped local path, or launches a custom URI.",
"openModeOptions": {
"system": "Open on server",
"clipboard": "Copy local path",
"uriTemplate": "Open custom URI"
},
"localRoot": "Local Example Images Root",
"localRootHelp": "Optional local or mounted root that mirrors the server example images directory. If blank, the server path is reused.",
"localRootPlaceholder": "Example: /Volumes/ComfyUI/example_images",
"uriTemplate": "Open URI Template",
"uriTemplateHelp": "Use a custom deep link such as a file URI or a Shortcuts link.",
"uriTemplatePlaceholder": "Example: shortcuts://run-shortcut?name=Open%20Finder&input=text&text={{encoded_local_path}}",
"uriTemplatePlaceholders": "Available placeholders: {{local_path}}, {{encoded_local_path}}, {{relative_path}}, {{encoded_relative_path}}, {{file_uri}}, {{encoded_file_uri}}",
"openModeWikiLink": "Learn more about remote open modes",
"optimizeImages": "Optimize Downloaded Images",
"optimizeImagesHelp": "Optimize example images to reduce file size and improve loading speed (metadata will be preserved)",
"download": "Download",
@@ -570,7 +687,8 @@
"autoOrganize": "Auto-Organize Selected",
"skipMetadataRefresh": "Skip Metadata Refresh for Selected",
"resumeMetadataRefresh": "Resume Metadata Refresh for Selected",
"deleteAll": "Delete Selected Models",
"deleteAll": "Delete Selected",
"downloadMissingLoras": "Download Missing LoRAs",
"clear": "Clear Selection",
"skipMetadataRefreshCount": "Skip ({count} models)",
"resumeMetadataRefreshCount": "Resume ({count} models)",
@@ -600,6 +718,7 @@
"moveToFolder": "Move to Folder",
"repairMetadata": "Repair metadata",
"excludeModel": "Exclude Model",
"restoreModel": "Restore Model",
"deleteModel": "Delete Model",
"shareRecipe": "Share Recipe",
"viewAllLoras": "View All LoRAs",
@@ -618,9 +737,9 @@
"title": "Import a recipe from image or URL",
"urlLocalPath": "URL / Local Path",
"uploadImage": "Upload Image",
"urlSectionDescription": "Input a Civitai image URL or local file path to import as a recipe.",
"urlSectionDescription": "Input a Civitai image URL from civitai.com or civitai.red, or a local file path, to import as a recipe.",
"imageUrlOrPath": "Image URL or File Path:",
"urlPlaceholder": "https://civitai.com/images/... or C:/path/to/image.png",
"urlPlaceholder": "https://civitai.com/images/... or https://civitai.red/images/... or C:/path/to/image.png",
"fetchImage": "Fetch Image",
"uploadSectionDescription": "Upload an image with LoRA metadata to import as a recipe.",
"selectImage": "Select Image",
@@ -641,6 +760,8 @@
"root": "Root",
"browseFolders": "Browse Folders:",
"downloadAndSaveRecipe": "Download & Save Recipe",
"importRecipeOnly": "Import Recipe Only",
"importAndDownload": "Import & Download",
"downloadMissingLoras": "Download Missing LoRAs",
"saveRecipe": "Save Recipe",
"loraCountInfo": "({existing}/{total} in library)",
@@ -682,7 +803,11 @@
"lorasCountAsc": "Least"
},
"refresh": {
"title": "Refresh recipe list"
"title": "Refresh recipe list",
"quick": "Sync Changes",
"quickTooltip": "Sync changes - quick refresh without rebuilding cache",
"full": "Rebuild Cache",
"fullTooltip": "Rebuild cache - full rescan of all recipe files"
},
"filteredByLora": "Filtered by LoRA",
"favorites": {
@@ -722,6 +847,64 @@
"failed": "Failed to repair recipe: {message}",
"missingId": "Cannot repair recipe: Missing recipe ID"
}
},
"batchImport": {
"title": "Batch Import Recipes",
"action": "Batch Import",
"urlList": "URL List",
"directory": "Directory",
"urlDescription": "Enter image URLs or local file paths (one per line). Each will be imported as a recipe.",
"directoryDescription": "Enter a directory path to import all images from that folder.",
"urlsLabel": "Image URLs or Local Paths",
"urlsPlaceholder": "https://civitai.com/images/...\nhttps://civitai.com/images/...\nC:/path/to/image.png\n...",
"urlsHint": "Enter one URL or path per line",
"directoryPath": "Directory Path",
"directoryPlaceholder": "/path/to/images/folder",
"browse": "Browse",
"recursive": "Include subdirectories",
"tagsOptional": "Tags (optional, applied to all recipes)",
"tagsPlaceholder": "Enter tags separated by commas",
"tagsHint": "Tags will be added to all imported recipes",
"skipNoMetadata": "Skip images without metadata",
"skipNoMetadataHelp": "Images without LoRA metadata will be skipped automatically.",
"start": "Start Import",
"startImport": "Start Import",
"importing": "Importing...",
"progress": "Progress",
"total": "Total",
"success": "Success",
"failed": "Failed",
"skipped": "Skipped",
"current": "Current",
"currentItem": "Current",
"preparing": "Preparing...",
"cancel": "Cancel",
"cancelImport": "Cancel",
"cancelled": "Import cancelled",
"completed": "Import completed",
"completedWithErrors": "Completed with errors",
"completedSuccess": "Successfully imported {count} recipe(s)",
"successCount": "Successful",
"failedCount": "Failed",
"skippedCount": "Skipped",
"totalProcessed": "Total processed",
"viewDetails": "View Details",
"newImport": "New Import",
"manualPathEntry": "Please enter the directory path manually. File browser is not available in this browser.",
"batchImportDirectorySelected": "Directory selected: {path}",
"batchImportManualEntryRequired": "File browser not available. Please enter the directory path manually.",
"backToParent": "Back to parent directory",
"folders": "Folders",
"folderCount": "{count} folders",
"imageFiles": "Image Files",
"images": "images",
"imageCount": "{count} images",
"selectFolder": "Select This Folder",
"errors": {
"enterUrls": "Please enter at least one URL or path",
"enterDirectory": "Please enter a directory path",
"startFailed": "Failed to start import: {message}"
}
}
},
"checkpoints": {
@@ -731,7 +914,8 @@
"diffusion_model": "Diffusion Model"
},
"contextMenu": {
"moveToOtherTypeFolder": "Move to {otherType} Folder"
"moveToOtherTypeFolder": "Move to {otherType} Folder",
"sendToWorkflow": "Send to Workflow"
}
},
"embeddings": {
@@ -744,13 +928,23 @@
"unpinSidebar": "Unpin Sidebar",
"switchToListView": "Switch to List View",
"switchToTreeView": "Switch to Tree View",
"recursiveOn": "Search subfolders",
"recursiveOff": "Search current folder only",
"recursiveOn": "Include subfolders",
"recursiveOff": "Current folder only",
"recursiveUnavailable": "Recursive search is available in tree view only",
"collapseAllDisabled": "Not available in list view",
"dragDrop": {
"unableToResolveRoot": "Unable to determine destination path for move.",
"moveUnsupported": "Move is not supported for this item."
"moveUnsupported": "Move is not supported for this item.",
"createFolderHint": "Release to create new folder",
"newFolderName": "New folder name",
"folderNameHint": "Press Enter to confirm, Escape to cancel",
"emptyFolderName": "Please enter a folder name",
"invalidFolderName": "Folder name contains invalid characters",
"noDragState": "No pending drag operation found"
},
"empty": {
"noFolders": "No folders found",
"dragHint": "Drag items here to create folders"
}
},
"statistics": {
@@ -815,6 +1009,8 @@
"earlyAccess": "Early Access",
"earlyAccessTooltip": "Early access required",
"inLibrary": "In Library",
"downloaded": "Downloaded",
"downloadedTooltip": "Previously downloaded, but it is not currently in your library.",
"alreadyInLibrary": "Already in Library",
"autoOrganizedPath": "[Auto-organized by path template]",
"errors": {
@@ -905,6 +1101,14 @@
"save": "Update Base Model",
"cancel": "Cancel"
},
"bulkDownloadMissingLoras": {
"title": "Download Missing LoRAs",
"message": "Found {uniqueCount} unique missing LoRAs (from {totalCount} total across selected recipes).",
"previewTitle": "LoRAs to download:",
"moreItems": "...and {count} more",
"note": "Files will be downloaded using default path templates. This may take a while depending on the number of LoRAs.",
"downloadButton": "Download {count} LoRA(s)"
},
"exampleAccess": {
"title": "Local Example Images",
"message": "No local example images found for this model. View options:",
@@ -938,9 +1142,9 @@
},
"proceedText": "Only proceed if you're sure this is what you want.",
"urlLabel": "Civitai Model URL:",
"urlPlaceholder": "https://civitai.com/models/649516/model-name?modelVersionId=726676",
"urlPlaceholder": "https://civitai.com/models/649516/model-name?modelVersionId=726676 or https://civitai.red/models/649516/model-name?modelVersionId=726676",
"helpText": {
"title": "Paste any Civitai model URL. Supported formats:",
"title": "Paste any Civitai model URL from civitai.com or civitai.red. Supported formats:",
"format1": "https://civitai.com/models/649516",
"format2": "https://civitai.com/models/649516?modelVersionId=726676",
"format3": "https://civitai.com/models/649516/model-name?modelVersionId=726676",
@@ -956,7 +1160,9 @@
"viewOnCivitai": "View on Civitai",
"viewOnCivitaiText": "View on Civitai",
"viewCreatorProfile": "View Creator Profile",
"openFileLocation": "Open File Location"
"openFileLocation": "Open File Location",
"sendToWorkflow": "Send to ComfyUI",
"sendToWorkflowText": "Send to ComfyUI"
},
"openFileLocation": {
"success": "File location opened successfully",
@@ -964,6 +1170,9 @@
"copied": "Path copied to clipboard: {{path}}",
"clipboardFallback": "Path: {{path}}"
},
"sendToWorkflow": {
"noFilePath": "Unable to send to ComfyUI: No file path available"
},
"metadata": {
"version": "Version",
"fileName": "File Name",
@@ -1000,6 +1209,8 @@
"cancel": "Cancel editing",
"save": "Save changes",
"addPlaceholder": "Type to add or click suggestions below",
"editWord": "Edit trigger word",
"editPlaceholder": "Edit trigger word",
"copyWord": "Copy trigger word",
"deleteWord": "Delete trigger word",
"suggestions": {
@@ -1071,17 +1282,33 @@
"days": "in {count}d"
},
"badges": {
"current": "Current Version",
"current": "Opened Version",
"currentTooltip": "This is the version you opened this modal from",
"inLibrary": "In Library",
"inLibraryTooltip": "This version exists in your local library",
"downloaded": "Downloaded",
"downloadedTooltip": "This version was downloaded before, but is not currently in your library",
"newer": "Newer Version",
"newerTooltip": "This version is newer than your latest local version",
"earlyAccess": "Early Access",
"ignored": "Ignored"
"earlyAccessTooltip": "This version currently requires Civitai early access",
"ignored": "Ignored",
"ignoredTooltip": "Update notifications are disabled for this version",
"onSiteOnly": "On-Site Only",
"onSiteOnlyTooltip": "This version is only available for on-site generation on Civitai"
},
"actions": {
"download": "Download",
"downloadTooltip": "Download this version",
"downloadEarlyAccessTooltip": "Download this early access version from Civitai",
"downloadNotAllowedTooltip": "This version is only available for on-site generation on Civitai",
"delete": "Delete",
"deleteTooltip": "Delete this local version",
"ignore": "Ignore",
"unignore": "Unignore",
"ignoreTooltip": "Ignore update notifications for this version",
"unignoreTooltip": "Resume update notifications for this version",
"viewVersionOnCivitai": "View version on Civitai",
"earlyAccessTooltip": "Requires early access purchase",
"resumeModelUpdates": "Resume updates for this model",
"ignoreModelUpdates": "Ignore updates for this model",
@@ -1221,7 +1448,9 @@
"recipeReplaced": "Recipe replaced in workflow",
"recipeFailedToSend": "Failed to send recipe to workflow",
"noMatchingNodes": "No compatible nodes available in the current workflow",
"noTargetNodeSelected": "No target node selected"
"noTargetNodeSelected": "No target node selected",
"modelUpdated": "Model updated in workflow",
"modelFailed": "Failed to update model node"
},
"nodeSelector": {
"recipe": "Recipe",
@@ -1235,6 +1464,10 @@
"opened": "Example images folder opened",
"openingFolder": "Opening example images folder",
"failedToOpen": "Failed to open example images folder",
"copiedPath": "Path copied to clipboard: {{path}}",
"clipboardFallback": "Path: {{path}}",
"copiedUri": "Link copied to clipboard: {{uri}}",
"uriClipboardFallback": "Link: {{uri}}",
"setupRequired": "Example Images Storage",
"setupDescription": "To add custom example images, you need to set a download location first.",
"setupUsage": "This path is used for both downloaded and custom example images.",
@@ -1342,7 +1575,14 @@
"showWechatQR": "Show WeChat QR Code",
"hideWechatQR": "Hide WeChat QR Code"
},
"footer": "Thank you for using LoRA Manager! ❤️"
"footer": "Thank you for using LoRA Manager! ❤️",
"supporters": {
"title": "Thank You To Our Supporters",
"subtitle": "Thanks to {count} supporters who made this project possible",
"specialThanks": "Special Thanks",
"allSupporters": "All Supporters",
"totalCount": "{count} supporters in total"
}
},
"toast": {
"general": {
@@ -1365,6 +1605,7 @@
"pleaseSelectVersion": "Please select a version",
"versionExists": "This version already exists in your library",
"downloadCompleted": "Download completed successfully",
"downloadSkippedByBaseModel": "Skipped download because base model {baseModel} is excluded",
"autoOrganizeSuccess": "Auto-organize completed successfully for {count} {type}",
"autoOrganizePartialSuccess": "Auto-organize completed with {success} moved, {failures} failed out of {total} models",
"autoOrganizeFailed": "Auto-organize failed: {error}",
@@ -1376,13 +1617,19 @@
"loadFailed": "Failed to load {modelType}s: {message}",
"refreshComplete": "Refresh complete",
"refreshFailed": "Failed to refresh recipes: {message}",
"syncComplete": "Sync complete",
"syncFailed": "Failed to sync recipes: {message}",
"updateFailed": "Failed to update recipe: {error}",
"updateError": "Error updating recipe: {message}",
"nameSaved": "Recipe \"{name}\" saved successfully",
"nameUpdated": "Recipe name updated successfully",
"tagsUpdated": "Recipe tags updated successfully",
"sourceUrlUpdated": "Source URL updated successfully",
"promptUpdated": "Prompt updated successfully",
"negativePromptUpdated": "Negative prompt updated successfully",
"promptEditorHint": "Press Enter to save, Shift+Enter for new line",
"noRecipeId": "No recipe ID available",
"sendToWorkflowFailed": "Failed to send recipe to workflow: {message}",
"copyFailed": "Error copying recipe syntax: {message}",
"noMissingLoras": "No missing LoRAs to download",
"missingLorasInfoFailed": "Failed to get information for missing LoRAs",
@@ -1410,9 +1657,20 @@
"processingError": "Processing error: {message}",
"folderBrowserError": "Error loading folder browser: {message}",
"recipeSaveFailed": "Failed to save recipe: {error}",
"recipeSaved": "Recipe saved successfully",
"importFailed": "Import failed: {message}",
"folderTreeFailed": "Failed to load folder tree",
"folderTreeError": "Error loading folder tree"
"folderTreeError": "Error loading folder tree",
"batchImportFailed": "Failed to start batch import: {message}",
"batchImportCancelling": "Cancelling batch import...",
"batchImportCancelFailed": "Failed to cancel batch import: {message}",
"batchImportNoUrls": "Please enter at least one URL or file path",
"batchImportNoDirectory": "Please enter a directory path",
"batchImportBrowseFailed": "Failed to browse directory: {message}",
"batchImportDirectorySelected": "Directory selected: {path}",
"noRecipesSelected": "No recipes selected",
"noMissingLorasInSelection": "No missing LoRAs found in selected recipes",
"noLoraRootConfigured": "No LoRA root directory configured. Please set a default LoRA root in settings."
},
"models": {
"noModelsSelected": "No models selected",
@@ -1479,6 +1737,8 @@
"mappingSaveFailed": "Failed to save base model mappings: {message}",
"downloadTemplatesUpdated": "Download path templates updated",
"downloadTemplatesFailed": "Failed to save download path templates: {message}",
"recipesPathUpdated": "Recipes storage path updated",
"recipesPathSaveFailed": "Failed to update recipes storage path: {message}",
"settingsUpdated": "Settings updated: {setting}",
"compactModeToggled": "Compact Mode {state}",
"settingSaveFailed": "Failed to save setting: {message}",
@@ -1529,8 +1789,8 @@
},
"triggerWords": {
"loadFailed": "Could not load trained words",
"tooLong": "Trigger word should not exceed 100 words",
"tooMany": "Maximum 30 trigger words allowed",
"tooLong": "Trigger word should not exceed 500 words",
"tooMany": "Maximum 100 trigger words allowed",
"alreadyExists": "This trigger word already exists",
"updateSuccess": "Trigger words updated successfully",
"updateFailed": "Failed to update trigger words",
@@ -1591,6 +1851,8 @@
"deleteFailed": "Failed to delete {type}: {message}",
"excludeSuccess": "{type} excluded successfully",
"excludeFailed": "Failed to exclude {type}: {message}",
"restoreSuccess": "{type} restored successfully",
"restoreFailed": "Failed to restore {type}: {message}",
"fileNameUpdated": "File name updated successfully",
"fileRenameFailed": "Failed to rename file: {error}",
"previewUpdated": "Preview updated successfully",
@@ -1622,6 +1884,37 @@
"moveFailed": "Failed to move item: {message}"
}
},
"doctor": {
"kicker": "System diagnostics",
"title": "Doctor",
"buttonTitle": "Run diagnostics and common fixes",
"loading": "Checking environment...",
"footer": "Export a diagnostics bundle if the issue still persists after repair.",
"summary": {
"idle": "Run a health check for settings, cache integrity, and UI consistency.",
"ok": "No active issues were found in the current environment.",
"warning": "{count} issue(s) were found. Most can be fixed directly from this panel.",
"error": "{count} issue(s) need attention before the app is fully healthy."
},
"status": {
"ok": "Healthy",
"warning": "Needs Attention",
"error": "Action Required"
},
"actions": {
"runAgain": "Run Again",
"exportBundle": "Export Bundle"
},
"toast": {
"loadFailed": "Failed to load diagnostics: {message}",
"repairSuccess": "Cache rebuild completed.",
"repairFailed": "Cache rebuild failed: {message}",
"exportSuccess": "Diagnostics bundle exported.",
"exportFailed": "Failed to export diagnostics bundle: {message}",
"conflictsResolved": "{count} filename conflict(s) resolved.",
"conflictsResolveFailed": "Failed to resolve filename conflicts: {message}"
}
},
"banners": {
"versionMismatch": {
"title": "Application Update Detected",


@@ -1,8 +1,11 @@
{
"common": {
"cancel": "Cancelar",
"confirm": "Confirmar",
"actions": {
"save": "Guardar",
"cancel": "Cancelar",
"confirm": "Confirmar",
"delete": "Eliminar",
"move": "Mover",
"refresh": "Actualizar",
@@ -11,7 +14,9 @@
"backToTop": "Volver arriba",
"settings": "Configuración",
"help": "Ayuda",
"add": "Añadir"
"add": "Añadir",
"close": "Cerrar",
"menu": "Menú"
},
"status": {
"loading": "Cargando...",
@@ -171,6 +176,9 @@
"success": "Se repararon con éxito {count} recetas.",
"cancelled": "Reparación cancelada. {count} recetas fueron reparadas.",
"error": "Error al reparar recetas: {message}"
},
"manageExcludedModels": {
"label": "Gestionar modelos excluidos"
}
},
"header": {
@@ -218,12 +226,14 @@
"presetOverwriteConfirm": "El preset \"{name}\" ya existe. ¿Sobrescribir?",
"presetNamePlaceholder": "Nombre del preajuste...",
"baseModel": "Modelo base",
"baseModelSearchPlaceholder": "Buscar modelos base...",
"modelTags": "Etiquetas (Top 20)",
"modelTypes": "Model Types",
"modelTypes": "Tipos de modelos",
"license": "Licencia",
"noCreditRequired": "Sin crédito requerido",
"allowSellingGeneratedContent": "Venta permitida",
"noTags": "Sin etiquetas",
"noBaseModelMatches": "Ningún modelo base coincide con la búsqueda actual.",
"clearAll": "Limpiar todos los filtros",
"any": "Cualquiera",
"all": "Todos",
@@ -246,6 +256,33 @@
"civitaiApiKey": "Clave API de Civitai",
"civitaiApiKeyPlaceholder": "Introduce tu clave API de Civitai",
"civitaiApiKeyHelp": "Utilizada para autenticación al descargar modelos de Civitai",
"civitaiHost": {
"label": "Host de Civitai",
"help": "Elige qué sitio de Civitai se abre al usar los enlaces de \"View on Civitai\".",
"options": {
"com": "civitai.com (solo SFW)",
"red": "civitai.red (sin restricciones)"
}
},
"downloadBackend": {
"label": "Backend de descarga",
"help": "Elige cómo se descargan los archivos del modelo. Python usa el descargador integrado. aria2 usa el proceso externo experimental de descarga.",
"options": {
"python": "Python (integrado)",
"aria2": "aria2 (experimental)"
}
},
"aria2cPath": {
"label": "Ruta de aria2c",
"help": "Ruta opcional al ejecutable aria2c. Déjalo vacío para usar aria2c desde el PATH del sistema.",
"placeholder": "Déjalo vacío para usar aria2c desde el PATH"
},
"aria2HelpLink": "Aprende a configurar el backend de descarga aria2",
"civitaiHostBanner": {
"title": "Preferencia de host de Civitai disponible",
"content": "Civitai ahora usa civitai.com para contenido SFW y civitai.red para contenido sin restricciones. Puedes cambiar en Ajustes qué sitio se abre por defecto.",
"openSettings": "Abrir ajustes"
},
"openSettingsFileLocation": {
"label": "Abrir carpeta de ajustes",
"tooltip": "Abrir la carpeta que contiene settings.json",
@@ -256,10 +293,13 @@
},
"sections": {
"contentFiltering": "Filtrado de contenido",
"downloads": "Descargas",
"videoSettings": "Configuración de video",
"layoutSettings": "Configuración de diseño",
"misc": "Varios",
"backup": "Copias de seguridad",
"folderSettings": "Raíces predeterminadas",
"recipeSettings": "Recetas",
"extraFolderPaths": "Rutas de carpetas adicionales",
"downloadPathTemplates": "Plantillas de rutas de descarga",
"priorityTags": "Etiquetas prioritarias",
@@ -287,7 +327,15 @@
"blurNsfwContent": "Difuminar contenido NSFW",
"blurNsfwContentHelp": "Difuminar imágenes de vista previa de contenido para adultos (NSFW)",
"showOnlySfw": "Mostrar solo resultados SFW",
"showOnlySfwHelp": "Filtrar todo el contenido NSFW al navegar y buscar"
"showOnlySfwHelp": "Filtrar todo el contenido NSFW al navegar y buscar",
"matureBlurThreshold": "Umbral de difuminado para contenido adulto",
"matureBlurThresholdHelp": "Establecer a partir de qué nivel de clasificación comienza el filtrado por difuminado cuando el difuminado NSFW está habilitado.",
"matureBlurThresholdOptions": {
"pg13": "PG13 y superior",
"r": "R y superior (predeterminado)",
"x": "X y superior",
"xxx": "Solo XXX"
}
},
"videoSettings": {
"autoplayOnHover": "Reproducir videos automáticamente al pasar el ratón",
@@ -311,6 +359,54 @@
"saveFailed": "No se pudieron guardar las rutas a omitir: {message}"
}
},
"backup": {
"autoEnabled": "Copias de seguridad automáticas",
"autoEnabledHelp": "Crea una instantánea local una vez al día y conserva las más recientes según la política de retención.",
"retention": "Cantidad de retención",
"retentionHelp": "Cuántas instantáneas automáticas conservar antes de eliminar las antiguas.",
"management": "Gestión de copias",
"managementHelp": "Exporta tu estado de usuario actual o restáuralo desde un archivo de copia de seguridad.",
"scopeHelp": "Incluye tu configuración, el historial de descargas y el estado de actualización de los modelos. No incluye los archivos de modelo ni las cachés que se pueden regenerar.",
"locationSummary": "Ubicación actual de la copia",
"openFolderButton": "Abrir carpeta de copias",
"openFolderSuccess": "Carpeta de copias abierta",
"openFolderFailed": "No se pudo abrir la carpeta de copias",
"locationCopied": "Ruta de la copia copiada al portapapeles: {{path}}",
"locationClipboardFallback": "Ruta de la copia: {{path}}",
"exportButton": "Exportar copia",
"exportSuccess": "Copia exportada correctamente.",
"exportFailed": "No se pudo exportar la copia: {message}",
"importButton": "Importar copia",
"importConfirm": "¿Importar esta copia y sobrescribir el estado local del usuario?",
"importSuccess": "Copia importada correctamente.",
"importFailed": "No se pudo importar la copia: {message}",
"latestSnapshot": "Última instantánea",
"latestAutoSnapshot": "Última instantánea automática",
"snapshotCount": "Instantáneas guardadas",
"noneAvailable": "Aún no hay instantáneas"
},
"downloadSkipBaseModels": {
"label": "Omitir descargas para modelos base",
"help": "Se aplica a todos los flujos de descarga. Aquí solo se pueden seleccionar modelos base compatibles.",
"searchPlaceholder": "Filtrar modelos base...",
"empty": "Ningún modelo base coincide con la búsqueda actual.",
"summary": {
"none": "Ninguno seleccionado",
"count": "{count} seleccionados"
},
"actions": {
"edit": "Editar",
"collapse": "Contraer",
"clear": "Limpiar"
},
"validation": {
"saveFailed": "No se pudieron guardar los modelos base excluidos: {message}"
}
},
"skipPreviouslyDownloadedModelVersions": {
"label": "Omitir versiones de modelos previamente descargadas",
"help": "Cuando está habilitado, LoRA Manager omitirá la descarga de una versión de modelo si el servicio de historial de descargas registra esa versión exacta como ya descargada. Aplica a todos los flujos de descarga."
},
"layoutSettings": {
"displayDensity": "Densidad de visualización",
"displayDensityOptions": {
@@ -333,6 +429,8 @@
"hover": "Mostrar al pasar el ratón"
},
"cardInfoDisplayHelp": "Elige cuándo mostrar información del modelo y botones de acción",
"showVersionOnCard": "Mostrar versión en la tarjeta",
"showVersionOnCardHelp": "Mostrar u ocultar el nombre de versión en las tarjetas de modelo",
"modelCardFooterAction": "Acción del botón de tarjeta de modelo",
"modelCardFooterActionOptions": {
"exampleImages": "Abrir imágenes de ejemplo",
@@ -359,8 +457,29 @@
"defaultUnetRootHelp": "Establecer el directorio raíz predeterminado de Diffusion Model (UNET) para descargas, importaciones y movimientos",
"defaultEmbeddingRoot": "Raíz de embedding",
"defaultEmbeddingRootHelp": "Establecer el directorio raíz predeterminado de embedding para descargas, importaciones y movimientos",
"recipesPath": "Ruta de almacenamiento de recetas",
"recipesPathHelp": "Directorio personalizado opcional para las recetas guardadas. Déjalo vacío para usar la carpeta recipes del primer directorio raíz de LoRA.",
"recipesPathPlaceholder": "/path/to/recipes",
"recipesPathMigrating": "Migrando el almacenamiento de recetas...",
"noDefault": "Sin predeterminado"
},
"extraFolderPaths": {
"title": "Rutas de carpetas adicionales",
"description": "Rutas raíz de modelos adicionales exclusivas para LoRA Manager. Cargue modelos desde ubicaciones fuera de las carpetas estándar de ComfyUI, ideal para bibliotecas grandes que de otro modo ralentizarían ComfyUI.",
"restartRequired": "Requires restart to take effect",
"modelTypes": {
"lora": "Rutas de LoRA",
"checkpoint": "Rutas de Checkpoint",
"unet": "Rutas de modelo de difusión",
"embedding": "Rutas de Embedding"
},
"pathPlaceholder": "/ruta/a/modelos/extra",
"saveSuccess": "Rutas de carpetas adicionales actualizadas. Se requiere reinicio para aplicar los cambios.",
"saveError": "Error al actualizar las rutas de carpetas adicionales: {message}",
"validation": {
"duplicatePath": "Esta ruta ya está configurada"
}
},
"priorityTags": {
"title": "Etiquetas prioritarias",
"description": "Personaliza el orden de prioridad de etiquetas para cada tipo de modelo (p. ej., character, concept, style(toon|toon_style))",
@@ -423,6 +542,21 @@
"downloadLocationHelp": "Introduce la ruta de la carpeta donde se guardarán las imágenes de ejemplo de Civitai",
"autoDownload": "Descargar automáticamente imágenes de ejemplo",
"autoDownloadHelp": "Descargar automáticamente imágenes de ejemplo para modelos que no las tengan (requiere que se establezca la ubicación de descarga)",
"openMode": "Acción al abrir imágenes de ejemplo",
"openModeHelp": "Elige si la acción se abre en el servidor, copia una ruta local asignada o lanza una URI personalizada.",
"openModeOptions": {
"system": "Abrir en el servidor",
"clipboard": "Copiar ruta local",
"uriTemplate": "Abrir URI personalizada"
},
"localRoot": "Raíz local de imágenes de ejemplo",
"localRootHelp": "Raíz local u montada opcional que refleja el directorio de imágenes de ejemplo del servidor. Si se deja en blanco, se reutiliza la ruta del servidor.",
"localRootPlaceholder": "Ejemplo: /Volumes/ComfyUI/example_images",
"uriTemplate": "Abrir plantilla de URI",
"uriTemplateHelp": "Usa un enlace profundo personalizado, como un URI de archivo o un enlace de Shortcuts.",
"uriTemplatePlaceholder": "Ejemplo: shortcuts://run-shortcut?name=Open%20Finder&input=text&text={{encoded_local_path}}",
"uriTemplatePlaceholders": "Marcadores disponibles: {{local_path}}, {{encoded_local_path}}, {{relative_path}}, {{encoded_relative_path}}, {{file_uri}}, {{encoded_file_uri}}",
"openModeWikiLink": "Más información sobre los modos de apertura remota",
"optimizeImages": "Optimizar imágenes descargadas",
"optimizeImagesHelp": "Optimizar imágenes de ejemplo para reducir el tamaño del archivo y mejorar la velocidad de carga (se preservarán los metadatos)",
"download": "Descargar",
@@ -485,23 +619,6 @@
"proxyPassword": "Contraseña (opcional)",
"proxyPasswordPlaceholder": "contraseña",
"proxyPasswordHelp": "Contraseña para autenticación de proxy (si es necesario)"
},
"extraFolderPaths": {
"title": "Rutas de carpetas adicionales",
"help": "Agregue carpetas de modelos adicionales fuera de las rutas estándar de ComfyUI. Estas rutas se almacenan por separado y se escanean junto con las carpetas predeterminadas.",
"description": "Configure carpetas adicionales para escanear modelos. Estas rutas son específicas de LoRA Manager y se fusionarán con las rutas predeterminadas de ComfyUI.",
"modelTypes": {
"lora": "Rutas de LoRA",
"checkpoint": "Rutas de Checkpoint",
"unet": "Rutas de modelo de difusión",
"embedding": "Rutas de Embedding"
},
"pathPlaceholder": "/ruta/a/modelos/extra",
"saveSuccess": "Rutas de carpetas adicionales actualizadas.",
"saveError": "Error al actualizar las rutas de carpetas adicionales: {message}",
"validation": {
"duplicatePath": "Esta ruta ya está configurada"
}
}
},
"loras": {
@@ -570,7 +687,8 @@
"autoOrganize": "Auto-organizar seleccionados",
"skipMetadataRefresh": "Omitir actualización de metadatos para seleccionados",
"resumeMetadataRefresh": "Reanudar actualización de metadatos para seleccionados",
"deleteAll": "Eliminar todos los modelos",
"deleteAll": "Eliminar seleccionados",
"downloadMissingLoras": "Descargar LoRAs faltantes",
"clear": "Limpiar selección",
"skipMetadataRefreshCount": "Omitir{count} modelos",
"resumeMetadataRefreshCount": "Reanudar{count} modelos",
@@ -600,6 +718,7 @@
"moveToFolder": "Mover a carpeta",
"repairMetadata": "Reparar metadatos",
"excludeModel": "Excluir modelo",
"restoreModel": "Restaurar modelo",
"deleteModel": "Eliminar modelo",
"shareRecipe": "Compartir receta",
"viewAllLoras": "Ver todos los LoRAs",
@@ -641,6 +760,8 @@
"root": "Raíz",
"browseFolders": "Explorar carpetas:",
"downloadAndSaveRecipe": "Descargar y guardar receta",
"importRecipeOnly": "Importar solo la receta",
"importAndDownload": "Importar y descargar",
"downloadMissingLoras": "Descargar LoRAs faltantes",
"saveRecipe": "Guardar receta",
"loraCountInfo": "({existing}/{total} en la biblioteca)",
@@ -682,7 +803,11 @@
"lorasCountAsc": "Menos"
},
"refresh": {
"title": "Actualizar lista de recetas"
"title": "Actualizar lista de recetas",
"quick": "Sincronizar cambios",
"quickTooltip": "Sincronizar cambios - actualización rápida sin reconstruir caché",
"full": "Reconstruir caché",
"fullTooltip": "Reconstruir caché - reescaneo completo de todos los archivos de recetas"
},
"filteredByLora": "Filtrado por LoRA",
"favorites": {
@@ -722,6 +847,64 @@
"failed": "Error al reparar la receta: {message}",
"missingId": "No se puede reparar la receta: falta el ID de la receta"
}
},
"batchImport": {
"title": "Batch Import Recipes",
"action": "Batch Import",
"urlList": "URL List",
"directory": "Directory",
"urlDescription": "Enter image URLs or local file paths (one per line). Each will be imported as a recipe.",
"directoryDescription": "Enter a directory path to import all images from that folder.",
"urlsLabel": "Image URLs or Local Paths",
"urlsPlaceholder": "https://civitai.com/images/...\nhttps://civitai.com/images/...\nC:/path/to/image.png\n...",
"urlsHint": "Enter one URL or path per line",
"directoryPath": "Directory Path",
"directoryPlaceholder": "/path/to/images/folder",
"browse": "Browse",
"recursive": "Include subdirectories",
"tagsOptional": "Tags (optional, applied to all recipes)",
"tagsPlaceholder": "Enter tags separated by commas",
"tagsHint": "Tags will be added to all imported recipes",
"skipNoMetadata": "Skip images without metadata",
"skipNoMetadataHelp": "Images without LoRA metadata will be skipped automatically.",
"start": "Start Import",
"startImport": "Start Import",
"importing": "Importing...",
"progress": "Progress",
"total": "Total",
"success": "Success",
"failed": "Failed",
"skipped": "Skipped",
"current": "Current",
"currentItem": "Current",
"preparing": "Preparing...",
"cancel": "Cancel",
"cancelImport": "Cancel",
"cancelled": "Import cancelled",
"completed": "Import completed",
"completedWithErrors": "Completed with errors",
"completedSuccess": "Successfully imported {count} recipe(s)",
"successCount": "Successful",
"failedCount": "Failed",
"skippedCount": "Skipped",
"totalProcessed": "Total processed",
"viewDetails": "View Details",
"newImport": "New Import",
"manualPathEntry": "Please enter the directory path manually. File browser is not available in this browser.",
"batchImportDirectorySelected": "Directory selected: {path}",
"batchImportManualEntryRequired": "File browser not available. Please enter the directory path manually.",
"backToParent": "Back to parent directory",
"folders": "Folders",
"folderCount": "{count} folders",
"imageFiles": "Image Files",
"images": "images",
"imageCount": "{count} images",
"selectFolder": "Select This Folder",
"errors": {
"enterUrls": "Please enter at least one URL or path",
"enterDirectory": "Please enter a directory path",
"startFailed": "Failed to start import: {message}"
}
}
},
"checkpoints": {
@@ -731,7 +914,8 @@
"diffusion_model": "Diffusion Model"
},
"contextMenu": {
"moveToOtherTypeFolder": "Mover a la carpeta {otherType}"
"moveToOtherTypeFolder": "Mover a la carpeta {otherType}",
"sendToWorkflow": "Enviar al flujo de trabajo"
}
},
"embeddings": {
@@ -744,13 +928,23 @@
"unpinSidebar": "Desfijar barra lateral",
"switchToListView": "Cambiar a vista de lista",
"switchToTreeView": "Cambiar a vista de árbol",
"recursiveOn": "Buscar en subcarpetas",
"recursiveOff": "Buscar solo en la carpeta actual",
"recursiveOn": "Incluir subcarpetas",
"recursiveOff": "Solo carpeta actual",
"recursiveUnavailable": "La búsqueda recursiva solo está disponible en la vista en árbol",
"collapseAllDisabled": "No disponible en vista de lista",
"dragDrop": {
"unableToResolveRoot": "No se puede determinar la ruta de destino para el movimiento.",
"moveUnsupported": "Move is not supported for this item."
"moveUnsupported": "El movimiento no es compatible con este elemento.",
"createFolderHint": "Suelta para crear una nueva carpeta",
"newFolderName": "Nombre de la nueva carpeta",
"folderNameHint": "Presiona Enter para confirmar, Escape para cancelar",
"emptyFolderName": "Por favor, introduce un nombre de carpeta",
"invalidFolderName": "El nombre de la carpeta contiene caracteres no válidos",
"noDragState": "No se encontró ninguna operación de arrastre pendiente"
},
"empty": {
"noFolders": "No se encontraron carpetas",
"dragHint": "Arrastra elementos aquí para crear carpetas"
}
},
"statistics": {
@@ -815,6 +1009,8 @@
"earlyAccess": "Acceso temprano",
"earlyAccessTooltip": "Acceso temprano requerido",
"inLibrary": "En la biblioteca",
"downloaded": "Descargado",
"downloadedTooltip": "Descargado anteriormente, pero actualmente no está en tu biblioteca.",
"alreadyInLibrary": "Ya en la biblioteca",
"autoOrganizedPath": "[Auto-organizado por plantilla de ruta]",
"errors": {
@@ -905,6 +1101,14 @@
"save": "Actualizar modelo base",
"cancel": "Cancelar"
},
"bulkDownloadMissingLoras": {
"title": "Descargar LoRAs faltantes",
"message": "Se encontraron {uniqueCount} LoRAs faltantes únicos (de {totalCount} en total entre las recetas seleccionadas).",
"previewTitle": "LoRAs para descargar:",
"moreItems": "...y {count} más",
"note": "Los archivos se descargarán usando las plantillas de ruta predeterminadas. Esto puede tomar un tiempo dependiendo del número de LoRAs.",
"downloadButton": "Descargar {count} LoRA(s)"
},
"exampleAccess": {
"title": "Imágenes de ejemplo locales",
"message": "No se encontraron imágenes de ejemplo locales para este modelo. Opciones de visualización:",
@@ -956,7 +1160,9 @@
"viewOnCivitai": "Ver en Civitai",
"viewOnCivitaiText": "Ver en Civitai",
"viewCreatorProfile": "Ver perfil del creador",
"openFileLocation": "Abrir ubicación del archivo"
"openFileLocation": "Abrir ubicación del archivo",
"sendToWorkflow": "Enviar a ComfyUI",
"sendToWorkflowText": "Enviar a ComfyUI"
},
"openFileLocation": {
"success": "Ubicación del archivo abierta exitosamente",
@@ -964,6 +1170,9 @@
"copied": "Ruta copiada al portapapeles: {{path}}",
"clipboardFallback": "Ruta: {{path}}"
},
"sendToWorkflow": {
"noFilePath": "No se puede enviar a ComfyUI: no hay ruta de archivo disponible"
},
"metadata": {
"version": "Versión",
"fileName": "Nombre de archivo",
@@ -1000,6 +1209,8 @@
"cancel": "Cancelar edición",
"save": "Guardar cambios",
"addPlaceholder": "Escribe para añadir o haz clic en sugerencias de abajo",
"editWord": "Editar palabra de activación",
"editPlaceholder": "Editar palabra de activación",
"copyWord": "Copiar palabra clave",
"deleteWord": "Eliminar palabra clave",
"suggestions": {
@@ -1071,17 +1282,33 @@
"days": "en {count}d"
},
"badges": {
"current": "Versión actual",
"current": "Versión abierta",
"currentTooltip": "Es la versión con la que abriste este modal",
"inLibrary": "En la biblioteca",
"inLibraryTooltip": "Esta versión existe en tu biblioteca local",
"downloaded": "Descargado",
"downloadedTooltip": "Esta versión se descargó antes, pero ahora no está en tu biblioteca",
"newer": "Versión más reciente",
"newerTooltip": "Esta versión es más reciente que tu última versión local",
"earlyAccess": "Acceso temprano",
"ignored": "Ignorada"
"earlyAccessTooltip": "Esta versión requiere actualmente acceso temprano de Civitai",
"ignored": "Ignorada",
"ignoredTooltip": "Las notificaciones de actualización están desactivadas para esta versión",
"onSiteOnly": "Solo en Sitio",
"onSiteOnlyTooltip": "Esta versión solo está disponible para generación en el sitio de Civitai"
},
"actions": {
"download": "Descargar",
"downloadTooltip": "Descargar esta versión",
"downloadEarlyAccessTooltip": "Descargar esta versión de acceso temprano desde Civitai",
"downloadNotAllowedTooltip": "Esta versión solo está disponible para generación en el sitio de Civitai",
"delete": "Eliminar",
"deleteTooltip": "Eliminar esta versión local",
"ignore": "Ignorar",
"unignore": "Dejar de ignorar",
"ignoreTooltip": "Ignorar las notificaciones de actualización de esta versión",
"unignoreTooltip": "Reanudar las notificaciones de actualización de esta versión",
"viewVersionOnCivitai": "Ver versión en Civitai",
"earlyAccessTooltip": "Requiere compra de acceso temprano",
"resumeModelUpdates": "Reanudar actualizaciones para este modelo",
"ignoreModelUpdates": "Ignorar actualizaciones para este modelo",
@@ -1221,7 +1448,9 @@
"recipeReplaced": "Receta reemplazada en el flujo de trabajo",
"recipeFailedToSend": "Error al enviar receta al flujo de trabajo",
"noMatchingNodes": "No hay nodos compatibles disponibles en el flujo de trabajo actual",
"noTargetNodeSelected": "No se ha seleccionado ningún nodo de destino"
"noTargetNodeSelected": "No se ha seleccionado ningún nodo de destino",
"modelUpdated": "Modelo actualizado en el flujo de trabajo",
"modelFailed": "Error al actualizar nodo de modelo"
},
"nodeSelector": {
"recipe": "Receta",
@@ -1235,6 +1464,10 @@
"opened": "Carpeta de imágenes de ejemplo abierta",
"openingFolder": "Abriendo carpeta de imágenes de ejemplo",
"failedToOpen": "Error al abrir carpeta de imágenes de ejemplo",
"copiedPath": "Ruta copiada al portapapeles: {{path}}",
"clipboardFallback": "Ruta: {{path}}",
"copiedUri": "Enlace copiado al portapapeles: {{uri}}",
"uriClipboardFallback": "Enlace: {{uri}}",
"setupRequired": "Almacenamiento de imágenes de ejemplo",
"setupDescription": "Para agregar imágenes de ejemplo personalizadas, primero necesita establecer una ubicación de descarga.",
"setupUsage": "Esta ruta se utiliza tanto para imágenes de ejemplo descargadas como personalizadas.",
@@ -1342,7 +1575,14 @@
"showWechatQR": "Mostrar código QR de WeChat",
"hideWechatQR": "Ocultar código QR de WeChat"
},
"footer": "¡Gracias por usar el gestor de LoRA! ❤️"
"footer": "¡Gracias por usar el gestor de LoRA! ❤️",
"supporters": {
"title": "Gracias a todos los seguidores",
"subtitle": "Gracias a {count} seguidores que hicieron este proyecto posible",
"specialThanks": "Agradecimientos especiales",
"allSupporters": "Todos los seguidores",
"totalCount": "{count} seguidores en total"
}
},
"toast": {
"general": {
@@ -1365,6 +1605,7 @@
"pleaseSelectVersion": "Por favor selecciona una versión",
"versionExists": "Esta versión ya existe en tu biblioteca",
"downloadCompleted": "Descarga completada exitosamente",
"downloadSkippedByBaseModel": "Descarga omitida porque el modelo base {baseModel} está excluido",
"autoOrganizeSuccess": "Auto-organización completada exitosamente para {count} {type}",
"autoOrganizePartialSuccess": "Auto-organización completada con {success} movidos, {failures} fallidos de un total de {total} modelos",
"autoOrganizeFailed": "Auto-organización fallida: {error}",
@@ -1376,13 +1617,19 @@
"loadFailed": "Error al cargar {modelType}s: {message}",
"refreshComplete": "Actualización completa",
"refreshFailed": "Error al actualizar recetas: {message}",
"syncComplete": "Sincronización completa",
"syncFailed": "Error al sincronizar recetas: {message}",
"updateFailed": "Error al actualizar receta: {error}",
"updateError": "Error actualizando receta: {message}",
"nameSaved": "Receta \"{name}\" guardada exitosamente",
"nameUpdated": "Nombre de receta actualizado exitosamente",
"tagsUpdated": "Etiquetas de receta actualizadas exitosamente",
"sourceUrlUpdated": "URL de origen actualizada exitosamente",
"promptUpdated": "Prompt actualizado exitosamente",
"negativePromptUpdated": "Prompt negativo actualizado exitosamente",
"promptEditorHint": "Presiona Enter para guardar, Shift+Enter para nueva línea",
"noRecipeId": "No hay ID de receta disponible",
"sendToWorkflowFailed": "Error al enviar la receta al flujo de trabajo: {message}",
"copyFailed": "Error copiando sintaxis de receta: {message}",
"noMissingLoras": "No hay LoRAs faltantes para descargar",
"missingLorasInfoFailed": "Error al obtener información de LoRAs faltantes",
@@ -1410,9 +1657,20 @@
"processingError": "Error de procesamiento: {message}",
"folderBrowserError": "Error cargando explorador de carpetas: {message}",
"recipeSaveFailed": "Error al guardar receta: {error}",
"recipeSaved": "Recipe saved successfully",
"importFailed": "Importación falló: {message}",
"folderTreeFailed": "Error al cargar árbol de carpetas",
"folderTreeError": "Error cargando árbol de carpetas"
"folderTreeError": "Error cargando árbol de carpetas",
"batchImportFailed": "Failed to start batch import: {message}",
"batchImportCancelling": "Cancelling batch import...",
"batchImportCancelFailed": "Failed to cancel batch import: {message}",
"batchImportNoUrls": "Please enter at least one URL or file path",
"batchImportNoDirectory": "Please enter a directory path",
"batchImportBrowseFailed": "Failed to browse directory: {message}",
"batchImportDirectorySelected": "Directory selected: {path}",
"noRecipesSelected": "No se han seleccionado recetas",
"noMissingLorasInSelection": "No se encontraron LoRAs faltantes en las recetas seleccionadas",
"noLoraRootConfigured": "No se ha configurado el directorio raíz de LoRA. Por favor, establezca un directorio raíz de LoRA predeterminado en la configuración."
},
"models": {
"noModelsSelected": "No hay modelos seleccionados",
@@ -1479,6 +1737,8 @@
"mappingSaveFailed": "Error al guardar mapeos de modelo base: {message}",
"downloadTemplatesUpdated": "Plantillas de rutas de descarga actualizadas",
"downloadTemplatesFailed": "Error al guardar plantillas de rutas de descarga: {message}",
"recipesPathUpdated": "Ruta de almacenamiento de recetas actualizada",
"recipesPathSaveFailed": "Error al actualizar la ruta de almacenamiento de recetas: {message}",
"settingsUpdated": "Configuración actualizada: {setting}",
"compactModeToggled": "Modo compacto {state}",
"settingSaveFailed": "Error al guardar configuración: {message}",
@@ -1529,8 +1789,8 @@
},
"triggerWords": {
"loadFailed": "No se pudieron cargar palabras entrenadas",
"tooLong": "La palabra clave no debe exceder 100 palabras",
"tooMany": "Máximo 30 palabras clave permitidas",
"tooLong": "La palabra clave no debe exceder 500 palabras",
"tooMany": "Máximo 100 palabras clave permitidas",
"alreadyExists": "Esta palabra clave ya existe",
"updateSuccess": "Palabras clave actualizadas exitosamente",
"updateFailed": "Error al actualizar palabras clave",
@@ -1591,6 +1851,8 @@
"deleteFailed": "Error al eliminar {type}: {message}",
"excludeSuccess": "{type} excluido exitosamente",
"excludeFailed": "Error al excluir {type}: {message}",
"restoreSuccess": "{type} restaurado correctamente",
"restoreFailed": "No se pudo restaurar {type}: {message}",
"fileNameUpdated": "Nombre de archivo actualizado exitosamente",
"fileRenameFailed": "Error al renombrar archivo: {error}",
"previewUpdated": "Vista previa actualizada exitosamente",
@@ -1622,6 +1884,37 @@
"moveFailed": "Failed to move item: {message}"
}
},
"doctor": {
"kicker": "Diagnósticos del sistema",
"title": "Doctor",
"buttonTitle": "Ejecutar diagnósticos y correcciones comunes",
"loading": "Comprobando el entorno...",
"footer": "Exporta un paquete de diagnóstico si el problema persiste después de la reparación.",
"summary": {
"idle": "Ejecuta una comprobación del estado de la configuración, la integridad de la caché y la coherencia de la interfaz.",
"ok": "No se encontraron problemas activos en el entorno actual.",
"warning": "Se encontraron {count} problema(s). La mayoría se puede solucionar directamente desde este panel.",
"error": "Se encontraron {count} problema(s). Deben atenderse antes de que la aplicación esté completamente saludable."
},
"status": {
"ok": "Saludable",
"warning": "Requiere atención",
"error": "Se requiere acción"
},
"actions": {
"runAgain": "Ejecutar de nuevo",
"exportBundle": "Exportar paquete"
},
"toast": {
"loadFailed": "Error al cargar los diagnósticos: {message}",
"repairSuccess": "Reconstrucción de caché completada.",
"repairFailed": "Error al reconstruir la caché: {message}",
"exportSuccess": "Paquete de diagnósticos exportado.",
"exportFailed": "Error al exportar el paquete de diagnósticos: {message}",
"conflictsResolved": "{count} conflicto(s) de nombre de archivo resuelto(s).",
"conflictsResolveFailed": "Error al resolver conflictos de nombre de archivo: {message}"
}
},
"banners": {
"versionMismatch": {
"title": "Actualización de la aplicación detectada",


@@ -1,8 +1,11 @@
{
"common": {
"cancel": "Annuler",
"confirm": "Confirmer",
"actions": {
"save": "Enregistrer",
"cancel": "Annuler",
"confirm": "Confirmer",
"delete": "Supprimer",
"move": "Déplacer",
"refresh": "Actualiser",
@@ -11,7 +14,9 @@
"backToTop": "Retour en haut",
"settings": "Paramètres",
"help": "Aide",
"add": "Ajouter"
"add": "Ajouter",
"close": "Fermer",
"menu": "Menu"
},
"status": {
"loading": "Chargement...",
@@ -171,6 +176,9 @@
"success": "{count} recettes réparées avec succès.",
"cancelled": "Réparation annulée. {count} recettes ont été réparées.",
"error": "Échec de la réparation des recettes : {message}"
},
"manageExcludedModels": {
"label": "Gérer les modèles exclus"
}
},
"header": {
@@ -218,12 +226,14 @@
"presetOverwriteConfirm": "Le préréglage \"{name}\" existe déjà. Remplacer?",
"presetNamePlaceholder": "Nom du préréglage...",
"baseModel": "Modèle de base",
"baseModelSearchPlaceholder": "Rechercher des modèles de base...",
"modelTags": "Tags (Top 20)",
"modelTypes": "Model Types",
"modelTypes": "Types de modèles",
"license": "Licence",
"noCreditRequired": "Crédit non requis",
"allowSellingGeneratedContent": "Vente autorisée",
"noTags": "Aucun tag",
"noBaseModelMatches": "Aucun modèle de base ne correspond à la recherche actuelle.",
"clearAll": "Effacer tous les filtres",
"any": "N'importe quel",
"all": "Tous",
@@ -246,6 +256,33 @@
"civitaiApiKey": "Clé API Civitai",
"civitaiApiKeyPlaceholder": "Entrez votre clé API Civitai",
"civitaiApiKeyHelp": "Utilisée pour l'authentification lors du téléchargement de modèles depuis Civitai",
"civitaiHost": {
"label": "Hôte Civitai",
"help": "Choisissez quel site Civitai s'ouvre lorsque vous utilisez les liens « View on Civitai ».",
"options": {
"com": "civitai.com (SFW uniquement)",
"red": "civitai.red (sans restriction)"
}
},
"downloadBackend": {
"label": "Moteur de téléchargement",
"help": "Choisissez comment les fichiers de modèles sont téléchargés. Python utilise le téléchargeur intégré. aria2 utilise le processus externe expérimental de téléchargement.",
"options": {
"python": "Python (intégré)",
"aria2": "aria2 (expérimental)"
}
},
"aria2cPath": {
"label": "Chemin vers aria2c",
"help": "Chemin facultatif vers lexécutable aria2c. Laissez vide pour utiliser aria2c depuis le PATH système.",
"placeholder": "Laisser vide pour utiliser aria2c depuis le PATH"
},
"aria2HelpLink": "Apprenez à configurer le backend de téléchargement aria2",
"civitaiHostBanner": {
"title": "Préférence dhôte Civitai disponible",
"content": "Civitai utilise désormais civitai.com pour le contenu SFW et civitai.red pour le contenu sans restriction. Vous pouvez modifier dans les paramètres le site ouvert par défaut.",
"openSettings": "Ouvrir les paramètres"
},
"openSettingsFileLocation": {
"label": "Ouvrir le dossier des paramètres",
"tooltip": "Ouvrir le dossier contenant settings.json",
@@ -256,10 +293,13 @@
},
"sections": {
"contentFiltering": "Filtrage du contenu",
"downloads": "Téléchargements",
"videoSettings": "Paramètres vidéo",
"layoutSettings": "Paramètres d'affichage",
"misc": "Divers",
"backup": "Sauvegardes",
"folderSettings": "Racines par défaut",
"recipeSettings": "Recipes",
"extraFolderPaths": "Chemins de dossiers supplémentaires",
"downloadPathTemplates": "Modèles de chemin de téléchargement",
"priorityTags": "Étiquettes prioritaires",
@@ -287,7 +327,15 @@
"blurNsfwContent": "Flouter le contenu NSFW",
"blurNsfwContentHelp": "Flouter les images d'aperçu de contenu pour adultes (NSFW)",
"showOnlySfw": "Afficher uniquement les résultats SFW",
"showOnlySfwHelp": "Filtrer tout le contenu NSFW lors de la navigation et de la recherche"
"showOnlySfwHelp": "Filtrer tout le contenu NSFW lors de la navigation et de la recherche",
"matureBlurThreshold": "Seuil de floutage pour contenu adulte",
"matureBlurThresholdHelp": "Définir à partir de quel niveau de classification le floutage s'applique lorsque le floutage NSFW est activé.",
"matureBlurThresholdOptions": {
"pg13": "PG13 et plus",
"r": "R et plus (par défaut)",
"x": "X et plus",
"xxx": "XXX uniquement"
}
},
"videoSettings": {
"autoplayOnHover": "Lecture automatique vidéo au survol",
@@ -311,6 +359,54 @@
"saveFailed": "Impossible d'enregistrer les chemins à ignorer : {message}"
}
},
"backup": {
"autoEnabled": "Sauvegardes automatiques",
"autoEnabledHelp": "Crée un instantané local une fois par jour et conserve les plus récents selon la politique de rétention.",
"retention": "Nombre de rétention",
"retentionHelp": "Combien d'instantanés automatiques conserver avant de supprimer les plus anciens.",
"management": "Gestion des sauvegardes",
"managementHelp": "Exporte l'état actuel de l'utilisateur ou restaure-le depuis une archive de sauvegarde.",
"scopeHelp": "Inclut vos paramètres, l'historique des téléchargements et l'état des mises à jour des modèles. Les fichiers de modèle et les caches régénérables ne sont pas inclus.",
"locationSummary": "Emplacement actuel des sauvegardes",
"openFolderButton": "Ouvrir le dossier de sauvegarde",
"openFolderSuccess": "Dossier de sauvegarde ouvert",
"openFolderFailed": "Impossible d'ouvrir le dossier de sauvegarde",
"locationCopied": "Chemin de sauvegarde copié dans le presse-papiers : {{path}}",
"locationClipboardFallback": "Chemin de sauvegarde : {{path}}",
"exportButton": "Exporter la sauvegarde",
"exportSuccess": "Sauvegarde exportée avec succès.",
"exportFailed": "Échec de l'export de la sauvegarde : {message}",
"importButton": "Importer la sauvegarde",
"importConfirm": "Importer cette sauvegarde et écraser l'état local de l'utilisateur ?",
"importSuccess": "Sauvegarde importée avec succès.",
"importFailed": "Échec de l'import de la sauvegarde : {message}",
"latestSnapshot": "Dernier instantané",
"latestAutoSnapshot": "Dernier instantané automatique",
"snapshotCount": "Instantanés enregistrés",
"noneAvailable": "Aucun instantané pour le moment"
},
"downloadSkipBaseModels": {
"label": "Ignorer les téléchargements pour certains modèles de base",
"help": "Sapplique à tous les flux de téléchargement. Seuls les modèles de base pris en charge peuvent être sélectionnés ici.",
"searchPlaceholder": "Filtrer les modèles de base...",
"empty": "Aucun modèle de base ne correspond à la recherche actuelle.",
"summary": {
"none": "Aucune sélection",
"count": "{count} sélectionnés"
},
"actions": {
"edit": "Modifier",
"collapse": "Réduire",
"clear": "Effacer"
},
"validation": {
"saveFailed": "Impossible denregistrer les modèles de base exclus : {message}"
}
},
"skipPreviouslyDownloadedModelVersions": {
"label": "Ignorer les versions de modèles précédemment téléchargées",
"help": "Lorsque activé, LoRA Manager ignorera le téléchargement d'une version de modèle si le service d'historique des téléchargements enregistre cette version exacte comme déjà téléchargée. S'applique à tous les flux de téléchargement."
},
"layoutSettings": {
"displayDensity": "Densité d'affichage",
"displayDensityOptions": {
@@ -333,6 +429,8 @@
"hover": "Révéler au survol"
},
"cardInfoDisplayHelp": "Choisissez quand afficher les informations du modèle et les boutons d'action",
"showVersionOnCard": "Afficher la version sur la carte",
"showVersionOnCardHelp": "Afficher ou masquer le nom de version sur les cartes de modèle",
"modelCardFooterAction": "Action du bouton de carte de modèle",
"modelCardFooterActionOptions": {
"exampleImages": "Ouvrir les images d'exemple",
@@ -359,8 +457,29 @@
"defaultUnetRootHelp": "Définir le répertoire racine Diffusion Model (UNET) par défaut pour les téléchargements, imports et déplacements",
"defaultEmbeddingRoot": "Racine Embedding",
"defaultEmbeddingRootHelp": "Définir le répertoire racine embedding par défaut pour les téléchargements, imports et déplacements",
"recipesPath": "Recipes Storage Path",
"recipesPathHelp": "Optional custom directory for stored recipes. Leave empty to use the first LoRA root's recipes folder.",
"recipesPathPlaceholder": "/path/to/recipes",
"recipesPathMigrating": "Migrating recipes storage...",
"noDefault": "Aucun par défaut"
},
"extraFolderPaths": {
"title": "Chemins de dossiers supplémentaires",
"description": "Chemins racine de modèles supplémentaires exclusifs à LoRA Manager. Chargez des modèles depuis des emplacements en dehors des dossiers standard de ComfyUI, idéal pour les grandes bibliothèques qui ralentiraient autrement ComfyUI.",
"restartRequired": "Requires restart to take effect",
"modelTypes": {
"lora": "Chemins LoRA",
"checkpoint": "Chemins Checkpoint",
"unet": "Chemins de modèle de diffusion",
"embedding": "Chemins Embedding"
},
"pathPlaceholder": "/chemin/vers/modèles/supplémentaires",
"saveSuccess": "Chemins de dossiers supplémentaires mis à jour. Redémarrage requis pour appliquer les changements.",
"saveError": "Échec de la mise à jour des chemins de dossiers supplémentaires: {message}",
"validation": {
"duplicatePath": "Ce chemin est déjà configuré"
}
},
"priorityTags": {
"title": "Étiquettes prioritaires",
"description": "Personnalisez l'ordre de priorité des étiquettes pour chaque type de modèle (par ex. : character, concept, style(toon|toon_style))",
@@ -423,6 +542,21 @@
"downloadLocationHelp": "Entrez le chemin du dossier où les images d'exemple de Civitai seront sauvegardées",
"autoDownload": "Téléchargement automatique des images d'exemple",
"autoDownloadHelp": "Télécharger automatiquement les images d'exemple pour les modèles qui n'en ont pas (nécessite que l'emplacement de téléchargement soit défini)",
"openMode": "Action douverture des images dexemple",
"openModeHelp": "Choisissez si laction souvre sur le serveur, copie un chemin local mappé ou lance une URI personnalisée.",
"openModeOptions": {
"system": "Ouvrir sur le serveur",
"clipboard": "Copier le chemin local",
"uriTemplate": "Ouvrir une URI personnalisée"
},
"localRoot": "Racine locale des images dexemple",
"localRootHelp": "Racine locale ou montée facultative qui reflète le répertoire des images dexemple du serveur. Si vide, le chemin du serveur est réutilisé.",
"localRootPlaceholder": "Exemple : /Volumes/ComfyUI/example_images",
"uriTemplate": "Ouvrir le modèle dURI",
"uriTemplateHelp": "Utilisez un lien profond personnalisé, tel quune URI de fichier ou un lien Shortcuts.",
"uriTemplatePlaceholder": "Exemple : shortcuts://run-shortcut?name=Open%20Finder&input=text&text={{encoded_local_path}}",
"uriTemplatePlaceholders": "Paramètres disponibles : {{local_path}}, {{encoded_local_path}}, {{relative_path}}, {{encoded_relative_path}}, {{file_uri}}, {{encoded_file_uri}}",
"openModeWikiLink": "En savoir plus sur les modes d'ouverture à distance",
"optimizeImages": "Optimiser les images téléchargées",
"optimizeImagesHelp": "Optimiser les images d'exemple pour réduire la taille du fichier et améliorer la vitesse de chargement (les métadonnées seront préservées)",
"download": "Télécharger",
@@ -485,23 +619,6 @@
"proxyPassword": "Mot de passe (optionnel)",
"proxyPasswordPlaceholder": "mot_de_passe",
"proxyPasswordHelp": "Mot de passe pour l'authentification proxy (si nécessaire)"
},
"extraFolderPaths": {
"title": "Chemins de dossiers supplémentaires",
"help": "Ajoutez des dossiers de modèles supplémentaires en dehors des chemins standard de ComfyUI. Ces chemins sont stockés séparément et analysés aux côtés des dossiers par défaut.",
"description": "Configurez des dossiers supplémentaires pour l'analyse de modèles. Ces chemins sont spécifiques à LoRA Manager et seront fusionnés avec les chemins par défaut de ComfyUI.",
"modelTypes": {
"lora": "Chemins LoRA",
"checkpoint": "Chemins Checkpoint",
"unet": "Chemins de modèle de diffusion",
"embedding": "Chemins Embedding"
},
"pathPlaceholder": "/chemin/vers/modèles/supplémentaires",
"saveSuccess": "Chemins de dossiers supplémentaires mis à jour.",
"saveError": "Échec de la mise à jour des chemins de dossiers supplémentaires: {message}",
"validation": {
"duplicatePath": "Ce chemin est déjà configuré"
}
}
},
"loras": {
@@ -570,7 +687,8 @@
"autoOrganize": "Auto-organiser la sélection",
"skipMetadataRefresh": "Ignorer l'actualisation des métadonnées pour la sélection",
"resumeMetadataRefresh": "Reprendre l'actualisation des métadonnées pour la sélection",
"deleteAll": "Supprimer tous les modèles",
"deleteAll": "Supprimer la sélection",
"downloadMissingLoras": "Télécharger les LoRAs manquants",
"clear": "Effacer la sélection",
"skipMetadataRefreshCount": "Ignorer{count} modèles",
"resumeMetadataRefreshCount": "Reprendre{count} modèles",
@@ -600,6 +718,7 @@
"moveToFolder": "Déplacer vers un dossier",
"repairMetadata": "Réparer les métadonnées",
"excludeModel": "Exclure le modèle",
"restoreModel": "Restaurer le modèle",
"deleteModel": "Supprimer le modèle",
"shareRecipe": "Partager la recipe",
"viewAllLoras": "Voir tous les LoRAs",
@@ -641,6 +760,8 @@
"root": "Racine",
"browseFolders": "Parcourir les dossiers :",
"downloadAndSaveRecipe": "Télécharger et sauvegarder la recipe",
"importRecipeOnly": "Importer uniquement la recette",
"importAndDownload": "Importer et télécharger",
"downloadMissingLoras": "Télécharger les LoRAs manquants",
"saveRecipe": "Sauvegarder la recipe",
"loraCountInfo": "({existing}/{total} dans la bibliothèque)",
@@ -682,7 +803,11 @@
"lorasCountAsc": "Moins"
},
"refresh": {
"title": "Actualiser la liste des recipes"
"title": "Actualiser la liste des recipes",
"quick": "Synchroniser les changements",
"quickTooltip": "Synchroniser les changements - actualisation rapide sans reconstruire le cache",
"full": "Reconstruire le cache",
"fullTooltip": "Reconstruire le cache - rescan complet de tous les fichiers de recipes"
},
"filteredByLora": "Filtré par LoRA",
"favorites": {
@@ -722,6 +847,64 @@
"failed": "Échec de la réparation de la recette : {message}",
"missingId": "Impossible de réparer la recette : ID de recette manquant"
}
},
"batchImport": {
"title": "Batch Import Recipes",
"action": "Batch Import",
"urlList": "URL List",
"directory": "Directory",
"urlDescription": "Enter image URLs or local file paths (one per line). Each will be imported as a recipe.",
"directoryDescription": "Enter a directory path to import all images from that folder.",
"urlsLabel": "Image URLs or Local Paths",
"urlsPlaceholder": "https://civitai.com/images/...\nhttps://civitai.com/images/...\nC:/path/to/image.png\n...",
"urlsHint": "Enter one URL or path per line",
"directoryPath": "Directory Path",
"directoryPlaceholder": "/path/to/images/folder",
"browse": "Browse",
"recursive": "Include subdirectories",
"tagsOptional": "Tags (optional, applied to all recipes)",
"tagsPlaceholder": "Enter tags separated by commas",
"tagsHint": "Tags will be added to all imported recipes",
"skipNoMetadata": "Skip images without metadata",
"skipNoMetadataHelp": "Images without LoRA metadata will be skipped automatically.",
"start": "Start Import",
"startImport": "Start Import",
"importing": "Importing...",
"progress": "Progress",
"total": "Total",
"success": "Success",
"failed": "Failed",
"skipped": "Skipped",
"current": "Current",
"currentItem": "Current",
"preparing": "Preparing...",
"cancel": "Cancel",
"cancelImport": "Cancel",
"cancelled": "Import cancelled",
"completed": "Import completed",
"completedWithErrors": "Completed with errors",
"completedSuccess": "Successfully imported {count} recipe(s)",
"successCount": "Successful",
"failedCount": "Failed",
"skippedCount": "Skipped",
"totalProcessed": "Total processed",
"viewDetails": "View Details",
"newImport": "New Import",
"manualPathEntry": "Please enter the directory path manually. File browser is not available in this browser.",
"batchImportDirectorySelected": "Directory selected: {path}",
"batchImportManualEntryRequired": "File browser not available. Please enter the directory path manually.",
"backToParent": "Back to parent directory",
"folders": "Folders",
"folderCount": "{count} folders",
"imageFiles": "Image Files",
"images": "images",
"imageCount": "{count} images",
"selectFolder": "Select This Folder",
"errors": {
"enterUrls": "Please enter at least one URL or path",
"enterDirectory": "Please enter a directory path",
"startFailed": "Failed to start import: {message}"
}
}
},
"checkpoints": {
@@ -731,7 +914,8 @@
"diffusion_model": "Diffusion Model"
},
"contextMenu": {
"moveToOtherTypeFolder": "Déplacer vers le dossier {otherType}"
"moveToOtherTypeFolder": "Déplacer vers le dossier {otherType}",
"sendToWorkflow": "Envoyer vers le workflow"
}
},
"embeddings": {
@@ -744,13 +928,23 @@
"unpinSidebar": "Désépingler la barre latérale",
"switchToListView": "Passer en vue liste",
"switchToTreeView": "Passer en vue arborescence",
"recursiveOn": "Rechercher dans les sous-dossiers",
"recursiveOff": "Rechercher uniquement dans le dossier actuel",
"recursiveOn": "Inclure les sous-dossiers",
"recursiveOff": "Dossier actuel uniquement",
"recursiveUnavailable": "La recherche récursive n'est disponible qu'en vue arborescente",
"collapseAllDisabled": "Non disponible en vue liste",
"dragDrop": {
"unableToResolveRoot": "Impossible de déterminer le chemin de destination pour le déplacement.",
"moveUnsupported": "Move is not supported for this item."
"moveUnsupported": "Le déplacement n'est pas pris en charge pour cet élément.",
"createFolderHint": "Relâcher pour créer un nouveau dossier",
"newFolderName": "Nom du nouveau dossier",
"folderNameHint": "Appuyez sur Entrée pour confirmer, Échap pour annuler",
"emptyFolderName": "Veuillez saisir un nom de dossier",
"invalidFolderName": "Le nom du dossier contient des caractères invalides",
"noDragState": "Aucune opération de glissement en attente trouvée"
},
"empty": {
"noFolders": "Aucun dossier trouvé",
"dragHint": "Faites glisser des éléments ici pour créer des dossiers"
}
},
"statistics": {
@@ -815,6 +1009,8 @@
"earlyAccess": "Accès anticipé",
"earlyAccessTooltip": "Accès anticipé requis",
"inLibrary": "Dans la bibliothèque",
"downloaded": "Téléchargé",
"downloadedTooltip": "Déjà téléchargé, mais il n'est actuellement pas dans votre bibliothèque.",
"alreadyInLibrary": "Déjà dans la bibliothèque",
"autoOrganizedPath": "[Auto-organisé par modèle de chemin]",
"errors": {
@@ -905,6 +1101,14 @@
"save": "Mettre à jour le modèle de base",
"cancel": "Annuler"
},
"bulkDownloadMissingLoras": {
"title": "Télécharger les LoRAs manquants",
"message": "{uniqueCount} LoRAs manquants uniques trouvés (sur un total de {totalCount} dans les recettes sélectionnées).",
"previewTitle": "LoRAs à télécharger :",
"moreItems": "...et {count} de plus",
"note": "Les fichiers seront téléchargés en utilisant les modèles de chemins par défaut. Cela peut prendre un certain temps selon le nombre de LoRAs.",
"downloadButton": "Télécharger {count} LoRA(s)"
},
"exampleAccess": {
"title": "Images d'exemple locales",
"message": "Aucune image d'exemple locale trouvée pour ce modèle. Options d'affichage :",
@@ -956,7 +1160,9 @@
"viewOnCivitai": "Voir sur Civitai",
"viewOnCivitaiText": "Voir sur Civitai",
"viewCreatorProfile": "Voir le profil du créateur",
"openFileLocation": "Ouvrir l'emplacement du fichier"
"openFileLocation": "Ouvrir l'emplacement du fichier",
"sendToWorkflow": "Envoyer vers ComfyUI",
"sendToWorkflowText": "Envoyer vers ComfyUI"
},
"openFileLocation": {
"success": "Emplacement du fichier ouvert avec succès",
@@ -964,6 +1170,9 @@
"copied": "Chemin copié dans le presse-papiers: {{path}}",
"clipboardFallback": "Chemin: {{path}}"
},
"sendToWorkflow": {
"noFilePath": "Impossible d'envoyer vers ComfyUI : aucun chemin de fichier disponible"
},
"metadata": {
"version": "Version",
"fileName": "Nom de fichier",
@@ -1000,6 +1209,8 @@
"cancel": "Annuler la modification",
"save": "Sauvegarder les modifications",
"addPlaceholder": "Tapez pour ajouter ou cliquez sur les suggestions ci-dessous",
"editWord": "Modifier le mot déclencheur",
"editPlaceholder": "Modifier le mot déclencheur",
"copyWord": "Copier le mot-clé",
"deleteWord": "Supprimer le mot-clé",
"suggestions": {
@@ -1071,17 +1282,33 @@
"days": "dans {count}j"
},
"badges": {
"current": "Version actuelle",
"current": "Version ouverte",
"currentTooltip": "C'est la version à partir de laquelle cette fenêtre a été ouverte",
"inLibrary": "Dans la bibliothèque",
"inLibraryTooltip": "Cette version existe dans votre bibliothèque locale",
"downloaded": "Téléchargé",
"downloadedTooltip": "Cette version a déjà été téléchargée, mais n'est pas actuellement dans votre bibliothèque",
"newer": "Version plus récente",
"newerTooltip": "Cette version est plus récente que votre dernière version locale",
"earlyAccess": "Accès anticipé",
"ignored": "Ignorée"
"earlyAccessTooltip": "Cette version nécessite actuellement l'accès anticipé Civitai",
"ignored": "Ignorée",
"ignoredTooltip": "Les notifications de mise à jour sont désactivées pour cette version",
"onSiteOnly": "Uniquement sur Site",
"onSiteOnlyTooltip": "Cette version n'est disponible que pour la génération sur le site Civitai"
},
"actions": {
"download": "Télécharger",
"downloadTooltip": "Télécharger cette version",
"downloadEarlyAccessTooltip": "Télécharger cette version en accès anticipé depuis Civitai",
"downloadNotAllowedTooltip": "Cette version n'est disponible que pour la génération sur le site Civitai",
"delete": "Supprimer",
"deleteTooltip": "Supprimer cette version locale",
"ignore": "Ignorer",
"unignore": "Ne plus ignorer",
"ignoreTooltip": "Ignorer les notifications de mise à jour pour cette version",
"unignoreTooltip": "Reprendre les notifications de mise à jour pour cette version",
"viewVersionOnCivitai": "Voir la version sur Civitai",
"earlyAccessTooltip": "Nécessite l'achat de l'accès anticipé",
"resumeModelUpdates": "Reprendre les mises à jour pour ce modèle",
"ignoreModelUpdates": "Ignorer les mises à jour pour ce modèle",
@@ -1221,7 +1448,9 @@
"recipeReplaced": "Recipe remplacée dans le workflow",
"recipeFailedToSend": "Échec de l'envoi de la recipe au workflow",
"noMatchingNodes": "Aucun nœud compatible disponible dans le workflow actuel",
"noTargetNodeSelected": "Aucun nœud cible sélectionné"
"noTargetNodeSelected": "Aucun nœud cible sélectionné",
"modelUpdated": "Modèle mis à jour dans le workflow",
"modelFailed": "Échec de la mise à jour du nœud modèle"
},
"nodeSelector": {
"recipe": "Recipe",
@@ -1235,6 +1464,10 @@
"opened": "Dossier d'images d'exemple ouvert",
"openingFolder": "Ouverture du dossier d'images d'exemple",
"failedToOpen": "Échec de l'ouverture du dossier d'images d'exemple",
"copiedPath": "Chemin copié dans le presse-papiers : {{path}}",
"clipboardFallback": "Chemin : {{path}}",
"copiedUri": "Lien copié dans le presse-papiers : {{uri}}",
"uriClipboardFallback": "Lien : {{uri}}",
"setupRequired": "Stockage d'images d'exemple",
"setupDescription": "Pour ajouter des images d'exemple personnalisées, vous devez d'abord définir un emplacement de téléchargement.",
"setupUsage": "Ce chemin est utilisé pour les images d'exemple téléchargées et personnalisées.",
@@ -1342,7 +1575,14 @@
"showWechatQR": "Afficher le QR Code WeChat",
"hideWechatQR": "Masquer le QR Code WeChat"
},
"footer": "Merci d'utiliser le Gestionnaire LoRA ! ❤️"
"footer": "Merci d'utiliser le Gestionnaire LoRA ! ❤️",
"supporters": {
"title": "Merci à tous les supporters",
"subtitle": "Merci aux {count} supporters qui ont rendu ce projet possible",
"specialThanks": "Remerciements spéciaux",
"allSupporters": "Tous les supporters",
"totalCount": "{count} supporters au total"
}
},
"toast": {
"general": {
@@ -1365,6 +1605,7 @@
"pleaseSelectVersion": "Veuillez sélectionner une version",
"versionExists": "Cette version existe déjà dans votre bibliothèque",
"downloadCompleted": "Téléchargement terminé avec succès",
"downloadSkippedByBaseModel": "Téléchargement ignoré, car le modèle de base {baseModel} est exclu",
"autoOrganizeSuccess": "Auto-organisation terminée avec succès pour {count} {type}",
"autoOrganizePartialSuccess": "Auto-organisation terminée avec {success} déplacés, {failures} échecs sur {total} modèles",
"autoOrganizeFailed": "Échec de l'auto-organisation : {error}",
@@ -1376,13 +1617,19 @@
"loadFailed": "Échec du chargement des {modelType}s : {message}",
"refreshComplete": "Actualisation terminée",
"refreshFailed": "Échec de l'actualisation des recipes : {message}",
"syncComplete": "Synchronisation terminée",
"syncFailed": "Échec de la synchronisation des recipes : {message}",
"updateFailed": "Échec de la mise à jour de la recipe : {error}",
"updateError": "Erreur lors de la mise à jour de la recipe : {message}",
"nameSaved": "Recipe \"{name}\" sauvegardée avec succès",
"nameUpdated": "Nom de la recipe mis à jour avec succès",
"tagsUpdated": "Tags de la recipe mis à jour avec succès",
"sourceUrlUpdated": "URL source mise à jour avec succès",
"promptUpdated": "Prompt mis à jour avec succès",
"negativePromptUpdated": "Prompt négatif mis à jour avec succès",
"promptEditorHint": "Appuyez sur Entrée pour sauvegarder, Maj+Entrée pour nouvelle ligne",
"noRecipeId": "Aucun ID de recipe disponible",
"sendToWorkflowFailed": "Échec de l'envoi de la recette vers le workflow : {message}",
"copyFailed": "Erreur lors de la copie de la syntaxe de la recipe : {message}",
"noMissingLoras": "Aucun LoRA manquant à télécharger",
"missingLorasInfoFailed": "Échec de l'obtention des informations pour les LoRAs manquants",
@@ -1410,9 +1657,20 @@
"processingError": "Erreur de traitement : {message}",
"folderBrowserError": "Erreur lors du chargement du navigateur de dossiers : {message}",
"recipeSaveFailed": "Échec de la sauvegarde de la recipe : {error}",
"recipeSaved": "Recipe saved successfully",
"importFailed": "Échec de l'importation : {message}",
"folderTreeFailed": "Échec du chargement de l'arborescence des dossiers",
"folderTreeError": "Erreur lors du chargement de l'arborescence des dossiers"
"folderTreeError": "Erreur lors du chargement de l'arborescence des dossiers",
"batchImportFailed": "Failed to start batch import: {message}",
"batchImportCancelling": "Cancelling batch import...",
"batchImportCancelFailed": "Failed to cancel batch import: {message}",
"batchImportNoUrls": "Please enter at least one URL or file path",
"batchImportNoDirectory": "Please enter a directory path",
"batchImportBrowseFailed": "Failed to browse directory: {message}",
"batchImportDirectorySelected": "Directory selected: {path}",
"noRecipesSelected": "Aucune recette sélectionnée",
"noMissingLorasInSelection": "Aucun LoRA manquant trouvé dans les recettes sélectionnées",
"noLoraRootConfigured": "Aucun répertoire racine LoRA configuré. Veuillez définir un répertoire racine LoRA par défaut dans les paramètres."
},
"models": {
"noModelsSelected": "Aucun modèle sélectionné",
@@ -1479,6 +1737,8 @@
"mappingSaveFailed": "Échec de la sauvegarde des mappages de modèle de base : {message}",
"downloadTemplatesUpdated": "Modèles de chemin de téléchargement mis à jour",
"downloadTemplatesFailed": "Échec de la sauvegarde des modèles de chemin de téléchargement : {message}",
"recipesPathUpdated": "Recipes storage path updated",
"recipesPathSaveFailed": "Failed to update recipes storage path: {message}",
"settingsUpdated": "Paramètres mis à jour : {setting}",
"compactModeToggled": "Mode compact {state}",
"settingSaveFailed": "Échec de la sauvegarde du paramètre : {message}",
@@ -1529,8 +1789,8 @@
},
"triggerWords": {
"loadFailed": "Impossible de charger les mots entraînés",
"tooLong": "Le mot-clé ne doit pas dépasser 100 mots",
"tooMany": "Maximum 30 mots-clés autorisés",
"tooLong": "Le mot-clé ne doit pas dépasser 500 mots",
"tooMany": "Maximum 100 mots-clés autorisés",
"alreadyExists": "Ce mot-clé existe déjà",
"updateSuccess": "Mots-clés mis à jour avec succès",
"updateFailed": "Échec de la mise à jour des mots-clés",
@@ -1591,6 +1851,8 @@
"deleteFailed": "Échec de la suppression de {type} : {message}",
"excludeSuccess": "{type} exclu avec succès",
"excludeFailed": "Échec de l'exclusion de {type} : {message}",
"restoreSuccess": "{type} restauré avec succès",
"restoreFailed": "Échec de la restauration de {type} : {message}",
"fileNameUpdated": "Nom de fichier mis à jour avec succès",
"fileRenameFailed": "Échec du renommage du fichier : {error}",
"previewUpdated": "Aperçu mis à jour avec succès",
@@ -1622,6 +1884,37 @@
"moveFailed": "Failed to move item: {message}"
}
},
"doctor": {
"kicker": "Diagnostics système",
"title": "Docteur",
"buttonTitle": "Lancer les diagnostics et les corrections courantes",
"loading": "Vérification de l'environnement...",
"footer": "Exportez un lot de diagnostic si le problème persiste après la réparation.",
"summary": {
"idle": "Lancez une vérification de l'état des paramètres, de l'intégrité du cache et de la cohérence de l'interface.",
"ok": "Aucun problème actif n'a été trouvé dans l'environnement actuel.",
"warning": "{count} problème(s) ont été trouvés. La plupart peuvent être corrigés directement depuis ce panneau.",
"error": "{count} problème(s) nécessitent une attention avant que l'application soit entièrement saine."
},
"status": {
"ok": "Sain",
"warning": "Nécessite une attention",
"error": "Action requise"
},
"actions": {
"runAgain": "Relancer",
"exportBundle": "Exporter le lot"
},
"toast": {
"loadFailed": "Échec du chargement des diagnostics : {message}",
"repairSuccess": "Reconstruction du cache terminée.",
"repairFailed": "Échec de la reconstruction du cache : {message}",
"exportSuccess": "Lot de diagnostics exporté.",
"exportFailed": "Échec de l'export du lot de diagnostics : {message}",
"conflictsResolved": "{count} conflit(s) de nom de fichier résolu(s).",
"conflictsResolveFailed": "Échec de la résolution des conflits de nom de fichier : {message}"
}
},
"banners": {
"versionMismatch": {
"title": "Mise à jour de l'application détectée",


@@ -1,17 +1,22 @@
{
"common": {
"cancel": "ביטול",
"confirm": "אישור",
"actions": {
"save": "שמור",
"save": "שמירה",
"cancel": "ביטול",
"delete": "מחק",
"move": עבר",
"refresh": "רענן",
"back": "חזור",
"confirm": "אישור",
"delete": "מחיקה",
"move": "העברה",
"refresh": ענון",
"back": "חזרה",
"next": "הבא",
"backToTop": "חזור למעלה",
"backToTop": "חזרה למעלה",
"settings": "הגדרות",
"help": "עזרה",
"add": "הוסף"
"add": "הוספה",
"close": "סגור",
"menu": "תפריט"
},
"status": {
"loading": "טוען...",
@@ -171,6 +176,9 @@
"success": "תוקנו בהצלחה {count} מתכונים.",
"cancelled": "תיקון בוטל. {count} מתכונים תוקנו.",
"error": "תיקון המתכונים נכשל: {message}"
},
"manageExcludedModels": {
"label": "ניהול מודלים מוחרגים"
}
},
"header": {
@@ -218,12 +226,14 @@
"presetOverwriteConfirm": "הפריסט \"{name}\" כבר קיים. לדרוס?",
"presetNamePlaceholder": "שם קביעה מראש...",
"baseModel": "מודל בסיס",
"baseModelSearchPlaceholder": "חפש מודלי בסיס...",
"modelTags": "תגיות (20 המובילות)",
"modelTypes": "Model Types",
"modelTypes": "סוגי מודלים",
"license": "רישיון",
"noCreditRequired": "ללא קרדיט נדרש",
"allowSellingGeneratedContent": "אפשר מכירה",
"noTags": "ללא תגיות",
"noBaseModelMatches": "אין מודלי בסיס התואמים לחיפוש הנוכחי.",
"clearAll": "נקה את כל המסננים",
"any": "כלשהו",
"all": "כל התגים",
@@ -246,6 +256,33 @@
"civitaiApiKey": "מפתח API של Civitai",
"civitaiApiKeyPlaceholder": "הזן את מפתח ה-API שלך מ-Civitai",
"civitaiApiKeyHelp": "משמש לאימות בעת הורדת מודלים מ-Civitai",
"civitaiHost": {
"label": "מארח Civitai",
"help": "בחר איזה אתר של Civitai ייפתח בעת שימוש בקישורי \"View on Civitai\".",
"options": {
"com": "civitai.com (SFW בלבד)",
"red": "civitai.red (ללא הגבלות)"
}
},
"downloadBackend": {
"label": "מנגנון הורדה",
"help": "בחר כיצד יורדים קבצי המודל. Python משתמש במוריד המובנה. aria2 משתמש בתהליך הורדה חיצוני ניסיוני.",
"options": {
"python": "Python (מובנה)",
"aria2": "aria2 (ניסיוני)"
}
},
"aria2cPath": {
"label": "נתיב aria2c",
"help": "נתיב אופציונלי לקובץ ההפעלה aria2c. השאר ריק כדי להשתמש ב-aria2c מתוך ה-PATH של המערכת.",
"placeholder": "השאר ריק כדי להשתמש ב-aria2c מתוך ה-PATH"
},
"aria2HelpLink": "למד כיצד להגדיר את מנוע ההורדה aria2",
"civitaiHostBanner": {
"title": "העדפת מארח Civitai זמינה",
"content": "Civitai משתמש כעת ב-civitai.com עבור תוכן SFW וב-civitai.red עבור תוכן ללא הגבלות. ניתן לשנות בהגדרות איזה אתר ייפתח כברירת מחדל.",
"openSettings": "פתח הגדרות"
},
"openSettingsFileLocation": {
"label": "פתח תיקיית הגדרות",
"tooltip": "פתח את התיקייה שמכילה את settings.json",
@@ -256,10 +293,13 @@
},
"sections": {
"contentFiltering": "סינון תוכן",
"downloads": "הורדות",
"videoSettings": "הגדרות וידאו",
"layoutSettings": "הגדרות פריסה",
"misc": "שונות",
"backup": "גיבויים",
"folderSettings": "תיקיות ברירת מחדל",
"recipeSettings": "מתכונים",
"extraFolderPaths": "נתיבי תיקיות נוספים",
"downloadPathTemplates": "תבניות נתיב הורדה",
"priorityTags": "תגיות עדיפות",
@@ -287,7 +327,15 @@
"blurNsfwContent": "טשטש תוכן NSFW",
"blurNsfwContentHelp": "טשטש תמונות תצוגה מקדימה של תוכן למבוגרים (NSFW)",
"showOnlySfw": "הצג רק תוצאות SFW",
"showOnlySfwHelp": "סנן את כל התוכן ה-NSFW בעת גלישה וחיפוש"
"showOnlySfwHelp": "סנן את כל התוכן ה-NSFW בעת גלישה וחיפוש",
"matureBlurThreshold": "סף טשטוש תוכן מבוגרים",
"matureBlurThresholdHelp": "הגדר מאיזו רמת דירוג מתחיל סינון הטשטוש כאשר טשטוש NSFW מופעל.",
"matureBlurThresholdOptions": {
"pg13": "PG13 ומעלה",
"r": "R ומעלה (ברירת מחדל)",
"x": "X ומעלה",
"xxx": "XXX בלבד"
}
},
"videoSettings": {
"autoplayOnHover": "נגן וידאו אוטומטית בריחוף",
@@ -311,6 +359,54 @@
"saveFailed": "לא ניתן לשמור נתיבי דילוג: {message}"
}
},
"backup": {
"autoEnabled": "גיבויים אוטומטיים",
"autoEnabledHelp": "יוצר צילום מצב מקומי פעם ביום ושומר את הצילומים האחרונים לפי מדיניות השמירה.",
"retention": "כמות שמירה",
"retentionHelp": "כמה צילומי מצב אוטומטיים לשמור לפני שמסירים ישנים.",
"management": "ניהול גיבויים",
"managementHelp": "ייצא את מצב המשתמש הנוכחי או שחזר אותו מארכיון גיבוי.",
"scopeHelp": "כולל את ההגדרות שלך, היסטוריית ההורדות ומצב עדכוני המודלים. אינו כולל קובצי מודל או מטמונים שניתן לשחזר.",
"locationSummary": "מיקום הגיבוי הנוכחי",
"openFolderButton": "פתח את תיקיית הגיבויים",
"openFolderSuccess": "תיקיית הגיבויים נפתחה",
"openFolderFailed": "לא ניתן היה לפתוח את תיקיית הגיבויים",
"locationCopied": "נתיב הגיבוי הועתק ללוח: {{path}}",
"locationClipboardFallback": "נתיב הגיבוי: {{path}}",
"exportButton": "ייצא גיבוי",
"exportSuccess": "הגיבוי יוצא בהצלחה.",
"exportFailed": "נכשל ייצוא הגיבוי: {message}",
"importButton": "ייבא גיבוי",
"importConfirm": "לייבא את הגיבוי הזה ולדרוס את מצב המשתמש המקומי?",
"importSuccess": "הגיבוי יובא בהצלחה.",
"importFailed": "נכשל ייבוא הגיבוי: {message}",
"latestSnapshot": "צילום המצב האחרון",
"latestAutoSnapshot": "צילום המצב האוטומטי האחרון",
"snapshotCount": "צילומי מצב שמורים",
"noneAvailable": "עדיין אין צילומי מצב"
},
"downloadSkipBaseModels": {
"label": "דלג על הורדות עבור מודלי בסיס",
"help": "חל על כל תהליכי ההורדה. ניתן לבחור כאן רק מודלי בסיס נתמכים.",
"searchPlaceholder": "סנן מודלי בסיס...",
"empty": "אין מודלי בסיס התואמים לחיפוש הנוכחי.",
"summary": {
"none": "לא נבחר דבר",
"count": "{count} נבחרו"
},
"actions": {
"edit": "עריכה",
"collapse": "כווץ",
"clear": "נקה"
},
"validation": {
"saveFailed": "לא ניתן לשמור את מודלי הבסיס המוחרגים: {message}"
}
},
"skipPreviouslyDownloadedModelVersions": {
"label": "דלג על גרסאות מודלים שהורדו בעבר",
"help": "כאשר מופעל, LoRA Manager ידלג על הורדת גרסת מודל אם שירות היסטוריית ההורדות רושם את הגרסה המדויקת הזו ככבר שהורדה. חל על כל תהליכי ההורדה."
},
"layoutSettings": {
"displayDensity": "צפיפות תצוגה",
"displayDensityOptions": {
@@ -333,6 +429,8 @@
"hover": "חשוף בריחוף"
},
"cardInfoDisplayHelp": "בחר מתי להציג מידע על המודל וכפתורי פעולה",
"showVersionOnCard": "הצג גרסה בכרטיס",
"showVersionOnCardHelp": "הצג או הסתר את שם הגרסה בכרטיסי המודל",
"modelCardFooterAction": "פעולת כפתור כרטיס מודל",
"modelCardFooterActionOptions": {
"exampleImages": "פתח תמונות דוגמה",
@@ -359,8 +457,29 @@
"defaultUnetRootHelp": "הגדר את ספריית השורש המוגדרת כברירת מחדל של Diffusion Model (UNET) להורדות, ייבוא והעברות",
"defaultEmbeddingRoot": "תיקיית שורש Embedding",
"defaultEmbeddingRootHelp": "הגדר את ספריית השורש המוגדרת כברירת מחדל של embedding להורדות, ייבוא והעברות",
"recipesPath": "נתיב אחסון מתכונים",
"recipesPathHelp": "ספרייה מותאמת אישית אופציונלית למתכונים שנשמרו. השאר ריק כדי להשתמש בתיקיית recipes של שורש LoRA הראשון.",
"recipesPathPlaceholder": "/path/to/recipes",
"recipesPathMigrating": "מעביר את אחסון המתכונים...",
"noDefault": "אין ברירת מחדל"
},
"extraFolderPaths": {
"title": "נתיבי תיקיות נוספים",
"description": "נתיבי שורש מודלים נוספים בלעדיים ל-LoRA Manager. טען מודלים ממיקומים מחוץ לתיקיות הסטנדרטיות של ComfyUI - אידיאלי לספריות גדולות שאחרת יאטו את ComfyUI.",
"restartRequired": "Requires restart to take effect",
"modelTypes": {
"lora": "נתיבי LoRA",
"checkpoint": "נתיבי Checkpoint",
"unet": "נתיבי מודל דיפוזיה",
"embedding": "נתיבי Embedding"
},
"pathPlaceholder": "/נתיב/למודלים/נוספים",
"saveSuccess": "נתיבי תיקיות נוספים עודכנו. נדרשת הפעלה מחדש כדי להחיל את השינויים.",
"saveError": "נכשל בעדכון נתיבי תיקיות נוספים: {message}",
"validation": {
"duplicatePath": "נתיב זה כבר מוגדר"
}
},
"priorityTags": {
"title": "תגיות עדיפות",
"description": "התאם את סדר העדיפות של התגיות עבור כל סוג מודל (לדוגמה: character, concept, style(toon|toon_style))",
@@ -423,6 +542,21 @@
"downloadLocationHelp": "הזן את נתיב התיקייה שבו יישמרו תמונות דוגמה מ-Civitai",
"autoDownload": "הורדה אוטומטית של תמונות דוגמה",
"autoDownloadHelp": "הורד אוטומטית תמונות דוגמה למודלים שאין להם (דורש הגדרת מיקום הורדה)",
"openMode": "פעולת פתיחת תמונות דוגמה",
"openModeHelp": "בחר אם הפעולה תיפתח בשרת, תעתיק נתיב מקומי ממופה או תפעיל URI מותאם אישית.",
"openModeOptions": {
"system": "פתח בשרת",
"clipboard": "העתק נתיב מקומי",
"uriTemplate": "פתח URI מותאם אישית"
},
"localRoot": "שורש מקומי לתמונות דוגמה",
"localRootHelp": "שורש מקומי או ממופה אופציונלי שמשקף את תיקיית תמונות הדוגמה בשרת. אם השדה ריק, ייעשה שימוש חוזר בנתיב השרת.",
"localRootPlaceholder": "דוגמה: /Volumes/ComfyUI/example_images",
"uriTemplate": "תבנית URI לפתיחה",
"uriTemplateHelp": "השתמש בקישור עומק מותאם אישית כמו URI של קובץ או קישור Shortcuts.",
"uriTemplatePlaceholder": "דוגמה: shortcuts://run-shortcut?name=Open%20Finder&input=text&text={{encoded_local_path}}",
"uriTemplatePlaceholders": "מצייני מקום זמינים: {{local_path}}, {{encoded_local_path}}, {{relative_path}}, {{encoded_relative_path}}, {{file_uri}}, {{encoded_file_uri}}",
"openModeWikiLink": "למידע נוסף על מצבי פתיחה מרחוק",
"optimizeImages": "מטב תמונות שהורדו",
"optimizeImagesHelp": "מטב תמונות דוגמה כדי להקטין את גודל הקובץ ולשפר את מהירות הטעינה (מטא-דאטה תישמר)",
"download": "הורד",
@@ -485,23 +619,6 @@
"proxyPassword": "סיסמה (אופציונלי)",
"proxyPasswordPlaceholder": "password",
"proxyPasswordHelp": "סיסמה לאימות מול הפרוקסי (אם נדרש)"
},
"extraFolderPaths": {
"title": "נתיבי תיקיות נוספים",
"help": "הוסף תיקיות מודלים נוספות מחוץ לנתיבים הסטנדרטיים של ComfyUI. נתיבים אלה נשמרים בנפרד ונסרקים לצד תיקיות ברירת המחדל.",
"description": "הגדר תיקיות נוספות לסריקת מודלים. נתיבים אלה ספציפיים ל-LoRA Manager וימוזגו עם נתיבי ברירת המחדל של ComfyUI.",
"modelTypes": {
"lora": "נתיבי LoRA",
"checkpoint": "נתיבי Checkpoint",
"unet": "נתיבי מודל דיפוזיה",
"embedding": "נתיבי Embedding"
},
"pathPlaceholder": "/נתיב/למודלים/נוספים",
"saveSuccess": "נתיבי תיקיות נוספים עודכנו.",
"saveError": "נכשל בעדכון נתיבי תיקיות נוספים: {message}",
"validation": {
"duplicatePath": "נתיב זה כבר מוגדר"
}
}
},
"loras": {
@@ -570,7 +687,8 @@
"autoOrganize": "ארגן אוטומטית נבחרים",
"skipMetadataRefresh": "דילוג על רענון מטא-נתונים לנבחרים",
"resumeMetadataRefresh": "המשך רענון מטא-נתונים לנבחרים",
"deleteAll": "מחק את כל המודלים",
"deleteAll": "מחק נבחרים",
"downloadMissingLoras": "הורדת LoRAs חסרים",
"clear": "נקה בחירה",
"skipMetadataRefreshCount": "דילוג({count} מודלים)",
"resumeMetadataRefreshCount": "המשך({count} מודלים)",
@@ -600,6 +718,7 @@
"moveToFolder": "העבר לתיקייה",
"repairMetadata": "תיקון מטא-דאטה",
"excludeModel": "החרג מודל",
"restoreModel": "שחזור מודל",
"deleteModel": "מחק מודל",
"shareRecipe": "שתף מתכון",
"viewAllLoras": "הצג את כל ה-LoRAs",
@@ -641,6 +760,8 @@
"root": "שורש",
"browseFolders": "דפדף בתיקיות:",
"downloadAndSaveRecipe": "הורד ושמור מתכון",
"importRecipeOnly": "יבא רק מתכון",
"importAndDownload": "יבא והורד",
"downloadMissingLoras": "הורד LoRAs חסרים",
"saveRecipe": "שמור מתכון",
"loraCountInfo": "({existing}/{total} בספרייה)",
@@ -682,7 +803,11 @@
"lorasCountAsc": "הכי פחות"
},
"refresh": {
"title": "רענן רשימת מתכונים"
"title": "רענן רשימת מתכונים",
"quick": "סנכרן שינויים",
"quickTooltip": "סנכרן שינויים - רענון מהיר ללא בניית מטמון מחדש",
"full": "בנה מטמון מחדש",
"fullTooltip": "בנה מטמון מחדש - סריקה מחדש מלאה של כל קבצי המתכונים"
},
"filteredByLora": "מסונן לפי LoRA",
"favorites": {
@@ -722,6 +847,64 @@
"failed": "תיקון המתכון נכשל: {message}",
"missingId": "לא ניתן לתקן את המתכון: חסר מזהה מתכון"
}
},
"batchImport": {
"title": "Batch Import Recipes",
"action": "Batch Import",
"urlList": "URL List",
"directory": "Directory",
"urlDescription": "Enter image URLs or local file paths (one per line). Each will be imported as a recipe.",
"directoryDescription": "Enter a directory path to import all images from that folder.",
"urlsLabel": "Image URLs or Local Paths",
"urlsPlaceholder": "https://civitai.com/images/...\nhttps://civitai.com/images/...\nC:/path/to/image.png\n...",
"urlsHint": "Enter one URL or path per line",
"directoryPath": "Directory Path",
"directoryPlaceholder": "/path/to/images/folder",
"browse": "Browse",
"recursive": "Include subdirectories",
"tagsOptional": "Tags (optional, applied to all recipes)",
"tagsPlaceholder": "Enter tags separated by commas",
"tagsHint": "Tags will be added to all imported recipes",
"skipNoMetadata": "Skip images without metadata",
"skipNoMetadataHelp": "Images without LoRA metadata will be skipped automatically.",
"start": "Start Import",
"startImport": "Start Import",
"importing": "Importing...",
"progress": "Progress",
"total": "Total",
"success": "Success",
"failed": "Failed",
"skipped": "Skipped",
"current": "Current",
"currentItem": "Current",
"preparing": "Preparing...",
"cancel": "Cancel",
"cancelImport": "Cancel",
"cancelled": "Import cancelled",
"completed": "Import completed",
"completedWithErrors": "Completed with errors",
"completedSuccess": "Successfully imported {count} recipe(s)",
"successCount": "Successful",
"failedCount": "Failed",
"skippedCount": "Skipped",
"totalProcessed": "Total processed",
"viewDetails": "View Details",
"newImport": "New Import",
"manualPathEntry": "Please enter the directory path manually. File browser is not available in this browser.",
"batchImportDirectorySelected": "Directory selected: {path}",
"batchImportManualEntryRequired": "File browser not available. Please enter the directory path manually.",
"backToParent": "Back to parent directory",
"folders": "Folders",
"folderCount": "{count} folders",
"imageFiles": "Image Files",
"images": "images",
"imageCount": "{count} images",
"selectFolder": "Select This Folder",
"errors": {
"enterUrls": "Please enter at least one URL or path",
"enterDirectory": "Please enter a directory path",
"startFailed": "Failed to start import: {message}"
}
}
},
"checkpoints": {
@@ -731,7 +914,8 @@
"diffusion_model": "Diffusion Model"
},
"contextMenu": {
"moveToOtherTypeFolder": "העבר לתיקיית {otherType}"
"moveToOtherTypeFolder": "העבר לתיקיית {otherType}",
"sendToWorkflow": "שלח ל-workflow"
}
},
"embeddings": {
@@ -744,13 +928,23 @@
"unpinSidebar": "שחרר סרגל צד",
"switchToListView": "עבור לתצוגת רשימה",
"switchToTreeView": "תצוגת עץ",
"recursiveOn": "חיפוש בתיקיות משנה",
"recursiveOff": "חיפוש רק בתיקייה הנוכחית",
"recursiveOn": "כלול תיקיות משנה",
"recursiveOff": "רק התיקייה הנוכחית",
"recursiveUnavailable": "חיפוש רקורסיבי זמין רק בתצוגת עץ",
"collapseAllDisabled": "לא זמין בתצוגת רשימה",
"dragDrop": {
"unableToResolveRoot": "לא ניתן לקבוע את נתיב היעד להעברה.",
"moveUnsupported": "Move is not supported for this item."
"moveUnsupported": "העברה אינה נתמכת עבור פריט זה.",
"createFolderHint": "שחרר כדי ליצור תיקייה חדשה",
"newFolderName": "שם תיקייה חדשה",
"folderNameHint": "הקש Enter לאישור, Escape לביטול",
"emptyFolderName": "אנא הזן שם תיקייה",
"invalidFolderName": "שם התיקייה מכיל תווים לא חוקיים",
"noDragState": "לא נמצאה פעולת גרירה ממתינה"
},
"empty": {
"noFolders": "לא נמצאו תיקיות",
"dragHint": "גרור פריטים לכאן כדי ליצור תיקיות"
}
},
"statistics": {
@@ -815,6 +1009,8 @@
"earlyAccess": "גישה מוקדמת",
"earlyAccessTooltip": "נדרשת גישה מוקדמת",
"inLibrary": "בספרייה",
"downloaded": "הורד",
"downloadedTooltip": "הורד בעבר, אך הוא אינו נמצא כרגע בספרייה שלך.",
"alreadyInLibrary": "כבר בספרייה",
"autoOrganizedPath": "[מאורגן אוטומטית לפי תבנית נתיב]",
"errors": {
@@ -905,6 +1101,14 @@
"save": "עדכן מודל בסיס",
"cancel": "ביטול"
},
"bulkDownloadMissingLoras": {
"title": "הורדת LoRAs חסרים",
"message": "נמצאו {uniqueCount} LoRAs חסרים ייחודיים (מתוך {totalCount} בסך הכל במתכונים שנבחרו).",
"previewTitle": "LoRAs להורדה:",
"moreItems": "...ועוד {count}",
"note": "הקבצים יורדו באמצעות תבניות נתיב ברירת מחדל. זה עשוי לקחת זמן בהתאם למספר ה-LoRAs.",
"downloadButton": "הורד {count} LoRA(s)"
},
"exampleAccess": {
"title": "תמונות דוגמה מקומיות",
"message": "לא נמצאו תמונות דוגמה מקומיות למודל זה. אפשרויות צפייה:",
@@ -956,7 +1160,9 @@
"viewOnCivitai": "הצג ב-Civitai",
"viewOnCivitaiText": "הצג ב-Civitai",
"viewCreatorProfile": "הצג פרופיל יוצר",
"openFileLocation": "פתח מיקום קובץ"
"openFileLocation": "פתח מיקום קובץ",
"sendToWorkflow": "שלח ל-ComfyUI",
"sendToWorkflowText": "שלח ל-ComfyUI"
},
"openFileLocation": {
"success": "מיקום הקובץ נפתח בהצלחה",
@@ -964,6 +1170,9 @@
"copied": "הנתיב הועתק ללוח העריכה: {{path}}",
"clipboardFallback": "נתיב: {{path}}"
},
"sendToWorkflow": {
"noFilePath": "לא ניתן לשלוח ל-ComfyUI: אין נתיב קובץ זמין"
},
"metadata": {
"version": "גרסה",
"fileName": "שם קובץ",
@@ -1000,6 +1209,8 @@
"cancel": "בטל עריכה",
"save": "שמור שינויים",
"addPlaceholder": "הקלד להוספה או לחץ על הצעות למטה",
"editWord": "עריכת מילת טריגר",
"editPlaceholder": "עריכת מילת טריגר",
"copyWord": "העתק מילת טריגר",
"deleteWord": "מחק מילת טריגר",
"suggestions": {
@@ -1071,17 +1282,33 @@
"days": "בעוד {count} ימים"
},
"badges": {
"current": "גרסה נוכחית",
"current": "גרסה שנפתחה",
"currentTooltip": "זוהי הגרסה שממנה נפתח החלון הזה",
"inLibrary": "בספרייה",
"inLibraryTooltip": "גרסה זו קיימת בספרייה המקומית שלך",
"downloaded": "הורד",
"downloadedTooltip": "גרסה זו הורדה בעבר, אך אינה נמצאת כרגע בספרייה שלך",
"newer": "גרסה חדשה יותר",
"newerTooltip": "גרסה זו חדשה יותר מהגרסה המקומית האחרונה שלך",
"earlyAccess": "גישה מוקדמת",
"ignored": "התעלם"
"earlyAccessTooltip": "גרסה זו דורשת כרגע גישת Early Access של Civitai",
"ignored": "התעלם",
"ignoredTooltip": "התראות העדכון מושבתות עבור גרסה זו",
"onSiteOnly": "רק באתר",
"onSiteOnlyTooltip": "גרסה זו זמינה רק ליצירה באתר Civitai"
},
"actions": {
"download": "הורדה",
"downloadTooltip": "הורד את הגרסה הזו",
"downloadEarlyAccessTooltip": "הורד את גרסת ה-Early Access הזו מ-Civitai",
"downloadNotAllowedTooltip": "גרסה זו זמינה רק ליצירה באתר Civitai",
"delete": "מחיקה",
"deleteTooltip": "מחק את הגרסה המקומית הזו",
"ignore": "התעלם",
"unignore": "בטל התעלמות",
"ignoreTooltip": "התעלם מהתראות העדכון עבור גרסה זו",
"unignoreTooltip": "חזור לקבל התראות עדכון עבור גרסה זו",
"viewVersionOnCivitai": "הצג את הגרסה ב-Civitai",
"earlyAccessTooltip": "נדרש רכישת גישה מוקדמת",
"resumeModelUpdates": "המשך עדכונים עבור מודל זה",
"ignoreModelUpdates": "התעלם מעדכונים עבור מודל זה",
@@ -1221,7 +1448,9 @@
"recipeReplaced": "מתכון הוחלף ב-workflow",
"recipeFailedToSend": "שליחת מתכון ל-workflow נכשלה",
"noMatchingNodes": "אין צמתים תואמים זמינים ב-workflow הנוכחי",
"noTargetNodeSelected": "לא נבחר צומת יעד"
"noTargetNodeSelected": "לא נבחר צומת יעד",
"modelUpdated": "מודל עודכן ב-workflow",
"modelFailed": "עדכון צומת המודל נכשל"
},
"nodeSelector": {
"recipe": "מתכון",
@@ -1235,6 +1464,10 @@
"opened": "תיקיית תמונות הדוגמה נפתחה",
"openingFolder": "פותח תיקיית תמונות דוגמה",
"failedToOpen": "פתיחת תיקיית תמונות הדוגמה נכשלה",
"copiedPath": "הנתיב הועתק ללוח: {{path}}",
"clipboardFallback": "נתיב: {{path}}",
"copiedUri": "הקישור הועתק ללוח: {{uri}}",
"uriClipboardFallback": "קישור: {{uri}}",
"setupRequired": "אחסון תמונות דוגמה",
"setupDescription": "כדי להוסיף תמונות דוגמה מותאמות אישית, עליך קודם להגדיר מיקום הורדה.",
"setupUsage": "נתיב זה משמש הן עבור תמונות דוגמה שהורדו והן עבור תמונות מותאמות אישית.",
@@ -1342,7 +1575,14 @@
"showWechatQR": "הצג קוד QR של WeChat",
"hideWechatQR": "הסתר קוד QR של WeChat"
},
"footer": "תודה על השימוש במנהל LoRA! ❤️"
"footer": "תודה על השימוש במנהל LoRA! ❤️",
"supporters": {
"title": "תודה לכל התומכים",
"subtitle": "תודה ל־{count} תומכים שהפכו את הפרויקט הזה לאפשרי",
"specialThanks": "תודה מיוחדת",
"allSupporters": "כל התומכים",
"totalCount": "{count} תומכים בסך הכל"
}
},
"toast": {
"general": {
@@ -1365,6 +1605,7 @@
"pleaseSelectVersion": "אנא בחר גרסה",
"versionExists": "גרסה זו כבר קיימת בספרייה שלך",
"downloadCompleted": "ההורדה הושלמה בהצלחה",
"downloadSkippedByBaseModel": "ההורדה דולגה כי מודל הבסיס {baseModel} מוחרג",
"autoOrganizeSuccess": "הארגון האוטומטי הושלם בהצלחה עבור {count} {type}",
"autoOrganizePartialSuccess": "הארגון האוטומטי הושלם עם {success} שהועברו, {failures} שנכשלו מתוך {total} מודלים",
"autoOrganizeFailed": "הארגון האוטומטי נכשל: {error}",
@@ -1376,13 +1617,19 @@
"loadFailed": "טעינת {modelType}s נכשלה: {message}",
"refreshComplete": "הרענון הושלם",
"refreshFailed": "רענון המתכונים נכשל: {message}",
"syncComplete": "הסנכרון הושלם",
"syncFailed": "סנכרון המתכונים נכשל: {message}",
"updateFailed": "עדכון המתכון נכשל: {error}",
"updateError": "שגיאה בעדכון המתכון: {message}",
"nameSaved": "המתכון \"{name}\" נשמר בהצלחה",
"nameUpdated": "שם המתכון עודכן בהצלחה",
"tagsUpdated": "תגיות המתכון עודכנו בהצלחה",
"sourceUrlUpdated": "כתובת ה-URL המקורית עודכנה בהצלחה",
"promptUpdated": "הפרומפט עודכן בהצלחה",
"negativePromptUpdated": "הפרומפט השלילי עודכן בהצלחה",
"promptEditorHint": "לחץ Enter לשמירה, Shift+Enter לשורה חדשה",
"noRecipeId": "אין מזהה מתכון זמין",
"sendToWorkflowFailed": "נכשל שליחת המתכון ל-workflow: {message}",
"copyFailed": "שגיאה בהעתקת תחביר המתכון: {message}",
"noMissingLoras": "אין LoRAs חסרים להורדה",
"missingLorasInfoFailed": "קבלת מידע עבור LoRAs חסרים נכשלה",
@@ -1410,9 +1657,20 @@
"processingError": "שגיאת עיבוד: {message}",
"folderBrowserError": "שגיאה בטעינת דפדפן התיקיות: {message}",
"recipeSaveFailed": "שמירת המתכון נכשלה: {error}",
"recipeSaved": "Recipe saved successfully",
"importFailed": "הייבוא נכשל: {message}",
"folderTreeFailed": "טעינת עץ התיקיות נכשלה",
"folderTreeError": "שגיאה בטעינת עץ התיקיות"
"folderTreeError": "שגיאה בטעינת עץ התיקיות",
"batchImportFailed": "Failed to start batch import: {message}",
"batchImportCancelling": "Cancelling batch import...",
"batchImportCancelFailed": "Failed to cancel batch import: {message}",
"batchImportNoUrls": "Please enter at least one URL or file path",
"batchImportNoDirectory": "Please enter a directory path",
"batchImportBrowseFailed": "Failed to browse directory: {message}",
"batchImportDirectorySelected": "Directory selected: {path}",
"noRecipesSelected": "לא נבחרו מתכונים",
"noMissingLorasInSelection": "לא נמצאו LoRAs חסרים במתכונים שנבחרו",
"noLoraRootConfigured": "תיקיית השורש של LoRA לא מוגדרת. אנא הגדר תיקיית שורש LoRA ברירת מחדל בהגדרות."
},
"models": {
"noModelsSelected": "לא נבחרו מודלים",
@@ -1479,6 +1737,8 @@
"mappingSaveFailed": "שמירת מיפויי מודל בסיס נכשלה: {message}",
"downloadTemplatesUpdated": "תבניות נתיב הורדה עודכנו",
"downloadTemplatesFailed": "שמירת תבניות נתיב הורדה נכשלה: {message}",
"recipesPathUpdated": "נתיב אחסון המתכונים עודכן",
"recipesPathSaveFailed": "עדכון נתיב אחסון המתכונים נכשל: {message}",
"settingsUpdated": "הגדרות עודכנו: {setting}",
"compactModeToggled": "מצב קומפקטי {state}",
"settingSaveFailed": "שמירת ההגדרה נכשלה: {message}",
@@ -1529,8 +1789,8 @@
},
"triggerWords": {
"loadFailed": "לא ניתן היה לטעון מילים מאומנות",
"tooLong": "מילת טריגר לא תעלה על 100 מילים",
"tooMany": "מותרות עד 30 מילות טריגר",
"tooLong": "מילת טריגר לא תעלה על 500 מילים",
"tooMany": "מותרות עד 100 מילות טריגר",
"alreadyExists": "מילת טריגר זו כבר קיימת",
"updateSuccess": "מילות הטריגר עודכנו בהצלחה",
"updateFailed": "עדכון מילות הטריגר נכשל",
@@ -1591,6 +1851,8 @@
"deleteFailed": "מחיקת {type} נכשלה: {message}",
"excludeSuccess": "{type} הוחרג בהצלחה",
"excludeFailed": "החרגת {type} נכשלה: {message}",
"restoreSuccess": "{type} שוחזר בהצלחה",
"restoreFailed": "שחזור {type} נכשל: {message}",
"fileNameUpdated": "שם הקובץ עודכן בהצלחה",
"fileRenameFailed": "שינוי שם הקובץ נכשל: {error}",
"previewUpdated": "התצוגה המקדימה עודכנה בהצלחה",
@@ -1622,6 +1884,37 @@
"moveFailed": "Failed to move item: {message}"
}
},
"doctor": {
"kicker": "אבחון מערכת",
"title": "דוקטור",
"buttonTitle": "הפעלת אבחון ותיקונים נפוצים",
"loading": "בודק את הסביבה...",
"footer": "ייצא חבילת אבחון אם הבעיה עדיין נמשכת לאחר התיקון.",
"summary": {
"idle": "הרץ בדיקת תקינות עבור הגדרות, שלמות המטמון ועקביות הממשק.",
"ok": "לא נמצאו בעיות פעילות בסביבה הנוכחית.",
"warning": "נמצאה/נמצאו {count} בעיה/בעיות. את רובן אפשר לתקן ישירות מלוח זה.",
"error": "יש לטפל ב-{count} בעיה/בעיות לפני שהאפליקציה תהיה תקינה לחלוטין."
},
"status": {
"ok": "תקין",
"warning": "דורש תשומת לב",
"error": "נדרשת פעולה"
},
"actions": {
"runAgain": "הפעל שוב",
"exportBundle": "ייצוא חבילה"
},
"toast": {
"loadFailed": "טעינת האבחון נכשלה: {message}",
"repairSuccess": "בניית המטמון מחדש הושלמה.",
"repairFailed": "בניית המטמון מחדש נכשלה: {message}",
"exportSuccess": "חבילת האבחון יוצאה.",
"exportFailed": "ייצוא חבילת האבחון נכשל: {message}",
"conflictsResolved": "נפתרו {count} התנגשויות בשמות קבצים.",
"conflictsResolveFailed": "פתרון התנגשויות שמות קבצים נכשל: {message}"
}
},
"banners": {
"versionMismatch": {
"title": "זוהה עדכון יישום",


@@ -1,17 +1,22 @@
{
"common": {
"cancel": "キャンセル",
"confirm": "確認",
"actions": {
"save": "保存",
"cancel": "キャンセル",
"confirm": "確認",
"delete": "削除",
"move": "移動",
"refresh": "更新",
"back": "戻る",
"next": "次へ",
"backToTop": "トップ戻る",
"backToTop": "トップ戻る",
"settings": "設定",
"help": "ヘルプ",
"add": "追加"
"add": "追加",
"close": "閉じる",
"menu": "メニュー"
},
"status": {
"loading": "読み込み中...",
@@ -171,6 +176,9 @@
"success": "{count} 件のレシピを正常に修復しました。",
"cancelled": "修復がキャンセルされました。{count}個のレシピが修復されました。",
"error": "レシピの修復に失敗しました: {message}"
},
"manageExcludedModels": {
"label": "除外モデルを管理"
}
},
"header": {
@@ -218,12 +226,14 @@
"presetOverwriteConfirm": "プリセット「{name}」は既に存在します。上書きしますか?",
"presetNamePlaceholder": "プリセット名...",
"baseModel": "ベースモデル",
"baseModelSearchPlaceholder": "ベースモデルを検索...",
"modelTags": "タグ上位20",
"modelTypes": "Model Types",
"modelTypes": "モデルタイプ",
"license": "ライセンス",
"noCreditRequired": "クレジット不要",
"allowSellingGeneratedContent": "販売許可",
"noTags": "タグなし",
"noBaseModelMatches": "現在の検索に一致するベースモデルはありません。",
"clearAll": "すべてのフィルタをクリア",
"any": "いずれか",
"all": "すべて",
@@ -246,6 +256,33 @@
"civitaiApiKey": "Civitai APIキー",
"civitaiApiKeyPlaceholder": "Civitai APIキーを入力してください",
"civitaiApiKeyHelp": "Civitaiからモデルをダウンロードするときの認証に使用されます",
"civitaiHost": {
"label": "Civitai ホスト",
"help": "「View on Civitai」リンクを使うときに開く Civitai サイトを選択します。",
"options": {
"com": "civitai.comSFW のみ)",
"red": "civitai.red制限なし"
}
},
"downloadBackend": {
"label": "ダウンロードバックエンド",
"help": "モデルファイルのダウンロード方法を選択します。Python は内蔵ダウンローダーを使用し、aria2 は実験的な外部ダウンローダープロセスを使用します。",
"options": {
"python": "Python内蔵",
"aria2": "aria2実験的"
}
},
"aria2cPath": {
"label": "aria2c のパス",
"help": "aria2c 実行ファイルへの任意のパスです。空欄のままにすると、システム PATH 上の aria2c を使用します。",
"placeholder": "空欄のままにすると PATH 上の aria2c を使用します"
},
"aria2HelpLink": "aria2 ダウンロードバックエンドの設定方法",
"civitaiHostBanner": {
"title": "Civitai ホスト設定を利用できます",
"content": "Civitai は現在、SFW コンテンツには civitai.com、制限なしコンテンツには civitai.red を使用しています。設定で既定で開くサイトを変更できます。",
"openSettings": "設定を開く"
},
"openSettingsFileLocation": {
"label": "設定フォルダーを開く",
"tooltip": "settings.json を含むフォルダーを開きます",
@@ -256,10 +293,13 @@
},
"sections": {
"contentFiltering": "コンテンツフィルタリング",
"downloads": "ダウンロード",
"videoSettings": "動画設定",
"layoutSettings": "レイアウト設定",
"misc": "その他",
"backup": "バックアップ",
"folderSettings": "デフォルトルート",
"recipeSettings": "レシピ",
"extraFolderPaths": "追加フォルダーパス",
"downloadPathTemplates": "ダウンロードパステンプレート",
"priorityTags": "優先タグ",
@@ -287,7 +327,15 @@
"blurNsfwContent": "NSFWコンテンツをぼかす",
"blurNsfwContentHelp": "成人向けNSFWコンテンツのプレビュー画像をぼかします",
"showOnlySfw": "SFWコンテンツのみ表示",
"showOnlySfwHelp": "閲覧と検索時にすべてのNSFWコンテンツを除外します"
"showOnlySfwHelp": "閲覧と検索時にすべてのNSFWコンテンツを除外します",
"matureBlurThreshold": "成人コンテンツぼかし閾値",
"matureBlurThresholdHelp": "NSFWぼかしが有効な場合、どのレーティングレベルからぼかしフィルタリングを開始するかを設定します。",
"matureBlurThresholdOptions": {
"pg13": "PG13 以上",
"r": "R 以上(デフォルト)",
"x": "X 以上",
"xxx": "XXX のみ"
}
},
"videoSettings": {
"autoplayOnHover": "ホバー時に動画を自動再生",
@@ -311,6 +359,54 @@
"saveFailed": "スキップパスの保存に失敗しました:{message}"
}
},
"backup": {
"autoEnabled": "自動バックアップ",
"autoEnabledHelp": "1日1回ローカルのスナップショットを作成し、保持ポリシーに従って最新のものを残します。",
"retention": "保持数",
"retentionHelp": "古いものを削除する前に、何件の自動スナップショットを保持するかを指定します。",
"management": "バックアップ管理",
"managementHelp": "現在のユーザー状態をエクスポートするか、バックアップアーカイブから復元します。",
"scopeHelp": "設定、ダウンロード履歴、モデル更新の状態をバックアップします。モデルファイルや再生成できるキャッシュは含まれません。",
"locationSummary": "現在のバックアップ場所",
"openFolderButton": "バックアップフォルダを開く",
"openFolderSuccess": "バックアップフォルダを開きました",
"openFolderFailed": "バックアップフォルダを開けませんでした",
"locationCopied": "バックアップパスをクリップボードにコピーしました: {{path}}",
"locationClipboardFallback": "バックアップパス: {{path}}",
"exportButton": "バックアップをエクスポート",
"exportSuccess": "バックアップを正常にエクスポートしました。",
"exportFailed": "バックアップのエクスポートに失敗しました: {message}",
"importButton": "バックアップをインポート",
"importConfirm": "このバックアップをインポートして、ローカルのユーザー状態を上書きしますか?",
"importSuccess": "バックアップを正常にインポートしました。",
"importFailed": "バックアップのインポートに失敗しました: {message}",
"latestSnapshot": "最新のスナップショット",
"latestAutoSnapshot": "最新の自動スナップショット",
"snapshotCount": "保存済みスナップショット",
"noneAvailable": "まだスナップショットはありません"
},
"downloadSkipBaseModels": {
"label": "ベースモデルのダウンロードをスキップ",
"help": "すべてのダウンロードフローに適用されます。ここでは対応しているベースモデルのみ選択できます。",
"searchPlaceholder": "ベースモデルを絞り込む...",
"empty": "現在の検索に一致するベースモデルはありません。",
"summary": {
"none": "未選択",
"count": "{count} 件を選択"
},
"actions": {
"edit": "編集",
"collapse": "折りたたむ",
"clear": "クリア"
},
"validation": {
"saveFailed": "除外するベースモデルを保存できませんでした: {message}"
}
},
"skipPreviouslyDownloadedModelVersions": {
"label": "以前にダウンロードしたモデルバージョンをスキップ",
"help": "有効にすると、ダウンロード履歴サービスがそのバージョンが既にダウンロード済みと記録している場合、LoRA Managerはそのモデルバージョンのダウンロードをスキップします。すべてのダウンロードフローに適用されます。"
},
"layoutSettings": {
"displayDensity": "表示密度",
"displayDensityOptions": {
@@ -333,6 +429,8 @@
"hover": "ホバー時に表示"
},
"cardInfoDisplayHelp": "モデル情報とアクションボタンの表示タイミングを選択",
"showVersionOnCard": "カードにバージョンを表示",
"showVersionOnCardHelp": "モデルカード上のバージョン名の表示/非表示を切り替えます",
"modelCardFooterAction": "モデルカードボタンのアクション",
"modelCardFooterActionOptions": {
"exampleImages": "例画像を開く",
@@ -359,8 +457,29 @@
"defaultUnetRootHelp": "ダウンロード、インポート、移動用のデフォルトDiffusion Model (UNET)ルートディレクトリを設定",
"defaultEmbeddingRoot": "Embeddingルート",
"defaultEmbeddingRootHelp": "ダウンロード、インポート、移動用のデフォルトembeddingルートディレクトリを設定",
"recipesPath": "レシピ保存先",
"recipesPathHelp": "保存済みレシピ用の任意のカスタムディレクトリです。空欄にすると最初のLoRAルートのrecipesフォルダーを使用します。",
"recipesPathPlaceholder": "/path/to/recipes",
"recipesPathMigrating": "レシピ保存先を移動中...",
"noDefault": "デフォルトなし"
},
"extraFolderPaths": {
"title": "追加フォルダーパス",
"description": "LoRA Manager専用の追加モデルルートパス。ComfyUIの標準フォルダー外の場所からモデルを読み込みます。ComfyUIの動作を低下させる可能性のある大規模ライブラリに最適です。",
"restartRequired": "Requires restart to take effect",
"modelTypes": {
"lora": "LoRAパス",
"checkpoint": "Checkpointパス",
"unet": "Diffusionモデルパス",
"embedding": "Embeddingパス"
},
"pathPlaceholder": "/追加モデルへのパス",
"saveSuccess": "追加フォルダーパスを更新しました。変更を適用するには再起動が必要です。",
"saveError": "追加フォルダーパスの更新に失敗しました: {message}",
"validation": {
"duplicatePath": "このパスはすでに設定されています"
}
},
"priorityTags": {
"title": "優先タグ",
"description": "各モデルタイプのタグ優先順位をカスタマイズします (例: character, concept, style(toon|toon_style))",
@@ -423,6 +542,21 @@
"downloadLocationHelp": "Civitaiからの例画像を保存するフォルダパスを入力してください",
"autoDownload": "例画像の自動ダウンロード",
"autoDownloadHelp": "例画像がないモデルの例画像を自動的にダウンロードします(ダウンロード場所の設定が必要)",
"openMode": "サンプル画像を開く動作",
"openModeHelp": "サーバー上で開くか、対応するローカルパスをコピーするか、カスタム URI を起動するかを選択します。",
"openModeOptions": {
"system": "サーバー上で開く",
"clipboard": "ローカルパスをコピー",
"uriTemplate": "カスタム URI を開く"
},
"localRoot": "ローカルのサンプル画像ルート",
"localRootHelp": "サーバーのサンプル画像ディレクトリを反映する任意のローカルまたはマウント済みルートです。空欄の場合はサーバーのパスを再利用します。",
"localRootPlaceholder": "例: /Volumes/ComfyUI/example_images",
"uriTemplate": "URI テンプレートを開く",
"uriTemplateHelp": "ファイル URI や Shortcuts リンクなどのカスタムディープリンクを使用します。",
"uriTemplatePlaceholder": "例: shortcuts://run-shortcut?name=Open%20Finder&input=text&text={{encoded_local_path}}",
"uriTemplatePlaceholders": "使用可能なプレースホルダー: {{local_path}}, {{encoded_local_path}}, {{relative_path}}, {{encoded_relative_path}}, {{file_uri}}, {{encoded_file_uri}}",
"openModeWikiLink": "リモートオープンモードの詳細",
"optimizeImages": "ダウンロード画像の最適化",
"optimizeImagesHelp": "例画像を最適化してファイルサイズを縮小し、読み込み速度を向上させます(メタデータは保持されます)",
"download": "ダウンロード",
@@ -485,23 +619,6 @@
"proxyPassword": "パスワード(任意)",
"proxyPasswordPlaceholder": "パスワード",
"proxyPasswordHelp": "プロキシ認証用のパスワード(必要な場合)"
},
"extraFolderPaths": {
"title": "追加フォルダーパス",
"help": "ComfyUIの標準パスの外部に追加のモデルフォルダを追加します。これらのパスは別々に保存され、デフォルトのフォルダと一緒にスキャンされます。",
"description": "モデルをスキャンするための追加フォルダを設定します。これらのパスはLoRA Manager固有であり、ComfyUIのデフォルトパスとマージされます。",
"modelTypes": {
"lora": "LoRAパス",
"checkpoint": "Checkpointパス",
"unet": "Diffusionモデルパス",
"embedding": "Embeddingパス"
},
"pathPlaceholder": "/追加モデルへのパス",
"saveSuccess": "追加フォルダーパスを更新しました。",
"saveError": "追加フォルダーパスの更新に失敗しました: {message}",
"validation": {
"duplicatePath": "このパスはすでに設定されています"
}
}
},
"loras": {
@@ -570,7 +687,8 @@
"autoOrganize": "自動整理を実行",
"skipMetadataRefresh": "選択したモデルのメタデータ更新をスキップ",
"resumeMetadataRefresh": "選択したモデルのメタデータ更新を再開",
"deleteAll": "すべてのモデルを削除",
"deleteAll": "選択したものを削除",
"downloadMissingLoras": "不足している LoRA をダウンロード",
"clear": "選択をクリア",
"skipMetadataRefreshCount": "スキップ({count}モデル)",
"resumeMetadataRefreshCount": "再開({count}モデル)",
@@ -600,6 +718,7 @@
"moveToFolder": "フォルダに移動",
"repairMetadata": "メタデータを修復",
"excludeModel": "モデルを除外",
"restoreModel": "モデルを復元",
"deleteModel": "モデルを削除",
"shareRecipe": "レシピを共有",
"viewAllLoras": "すべてのLoRAを表示",
@@ -641,6 +760,8 @@
"root": "ルート",
"browseFolders": "フォルダを参照:",
"downloadAndSaveRecipe": "ダウンロード & レシピ保存",
"importRecipeOnly": "レシピのみインポート",
"importAndDownload": "インポートとダウンロード",
"downloadMissingLoras": "不足しているLoRAをダウンロード",
"saveRecipe": "レシピを保存",
"loraCountInfo": "{existing}/{total} ライブラリ内)",
@@ -682,7 +803,11 @@
"lorasCountAsc": "少ない順"
},
"refresh": {
"title": "レシピリストを更新"
"title": "レシピリストを更新",
"quick": "変更を同期",
"quickTooltip": "変更を同期 - キャッシュを再構築せずにクイック更新",
"full": "キャッシュを再構築",
"fullTooltip": "キャッシュを再構築 - すべてのレシピファイルを完全に再スキャン"
},
"filteredByLora": "LoRAでフィルタ済み",
"favorites": {
@@ -722,6 +847,64 @@
"failed": "レシピの修復に失敗しました: {message}",
"missingId": "レシピを修復できません: レシピIDがありません"
}
},
"batchImport": {
"title": "Batch Import Recipes",
"action": "Batch Import",
"urlList": "URL List",
"directory": "Directory",
"urlDescription": "Enter image URLs or local file paths (one per line). Each will be imported as a recipe.",
"directoryDescription": "Enter a directory path to import all images from that folder.",
"urlsLabel": "Image URLs or Local Paths",
"urlsPlaceholder": "https://civitai.com/images/...\nhttps://civitai.com/images/...\nC:/path/to/image.png\n...",
"urlsHint": "Enter one URL or path per line",
"directoryPath": "Directory Path",
"directoryPlaceholder": "/path/to/images/folder",
"browse": "Browse",
"recursive": "Include subdirectories",
"tagsOptional": "Tags (optional, applied to all recipes)",
"tagsPlaceholder": "Enter tags separated by commas",
"tagsHint": "Tags will be added to all imported recipes",
"skipNoMetadata": "Skip images without metadata",
"skipNoMetadataHelp": "Images without LoRA metadata will be skipped automatically.",
"start": "Start Import",
"startImport": "Start Import",
"importing": "Importing...",
"progress": "Progress",
"total": "Total",
"success": "Success",
"failed": "Failed",
"skipped": "Skipped",
"current": "Current",
"currentItem": "Current",
"preparing": "Preparing...",
"cancel": "Cancel",
"cancelImport": "Cancel",
"cancelled": "Import cancelled",
"completed": "Import completed",
"completedWithErrors": "Completed with errors",
"completedSuccess": "Successfully imported {count} recipe(s)",
"successCount": "Successful",
"failedCount": "Failed",
"skippedCount": "Skipped",
"totalProcessed": "Total processed",
"viewDetails": "View Details",
"newImport": "New Import",
"manualPathEntry": "Please enter the directory path manually. File browser is not available in this browser.",
"batchImportDirectorySelected": "Directory selected: {path}",
"batchImportManualEntryRequired": "File browser not available. Please enter the directory path manually.",
"backToParent": "Back to parent directory",
"folders": "Folders",
"folderCount": "{count} folders",
"imageFiles": "Image Files",
"images": "images",
"imageCount": "{count} images",
"selectFolder": "Select This Folder",
"errors": {
"enterUrls": "Please enter at least one URL or path",
"enterDirectory": "Please enter a directory path",
"startFailed": "Failed to start import: {message}"
}
}
},
"checkpoints": {
@@ -731,7 +914,8 @@
"diffusion_model": "Diffusion Model"
},
"contextMenu": {
"moveToOtherTypeFolder": "{otherType} フォルダに移動"
"moveToOtherTypeFolder": "{otherType} フォルダに移動",
"sendToWorkflow": "ワークフローに送信"
}
},
"embeddings": {
@@ -744,13 +928,23 @@
"unpinSidebar": "サイドバーの固定を解除",
"switchToListView": "リストビューに切り替え",
"switchToTreeView": "ツリー表示に切り替え",
"recursiveOn": "サブフォルダーを検索",
"recursiveOff": "現在のフォルダーのみを検索",
"recursiveOn": "サブフォルダーを含める",
"recursiveOff": "現在のフォルダーのみ",
"recursiveUnavailable": "再帰検索はツリービューでのみ利用できます",
"collapseAllDisabled": "リストビューでは利用できません",
"dragDrop": {
"unableToResolveRoot": "移動先のパスを特定できません。",
"moveUnsupported": "Move is not supported for this item."
"moveUnsupported": "この項目の移動はサポートされていません。",
"createFolderHint": "放して新しいフォルダを作成",
"newFolderName": "新しいフォルダ名",
"folderNameHint": "Enterで確定、Escでキャンセル",
"emptyFolderName": "フォルダ名を入力してください",
"invalidFolderName": "フォルダ名に無効な文字が含まれています",
"noDragState": "保留中のドラッグ操作が見つかりません"
},
"empty": {
"noFolders": "フォルダが見つかりません",
"dragHint": "ここへアイテムをドラッグしてフォルダを作成します"
}
},
"statistics": {
@@ -815,6 +1009,8 @@
"earlyAccess": "アーリーアクセス",
"earlyAccessTooltip": "アーリーアクセスが必要",
"inLibrary": "ライブラリ内",
"downloaded": "ダウンロード済み",
"downloadedTooltip": "以前にダウンロード済みですが、現在はライブラリにありません。",
"alreadyInLibrary": "既にライブラリ内",
"autoOrganizedPath": "[パステンプレートによる自動整理]",
"errors": {
@@ -905,6 +1101,14 @@
"save": "ベースモデルを更新",
"cancel": "キャンセル"
},
"bulkDownloadMissingLoras": {
"title": "不足している LoRA をダウンロード",
"message": "選択したレシピから合計 {totalCount} 個中 {uniqueCount} 個のユニークな不足している LoRA が見つかりました。",
"previewTitle": "ダウンロードする LoRA:",
"moreItems": "...あと {count} 個",
"note": "ファイルはデフォルトのパステンプレートを使用してダウンロードされます。LoRA の数によっては時間がかかる場合があります。",
"downloadButton": "{count} 個の LoRA をダウンロード"
},
"exampleAccess": {
"title": "ローカル例画像",
"message": "このモデルのローカル例画像が見つかりませんでした。表示オプション:",
@@ -956,7 +1160,9 @@
"viewOnCivitai": "Civitaiで表示",
"viewOnCivitaiText": "Civitaiで表示",
"viewCreatorProfile": "作成者プロフィールを表示",
"openFileLocation": "ファイルの場所を開く"
"openFileLocation": "ファイルの場所を開く",
"sendToWorkflow": "ComfyUI に送信",
"sendToWorkflowText": "ComfyUI に送信"
},
"openFileLocation": {
"success": "ファイルの場所を正常に開きました",
@@ -964,6 +1170,9 @@
"copied": "パスをクリップボードにコピーしました: {{path}}",
"clipboardFallback": "パス: {{path}}"
},
"sendToWorkflow": {
"noFilePath": "ComfyUI に送信できません:ファイルパスがありません"
},
"metadata": {
"version": "バージョン",
"fileName": "ファイル名",
@@ -1000,6 +1209,8 @@
"cancel": "編集をキャンセル",
"save": "変更を保存",
"addPlaceholder": "入力して追加するか、下の提案をクリック",
"editWord": "トリガーワードを編集",
"editPlaceholder": "トリガーワードを編集",
"copyWord": "トリガーワードをコピー",
"deleteWord": "トリガーワードを削除",
"suggestions": {
@@ -1071,17 +1282,33 @@
"days": "{count}日後"
},
"badges": {
"current": "現在のバージョン",
"current": "開いたバージョン",
"currentTooltip": "このモーダルを開くために選択したバージョンです",
"inLibrary": "ライブラリにあります",
"inLibraryTooltip": "このバージョンはローカルライブラリに存在します",
"downloaded": "ダウンロード済み",
"downloadedTooltip": "このバージョンは以前ダウンロードされましたが、現在はライブラリにありません",
"newer": "新しいバージョン",
"newerTooltip": "このバージョンはローカルの最新バージョンより新しいです",
"earlyAccess": "早期アクセス",
"ignored": "無視中"
"earlyAccessTooltip": "このバージョンは現在 Civitai の早期アクセスが必要です",
"ignored": "無視中",
"ignoredTooltip": "このバージョンの更新通知は無効です",
"onSiteOnly": "サイト内のみ",
"onSiteOnlyTooltip": "このバージョンはCivitaiサイト内でのみ利用可能で、ダウンロードはできません"
},
"actions": {
"download": "ダウンロード",
"downloadTooltip": "このバージョンをダウンロード",
"downloadEarlyAccessTooltip": "Civitai からこの早期アクセス版をダウンロード",
"downloadNotAllowedTooltip": "このバージョンはCivitaiサイト内でのみ利用可能で、ダウンロードはできません",
"delete": "削除",
"deleteTooltip": "このローカルバージョンを削除",
"ignore": "無視",
"unignore": "無視を解除",
"ignoreTooltip": "このバージョンの更新通知を無視",
"unignoreTooltip": "このバージョンの更新通知を再開",
"viewVersionOnCivitai": "Civitai でバージョンを表示",
"earlyAccessTooltip": "早期アクセス購入が必要",
"resumeModelUpdates": "このモデルの更新を再開",
"ignoreModelUpdates": "このモデルの更新を無視",
@@ -1221,7 +1448,9 @@
"recipeReplaced": "レシピがワークフローで置換されました",
"recipeFailedToSend": "レシピをワークフローに送信できませんでした",
"noMatchingNodes": "現在のワークフローには互換性のあるノードがありません",
"noTargetNodeSelected": "ターゲットノードが選択されていません"
"noTargetNodeSelected": "ターゲットノードが選択されていません",
"modelUpdated": "モデルがワークフローで更新されました",
"modelFailed": "モデルノードの更新に失敗しました"
},
"nodeSelector": {
"recipe": "レシピ",
@@ -1235,6 +1464,10 @@
"opened": "例画像フォルダが開かれました",
"openingFolder": "例画像フォルダを開いています",
"failedToOpen": "例画像フォルダを開くのに失敗しました",
"copiedPath": "パスをクリップボードにコピーしました: {{path}}",
"clipboardFallback": "パス: {{path}}",
"copiedUri": "リンクをクリップボードにコピーしました: {{uri}}",
"uriClipboardFallback": "リンク: {{uri}}",
"setupRequired": "例画像ストレージ",
"setupDescription": "カスタム例画像を追加するには、まずダウンロード場所を設定する必要があります。",
"setupUsage": "このパスは、ダウンロードした例画像とカスタム画像の両方に使用されます。",
@@ -1342,7 +1575,14 @@
"showWechatQR": "WeChat QRコードを表示",
"hideWechatQR": "WeChat QRコードを非表示"
},
"footer": "LoRA Managerをご利用いただきありがとうございます ❤️"
"footer": "LoRA Managerをご利用いただきありがとうございます ❤️",
"supporters": {
"title": "サポーターの皆様に感謝",
"subtitle": "{count} 名のサポーターの皆様に、このプロジェクトを実現していただきありがとうございます",
"specialThanks": "特別感謝",
"allSupporters": "全サポーター",
"totalCount": "サポーター {count} 名"
}
},
"toast": {
"general": {
@@ -1365,6 +1605,7 @@
"pleaseSelectVersion": "バージョンを選択してください",
"versionExists": "このバージョンは既にライブラリに存在します",
"downloadCompleted": "ダウンロードが正常に完了しました",
"downloadSkippedByBaseModel": "ベースモデル {baseModel} が除外されているため、ダウンロードをスキップしました",
"autoOrganizeSuccess": "{count} {type} の自動整理が正常に完了しました",
"autoOrganizePartialSuccess": "自動整理が完了しました:{total} モデル中 {success} 移動、{failures} 失敗",
"autoOrganizeFailed": "自動整理に失敗しました:{error}",
@@ -1376,13 +1617,19 @@
"loadFailed": "{modelType}の読み込みに失敗しました:{message}",
"refreshComplete": "更新完了",
"refreshFailed": "レシピの更新に失敗しました:{message}",
"syncComplete": "同期完了",
"syncFailed": "レシピの同期に失敗しました:{message}",
"updateFailed": "レシピの更新に失敗しました:{error}",
"updateError": "レシピ更新エラー:{message}",
"nameSaved": "レシピ\"{name}\"が正常に保存されました",
"nameUpdated": "レシピ名が正常に更新されました",
"tagsUpdated": "レシピタグが正常に更新されました",
"sourceUrlUpdated": "ソースURLが正常に更新されました",
"promptUpdated": "プロンプトが正常に更新されました",
"negativePromptUpdated": "ネガティブプロンプトが正常に更新されました",
"promptEditorHint": "Enterキーで保存、Shift+Enterで改行",
"noRecipeId": "レシピIDが利用できません",
"sendToWorkflowFailed": "ワークフローへのレシピ送信に失敗しました:{message}",
"copyFailed": "レシピ構文のコピーエラー:{message}",
"noMissingLoras": "ダウンロードする不足LoRAがありません",
"missingLorasInfoFailed": "不足LoRAの情報取得に失敗しました",
@@ -1410,9 +1657,20 @@
"processingError": "処理エラー:{message}",
"folderBrowserError": "フォルダブラウザの読み込みエラー:{message}",
"recipeSaveFailed": "レシピの保存に失敗しました:{error}",
"recipeSaved": "Recipe saved successfully",
"importFailed": "インポートに失敗しました:{message}",
"folderTreeFailed": "フォルダツリーの読み込みに失敗しました",
"folderTreeError": "フォルダツリー読み込みエラー"
"folderTreeError": "フォルダツリー読み込みエラー",
"batchImportFailed": "Failed to start batch import: {message}",
"batchImportCancelling": "Cancelling batch import...",
"batchImportCancelFailed": "Failed to cancel batch import: {message}",
"batchImportNoUrls": "Please enter at least one URL or file path",
"batchImportNoDirectory": "Please enter a directory path",
"batchImportBrowseFailed": "Failed to browse directory: {message}",
"batchImportDirectorySelected": "Directory selected: {path}",
"noRecipesSelected": "レシピが選択されていません",
"noMissingLorasInSelection": "選択したレシピに不足している LoRA が見つかりませんでした",
"noLoraRootConfigured": "LoRA ルートディレクトリが設定されていません。設定でデフォルトの LoRA ルートを設定してください。"
},
"models": {
"noModelsSelected": "モデルが選択されていません",
@@ -1479,6 +1737,8 @@
"mappingSaveFailed": "ベースモデルマッピングの保存に失敗しました:{message}",
"downloadTemplatesUpdated": "ダウンロードパステンプレートが更新されました",
"downloadTemplatesFailed": "ダウンロードパステンプレートの保存に失敗しました:{message}",
"recipesPathUpdated": "レシピ保存先を更新しました",
"recipesPathSaveFailed": "レシピ保存先の更新に失敗しました: {message}",
"settingsUpdated": "設定が更新されました:{setting}",
"compactModeToggled": "コンパクトモード {state}",
"settingSaveFailed": "設定の保存に失敗しました:{message}",
@@ -1529,8 +1789,8 @@
},
"triggerWords": {
"loadFailed": "学習済みワードを読み込めませんでした",
"tooLong": "トリガーワードは100ワードを超えてはいけません",
"tooMany": "最大30トリガーワードまで許可されています",
"tooLong": "トリガーワードは500ワードを超えてはいけません",
"tooMany": "最大100トリガーワードまで許可されています",
"alreadyExists": "このトリガーワードは既に存在します",
"updateSuccess": "トリガーワードが正常に更新されました",
"updateFailed": "トリガーワードの更新に失敗しました",
@@ -1591,6 +1851,8 @@
"deleteFailed": "{type}の削除に失敗しました:{message}",
"excludeSuccess": "{type}が正常に除外されました",
"excludeFailed": "{type}の除外に失敗しました:{message}",
"restoreSuccess": "{type}を復元しました",
"restoreFailed": "{type}の復元に失敗しました: {message}",
"fileNameUpdated": "ファイル名が正常に更新されました",
"fileRenameFailed": "ファイル名の変更に失敗しました:{error}",
"previewUpdated": "プレビューが正常に更新されました",
@@ -1622,6 +1884,37 @@
"moveFailed": "Failed to move item: {message}"
}
},
"doctor": {
"kicker": "システム診断",
"title": "ドクター",
"buttonTitle": "診断と一般的な修復を実行",
"loading": "環境を確認中...",
"footer": "修復後も問題が続く場合は、診断パッケージをエクスポートしてください。",
"summary": {
"idle": "設定、キャッシュ整合性、UI の一貫性をヘルスチェックします。",
"ok": "現在の環境でアクティブな問題は見つかりませんでした。",
"warning": "{count} 件の問題が見つかりました。ほとんどはこのパネルから直接修復できます。",
"error": "アプリが完全に正常になる前に、{count} 件の問題に対処する必要があります。"
},
"status": {
"ok": "正常",
"warning": "要注意",
"error": "対応が必要"
},
"actions": {
"runAgain": "再実行",
"exportBundle": "パッケージをエクスポート"
},
"toast": {
"loadFailed": "診断の読み込みに失敗しました: {message}",
"repairSuccess": "キャッシュの再構築が完了しました。",
"repairFailed": "キャッシュの再構築に失敗しました: {message}",
"exportSuccess": "診断パッケージをエクスポートしました。",
"exportFailed": "診断パッケージのエクスポートに失敗しました: {message}",
"conflictsResolved": "{count} 件のファイル名競合が解決されました。",
"conflictsResolveFailed": "ファイル名競合の解決に失敗しました: {message}"
}
},
"banners": {
"versionMismatch": {
"title": "アプリケーション更新が検出されました",


@@ -1,8 +1,11 @@
{
"common": {
"cancel": "취소",
"confirm": "확인",
"actions": {
"save": "저장",
"cancel": "취소",
"confirm": "확인",
"delete": "삭제",
"move": "이동",
"refresh": "새로고침",
@@ -11,7 +14,9 @@
"backToTop": "맨 위로",
"settings": "설정",
"help": "도움말",
"add": "추가"
"add": "추가",
"close": "닫기",
"menu": "메뉴"
},
"status": {
"loading": "로딩 중...",
@@ -171,6 +176,9 @@
"success": "{count}개의 레시피가 성공적으로 복구되었습니다.",
"cancelled": "수리가 취소되었습니다. {count}개의 레시피가 수리되었습니다.",
"error": "레시피 복구 실패: {message}"
},
"manageExcludedModels": {
"label": "제외된 모델 관리"
}
},
"header": {
@@ -218,12 +226,14 @@
"presetOverwriteConfirm": "프리셋 \"{name}\"이(가) 이미 존재합니다. 덮어쓰시겠습니까?",
"presetNamePlaceholder": "프리셋 이름...",
"baseModel": "베이스 모델",
"baseModelSearchPlaceholder": "베이스 모델 검색...",
"modelTags": "태그 (상위 20개)",
"modelTypes": "Model Types",
"modelTypes": "모델 유형",
"license": "라이선스",
"noCreditRequired": "크레딧 표기 없음",
"allowSellingGeneratedContent": "판매 허용",
"noTags": "태그 없음",
"noBaseModelMatches": "현재 검색과 일치하는 베이스 모델이 없습니다.",
"clearAll": "모든 필터 지우기",
"any": "아무",
"all": "모두",
@@ -246,6 +256,33 @@
"civitaiApiKey": "Civitai API 키",
"civitaiApiKeyPlaceholder": "Civitai API 키를 입력하세요",
"civitaiApiKeyHelp": "Civitai에서 모델을 다운로드할 때 인증에 사용됩니다",
"civitaiHost": {
"label": "Civitai 호스트",
"help": "\"View on Civitai\" 링크를 사용할 때 어떤 Civitai 사이트를 열지 선택합니다.",
"options": {
"com": "civitai.com(SFW 전용)",
"red": "civitai.red(무제한)"
}
},
"downloadBackend": {
"label": "다운로드 백엔드",
"help": "모델 파일을 다운로드하는 방식을 선택합니다. Python은 내장 다운로더를 사용하고, aria2는 실험적인 외부 다운로더 프로세스를 사용합니다.",
"options": {
"python": "Python(내장)",
"aria2": "aria2(실험적)"
}
},
"aria2cPath": {
"label": "aria2c 경로",
"help": "aria2c 실행 파일의 선택적 경로입니다. 비워 두면 시스템 PATH의 aria2c를 사용합니다.",
"placeholder": "비워 두면 PATH의 aria2c를 사용합니다"
},
"aria2HelpLink": "aria2 다운로드 백엔드 설정 방법 알아보기",
"civitaiHostBanner": {
"title": "Civitai 호스트 기본 설정 사용 가능",
"content": "이제 Civitai는 SFW 콘텐츠에 civitai.com을, 무제한 콘텐츠에 civitai.red를 사용합니다. 설정에서 기본으로 열 사이트를 변경할 수 있습니다.",
"openSettings": "설정 열기"
},
"openSettingsFileLocation": {
"label": "설정 폴더 열기",
"tooltip": "settings.json이 있는 폴더를 엽니다",
@@ -256,10 +293,13 @@
},
"sections": {
"contentFiltering": "콘텐츠 필터링",
"downloads": "다운로드",
"videoSettings": "비디오 설정",
"layoutSettings": "레이아웃 설정",
"misc": "기타",
"backup": "백업",
"folderSettings": "기본 루트",
"recipeSettings": "레시피",
"extraFolderPaths": "추가 폴다 경로",
"downloadPathTemplates": "다운로드 경로 템플릿",
"priorityTags": "우선순위 태그",
@@ -287,7 +327,15 @@
"blurNsfwContent": "NSFW 콘텐츠 블러 처리",
"blurNsfwContentHelp": "성인(NSFW) 콘텐츠 미리보기 이미지를 블러 처리합니다",
"showOnlySfw": "SFW 결과만 표시",
"showOnlySfwHelp": "탐색 및 검색 시 모든 NSFW 콘텐츠를 필터링합니다"
"showOnlySfwHelp": "탐색 및 검색 시 모든 NSFW 콘텐츠를 필터링합니다",
"matureBlurThreshold": "성인 콘텐츠 블러 임계값",
"matureBlurThresholdHelp": "NSFW 블러가 활성화될 때 어떤 등급 레벨부터 블러 필터링을 시작할지 설정합니다.",
"matureBlurThresholdOptions": {
"pg13": "PG13 이상",
"r": "R 이상(기본값)",
"x": "X 이상",
"xxx": "XXX만"
}
},
"videoSettings": {
"autoplayOnHover": "호버 시 비디오 자동 재생",
@@ -311,6 +359,54 @@
"saveFailed": "건너뛰기 경로를 저장할 수 없습니다: {message}"
}
},
"backup": {
"autoEnabled": "자동 백업",
"autoEnabledHelp": "하루에 한 번 로컬 스냅샷을 만들고 보존 정책에 따라 최신 스냅샷을 유지합니다.",
"retention": "보존 개수",
"retentionHelp": "오래된 자동 스냅샷을 삭제하기 전에 몇 개를 유지할지 지정합니다.",
"management": "백업 관리",
"managementHelp": "현재 사용자 상태를 내보내거나 백업 아카이브에서 복원합니다.",
"scopeHelp": "설정, 다운로드 기록, 모델 업데이트 상태를 백업합니다. 모델 파일과 다시 생성할 수 있는 캐시는 포함되지 않습니다.",
"locationSummary": "현재 백업 위치",
"openFolderButton": "백업 폴더 열기",
"openFolderSuccess": "백업 폴더를 열었습니다",
"openFolderFailed": "백업 폴더를 열지 못했습니다",
"locationCopied": "백업 경로를 클립보드에 복사했습니다: {{path}}",
"locationClipboardFallback": "백업 경로: {{path}}",
"exportButton": "백업 내보내기",
"exportSuccess": "백업을 성공적으로 내보냈습니다.",
"exportFailed": "백업 내보내기에 실패했습니다: {message}",
"importButton": "백업 가져오기",
"importConfirm": "이 백업을 가져와서 로컬 사용자 상태를 덮어쓰시겠습니까?",
"importSuccess": "백업을 성공적으로 가져왔습니다.",
"importFailed": "백업 가져오기에 실패했습니다: {message}",
"latestSnapshot": "최근 스냅샷",
"latestAutoSnapshot": "최근 자동 스냅샷",
"snapshotCount": "저장된 스냅샷",
"noneAvailable": "아직 스냅샷이 없습니다"
},
"downloadSkipBaseModels": {
"label": "기본 모델 다운로드 건너뛰기",
"help": "모든 다운로드 흐름에 적용됩니다. 여기서는 지원되는 기본 모델만 선택할 수 있습니다.",
"searchPlaceholder": "기본 모델 필터링...",
"empty": "현재 검색과 일치하는 기본 모델이 없습니다.",
"summary": {
"none": "선택 없음",
"count": "{count}개 선택됨"
},
"actions": {
"edit": "편집",
"collapse": "접기",
"clear": "지우기"
},
"validation": {
"saveFailed": "제외된 기본 모델을 저장할 수 없습니다: {message}"
}
},
"skipPreviouslyDownloadedModelVersions": {
"label": "이전에 다운로드한 모델 버전 건너뛰기",
"help": "활성화하면 다운로드 기록 서비스가 해당 버전이 이미 다운로드되었음을 기록한 경우 LoRA Manager는 해당 모델 버전 다운로드를 건너뜁니다. 모든 다운로드 플로우에 적용됩니다."
},
"layoutSettings": {
"displayDensity": "표시 밀도",
"displayDensityOptions": {
@@ -333,6 +429,8 @@
"hover": "호버 시 표시"
},
"cardInfoDisplayHelp": "모델 정보 및 액션 버튼을 언제 표시할지 선택하세요",
"showVersionOnCard": "카드에 버전 표시",
"showVersionOnCardHelp": "모델 카드에 버전 이름 표시 여부를 전환합니다",
"modelCardFooterAction": "모델 카드 버튼 동작",
"modelCardFooterActionOptions": {
"exampleImages": "예시 이미지 열기",
@@ -359,8 +457,29 @@
"defaultUnetRootHelp": "다운로드, 가져오기 및 이동을 위한 기본 Diffusion Model (UNET) 루트 디렉토리를 설정합니다",
"defaultEmbeddingRoot": "Embedding 루트",
"defaultEmbeddingRootHelp": "다운로드, 가져오기 및 이동을 위한 기본 Embedding 루트 디렉토리를 설정합니다",
"recipesPath": "레시피 저장 경로",
"recipesPathHelp": "저장된 레시피를 위한 선택적 사용자 지정 디렉터리입니다. 비워 두면 첫 번째 LoRA 루트의 recipes 폴더를 사용합니다.",
"recipesPathPlaceholder": "/path/to/recipes",
"recipesPathMigrating": "레시피 저장 경로를 이동 중...",
"noDefault": "기본값 없음"
},
"extraFolderPaths": {
"title": "추가 폴다 경로",
"description": "LoRA Manager 전용 추가 모델 루트 경로입니다. ComfyUI의 표준 폴더 외부 위치에서 모델을 로드하여 대규모 라이브러리로 인한 성능 저하를 방지합니다.",
"restartRequired": "Requires restart to take effect",
"modelTypes": {
"lora": "LoRA 경로",
"checkpoint": "Checkpoint 경로",
"unet": "Diffusion 모델 경로",
"embedding": "Embedding 경로"
},
"pathPlaceholder": "/추가/모델/경로",
"saveSuccess": "추가 폴다 경로가 업데이트되었습니다. 변경 사항을 적용하려면 재시작이 필요합니다.",
"saveError": "추가 폴다 경로 업데이트 실패: {message}",
"validation": {
"duplicatePath": "이 경로는 이미 구성되어 있습니다"
}
},
"priorityTags": {
"title": "우선순위 태그",
"description": "모델 유형별 태그 우선순위를 사용자 지정합니다(예: character, concept, style(toon|toon_style)).",
@@ -423,6 +542,21 @@
"downloadLocationHelp": "Civitai의 예시 이미지가 저장될 폴더 경로를 입력하세요",
"autoDownload": "예시 이미지 자동 다운로드",
"autoDownloadHelp": "예시 이미지가 없는 모델의 예시 이미지를 자동으로 다운로드합니다 (다운로드 위치 설정 필요)",
"openMode": "예시 이미지 열기 동작",
"openModeHelp": "서버에서 열지, 매핑된 로컬 경로를 복사할지, 사용자 지정 URI를 실행할지 선택합니다.",
"openModeOptions": {
"system": "서버에서 열기",
"clipboard": "로컬 경로 복사",
"uriTemplate": "사용자 지정 URI 열기"
},
"localRoot": "로컬 예시 이미지 루트",
"localRootHelp": "서버 예시 이미지 디렉터리를 반영하는 선택적 로컬 또는 마운트된 루트입니다. 비워 두면 서버 경로를 재사용합니다.",
"localRootPlaceholder": "예: /Volumes/ComfyUI/example_images",
"uriTemplate": "URI 템플릿 열기",
"uriTemplateHelp": "파일 URI 또는 Shortcuts 링크 같은 사용자 지정 딥링크를 사용합니다.",
"uriTemplatePlaceholder": "예: shortcuts://run-shortcut?name=Open%20Finder&input=text&text={{encoded_local_path}}",
"uriTemplatePlaceholders": "사용 가능한 플레이스홀더: {{local_path}}, {{encoded_local_path}}, {{relative_path}}, {{encoded_relative_path}}, {{file_uri}}, {{encoded_file_uri}}",
"openModeWikiLink": "원격 열기 모드에 대해 자세히 알아보기",
"optimizeImages": "다운로드된 이미지 최적화",
"optimizeImagesHelp": "파일 크기를 줄이고 로딩 속도를 향상시키기 위해 예시 이미지를 최적화합니다 (메타데이터는 보존됨)",
"download": "다운로드",
@@ -485,23 +619,6 @@
"proxyPassword": "비밀번호 (선택사항)",
"proxyPasswordPlaceholder": "password",
"proxyPasswordHelp": "프록시 인증에 필요한 비밀번호 (필요한 경우)"
},
"extraFolderPaths": {
"title": "추가 폴다 경로",
"help": "ComfyUI의 표준 경로 외부에 추가 모델 폴드를 추가하세요. 이러한 경로는 별도로 저장되며 기본 폴와 함께 스캔됩니다.",
"description": "모델을 스캔하기 위한 추가 폴를 설정하세요. 이러한 경로는 LoRA Manager 특유의 것이며 ComfyUI의 기본 경로와 병합됩니다.",
"modelTypes": {
"lora": "LoRA 경로",
"checkpoint": "Checkpoint 경로",
"unet": "Diffusion 모델 경로",
"embedding": "Embedding 경로"
},
"pathPlaceholder": "/추가/모델/경로",
"saveSuccess": "추가 폴다 경로가 업데이트되었습니다.",
"saveError": "추가 폴다 경로 업데이트 실패: {message}",
"validation": {
"duplicatePath": "이 경로는 이미 구성되어 있습니다"
}
}
},
"loras": {
@@ -570,7 +687,8 @@
"autoOrganize": "자동 정리 선택",
"skipMetadataRefresh": "선택한 모델의 메타데이터 새로고침 건너뛰기",
"resumeMetadataRefresh": "선택한 모델의 메타데이터 새로고침 재개",
"deleteAll": "모든 모델 삭제",
"deleteAll": "선택된 항목 삭제",
"downloadMissingLoras": "누락된 LoRA 다운로드",
"clear": "선택 지우기",
"skipMetadataRefreshCount": "건너뛰기({count}개 모델)",
"resumeMetadataRefreshCount": "재개({count}개 모델)",
@@ -600,6 +718,7 @@
"moveToFolder": "폴더로 이동",
"repairMetadata": "메타데이터 복구",
"excludeModel": "모델 제외",
"restoreModel": "모델 복원",
"deleteModel": "모델 삭제",
"shareRecipe": "레시피 공유",
"viewAllLoras": "모든 LoRA 보기",
@@ -641,6 +760,8 @@
"root": "루트",
"browseFolders": "폴더 탐색:",
"downloadAndSaveRecipe": "다운로드 및 레시피 저장",
"importRecipeOnly": "레시피만 가져오기",
"importAndDownload": "가져오기 및 다운로드",
"downloadMissingLoras": "누락된 LoRA 다운로드",
"saveRecipe": "레시피 저장",
"loraCountInfo": "({existing}/{total} 라이브러리에 있음)",
@@ -682,7 +803,11 @@
"lorasCountAsc": "적은순"
},
"refresh": {
"title": "레시피 목록 새로고침"
"title": "레시피 목록 새로고침",
"quick": "변경 사항 동기화",
"quickTooltip": "변경 사항 동기화 - 캐시를 재구성하지 않고 빠른 새로고침",
"full": "캐시 재구성",
"fullTooltip": "캐시 재구성 - 모든 레시피 파일을 완전히 다시 스캔"
},
"filteredByLora": "LoRA로 필터링됨",
"favorites": {
@@ -722,6 +847,64 @@
"failed": "레시피 복구 실패: {message}",
"missingId": "레시피를 복구할 수 없음: 레시피 ID 누락"
}
},
"batchImport": {
"title": "Batch Import Recipes",
"action": "Batch Import",
"urlList": "URL List",
"directory": "Directory",
"urlDescription": "Enter image URLs or local file paths (one per line). Each will be imported as a recipe.",
"directoryDescription": "Enter a directory path to import all images from that folder.",
"urlsLabel": "Image URLs or Local Paths",
"urlsPlaceholder": "https://civitai.com/images/...\nhttps://civitai.com/images/...\nC:/path/to/image.png\n...",
"urlsHint": "Enter one URL or path per line",
"directoryPath": "Directory Path",
"directoryPlaceholder": "/path/to/images/folder",
"browse": "Browse",
"recursive": "Include subdirectories",
"tagsOptional": "Tags (optional, applied to all recipes)",
"tagsPlaceholder": "Enter tags separated by commas",
"tagsHint": "Tags will be added to all imported recipes",
"skipNoMetadata": "Skip images without metadata",
"skipNoMetadataHelp": "Images without LoRA metadata will be skipped automatically.",
"start": "Start Import",
"startImport": "Start Import",
"importing": "Importing...",
"progress": "Progress",
"total": "Total",
"success": "Success",
"failed": "Failed",
"skipped": "Skipped",
"current": "Current",
"currentItem": "Current",
"preparing": "Preparing...",
"cancel": "Cancel",
"cancelImport": "Cancel",
"cancelled": "Import cancelled",
"completed": "Import completed",
"completedWithErrors": "Completed with errors",
"completedSuccess": "Successfully imported {count} recipe(s)",
"successCount": "Successful",
"failedCount": "Failed",
"skippedCount": "Skipped",
"totalProcessed": "Total processed",
"viewDetails": "View Details",
"newImport": "New Import",
"manualPathEntry": "Please enter the directory path manually. File browser is not available in this browser.",
"batchImportDirectorySelected": "Directory selected: {path}",
"batchImportManualEntryRequired": "File browser not available. Please enter the directory path manually.",
"backToParent": "Back to parent directory",
"folders": "Folders",
"folderCount": "{count} folders",
"imageFiles": "Image Files",
"images": "images",
"imageCount": "{count} images",
"selectFolder": "Select This Folder",
"errors": {
"enterUrls": "Please enter at least one URL or path",
"enterDirectory": "Please enter a directory path",
"startFailed": "Failed to start import: {message}"
}
}
},
"checkpoints": {
@@ -731,7 +914,8 @@
"diffusion_model": "Diffusion Model"
},
"contextMenu": {
"moveToOtherTypeFolder": "{otherType} 폴더로 이동"
"moveToOtherTypeFolder": "{otherType} 폴더로 이동",
"sendToWorkflow": "워크플로우로 전송"
}
},
"embeddings": {
@@ -744,13 +928,23 @@
"unpinSidebar": "사이드바 고정 해제",
"switchToListView": "목록 보기로 전환",
"switchToTreeView": "트리 보기로 전환",
"recursiveOn": "하위 폴더 검색",
"recursiveOff": "현재 폴더만 검색",
"recursiveOn": "하위 폴더 포함",
"recursiveOff": "현재 폴더만",
"recursiveUnavailable": "재귀 검색은 트리 보기에서만 사용할 수 있습니다",
"collapseAllDisabled": "목록 보기에서는 사용할 수 없습니다",
"dragDrop": {
"unableToResolveRoot": "이동할 대상 경로를 확인할 수 없습니다.",
"moveUnsupported": "Move is not supported for this item."
"moveUnsupported": "이 항목은 이동을 지원하지 않습니다.",
"createFolderHint": "놓아서 새 폴더 만들기",
"newFolderName": "새 폴더 이름",
"folderNameHint": "Enter를 눌러 확인, Escape를 눌러 취소",
"emptyFolderName": "폴더 이름을 입력하세요",
"invalidFolderName": "폴더 이름에 잘못된 문자가 포함되어 있습니다",
"noDragState": "보류 중인 드래그 작업을 찾을 수 없습니다"
},
"empty": {
"noFolders": "폴더를 찾을 수 없습니다",
"dragHint": "항목을 여기로 드래그하여 폴더를 만듭니다"
}
},
"statistics": {
@@ -815,6 +1009,8 @@
"earlyAccess": "얼리 액세스",
"earlyAccessTooltip": "얼리 액세스 필요",
"inLibrary": "라이브러리에 있음",
"downloaded": "다운로드됨",
"downloadedTooltip": "이전에 다운로드했지만 현재 라이브러리에 없습니다.",
"alreadyInLibrary": "이미 라이브러리에 있음",
"autoOrganizedPath": "[경로 템플릿으로 자동 정리됨]",
"errors": {
@@ -905,6 +1101,14 @@
"save": "베이스 모델 업데이트",
"cancel": "취소"
},
"bulkDownloadMissingLoras": {
"title": "누락된 LoRA 다운로드",
"message": "선택한 레시피에서 총 {totalCount}개 중 {uniqueCount}개의 고유한 누락된 LoRA를 찾았습니다.",
"previewTitle": "다운로드할 LoRA:",
"moreItems": "...그리고 {count}개 더",
"note": "파일은 기본 경로 템플릿을 사용하여 다운로드됩니다. LoRA의 수에 따라 다소 시간이 걸릴 수 있습니다.",
"downloadButton": "{count}개 LoRA 다운로드"
},
"exampleAccess": {
"title": "로컬 예시 이미지",
"message": "이 모델의 로컬 예시 이미지를 찾을 수 없습니다. 보기 옵션:",
@@ -956,7 +1160,9 @@
"viewOnCivitai": "Civitai에서 보기",
"viewOnCivitaiText": "Civitai에서 보기",
"viewCreatorProfile": "제작자 프로필 보기",
"openFileLocation": "파일 위치 열기"
"openFileLocation": "파일 위치 열기",
"sendToWorkflow": "ComfyUI로 보내기",
"sendToWorkflowText": "ComfyUI로 보내기"
},
"openFileLocation": {
"success": "파일 위치가 성공적으로 열렸습니다",
@@ -964,6 +1170,9 @@
"copied": "경로가 클립보드에 복사되었습니다: {{path}}",
"clipboardFallback": "경로: {{path}}"
},
"sendToWorkflow": {
"noFilePath": "ComfyUI로 보낼 수 없습니다: 파일 경로가 없습니다"
},
"metadata": {
"version": "버전",
"fileName": "파일명",
@@ -1000,6 +1209,8 @@
"cancel": "편집 취소",
"save": "변경사항 저장",
"addPlaceholder": "입력하거나 아래 제안을 클릭하세요",
"editWord": "트리거 단어 편집",
"editPlaceholder": "트리거 단어 편집",
"copyWord": "트리거 단어 복사",
"deleteWord": "트리거 단어 삭제",
"suggestions": {
@@ -1071,17 +1282,33 @@
"days": "{count}일 후"
},
"badges": {
"current": "현재 버전",
"current": "열린 버전",
"currentTooltip": "이 모달을 열 때 사용한 버전입니다",
"inLibrary": "라이브러리에 있음",
"inLibraryTooltip": "이 버전은 로컬 라이브러리에 있습니다",
"downloaded": "다운로드됨",
"downloadedTooltip": "이 버전은 이전에 다운로드되었지만 현재는 라이브러리에 없습니다",
"newer": "최신 버전",
"newerTooltip": "이 버전은 로컬의 최신 버전보다 더 새롭습니다",
"earlyAccess": "얼리 액세스",
"ignored": "무시됨"
"earlyAccessTooltip": "이 버전은 현재 Civitai 얼리 액세스가 필요합니다",
"ignored": "무시됨",
"ignoredTooltip": "이 버전은 업데이트 알림이 비활성화되어 있습니다",
"onSiteOnly": "사이트 내 전용",
"onSiteOnlyTooltip": "이 버전은 Civitai 사이트 내에서만 사용 가능하며 다운로드할 수 없습니다"
},
"actions": {
"download": "다운로드",
"downloadTooltip": "이 버전 다운로드",
"downloadEarlyAccessTooltip": "Civitai에서 이 얼리 액세스 버전 다운로드",
"downloadNotAllowedTooltip": "이 버전은 Civitai 사이트 내에서만 사용 가능하며 다운로드할 수 없습니다",
"delete": "삭제",
"deleteTooltip": "이 로컬 버전 삭제",
"ignore": "무시",
"unignore": "무시 해제",
"ignoreTooltip": "이 버전의 업데이트 알림 무시",
"unignoreTooltip": "이 버전의 업데이트 알림 다시 받기",
"viewVersionOnCivitai": "Civitai에서 버전 보기",
"earlyAccessTooltip": "얼리 액세스 구매 필요",
"resumeModelUpdates": "이 모델 업데이트 재개",
"ignoreModelUpdates": "이 모델 업데이트 무시",
@@ -1221,7 +1448,9 @@
"recipeReplaced": "레시피가 워크플로에서 교체되었습니다",
"recipeFailedToSend": "레시피를 워크플로로 전송하지 못했습니다",
"noMatchingNodes": "현재 워크플로에서 호환되는 노드가 없습니다",
"noTargetNodeSelected": "대상 노드가 선택되지 않았습니다"
"noTargetNodeSelected": "대상 노드가 선택되지 않았습니다",
"modelUpdated": "모델이 워크플로에서 업데이트되었습니다",
"modelFailed": "모델 노드 업데이트 실패"
},
"nodeSelector": {
"recipe": "레시피",
@@ -1235,6 +1464,10 @@
"opened": "예시 이미지 폴더가 열렸습니다",
"openingFolder": "예시 이미지 폴더를 여는 중",
"failedToOpen": "예시 이미지 폴더 열기 실패",
"copiedPath": "경로를 클립보드에 복사했습니다: {{path}}",
"clipboardFallback": "경로: {{path}}",
"copiedUri": "링크를 클립보드에 복사했습니다: {{uri}}",
"uriClipboardFallback": "링크: {{uri}}",
"setupRequired": "예시 이미지 저장소",
"setupDescription": "사용자 지정 예시 이미지를 추가하려면 먼저 다운로드 위치를 설정해야 합니다.",
"setupUsage": "이 경로는 다운로드한 예시 이미지와 사용자 지정 이미지 모두에 사용됩니다.",
@@ -1342,7 +1575,14 @@
"showWechatQR": "WeChat QR 코드 표시",
"hideWechatQR": "WeChat QR 코드 숨기기"
},
"footer": "LoRA Manager를 사용해주셔서 감사합니다! ❤️"
"footer": "LoRA Manager를 사용해주셔서 감사합니다! ❤️",
"supporters": {
"title": "후원자 분들께 감사드립니다",
"subtitle": "이 프로젝트를 가능하게 해준 {count}명의 후원자분들께 감사드립니다",
"specialThanks": "특별 감사",
"allSupporters": "모든 후원자",
"totalCount": "총 {count}명의 후원자"
}
},
"toast": {
"general": {
@@ -1365,6 +1605,7 @@
"pleaseSelectVersion": "버전을 선택해주세요",
"versionExists": "이 버전은 이미 라이브러리에 있습니다",
"downloadCompleted": "다운로드가 성공적으로 완료되었습니다",
"downloadSkippedByBaseModel": "기본 모델 {baseModel}이(가) 제외되어 다운로드를 건너뛰었습니다",
"autoOrganizeSuccess": "{count}개의 {type}에 대해 자동 정리가 성공적으로 완료되었습니다",
"autoOrganizePartialSuccess": "자동 정리 완료: 전체 {total}개 중 {success}개 이동, {failures}개 실패",
"autoOrganizeFailed": "자동 정리 실패: {error}",
@@ -1376,13 +1617,19 @@
"loadFailed": "{modelType} 로딩 실패: {message}",
"refreshComplete": "새로고침 완료",
"refreshFailed": "레시피 새로고침 실패: {message}",
"syncComplete": "동기화 완료",
"syncFailed": "레시피 동기화 실패: {message}",
"updateFailed": "레시피 업데이트 실패: {error}",
"updateError": "레시피 업데이트 오류: {message}",
"nameSaved": "레시피 \"{name}\"이 성공적으로 저장되었습니다",
"nameUpdated": "레시피 이름이 성공적으로 업데이트되었습니다",
"tagsUpdated": "레시피 태그가 성공적으로 업데이트되었습니다",
"sourceUrlUpdated": "소스 URL이 성공적으로 업데이트되었습니다",
"promptUpdated": "프롬프트가 성공적으로 업데이트되었습니다",
"negativePromptUpdated": "네거티브 프롬프트가 성공적으로 업데이트되었습니다",
"promptEditorHint": "Enter 키를 눌러 저장, Shift+Enter로 새 줄",
"noRecipeId": "사용 가능한 레시피 ID가 없습니다",
"sendToWorkflowFailed": "워크플로우에 레시피 보내기 실패: {message}",
"copyFailed": "레시피 문법 복사 오류: {message}",
"noMissingLoras": "다운로드할 누락된 LoRA가 없습니다",
"missingLorasInfoFailed": "누락된 LoRA 정보를 가져오는데 실패했습니다",
@@ -1410,9 +1657,20 @@
"processingError": "처리 오류: {message}",
"folderBrowserError": "폴더 브라우저 로딩 오류: {message}",
"recipeSaveFailed": "레시피 저장 실패: {error}",
"recipeSaved": "Recipe saved successfully",
"importFailed": "가져오기 실패: {message}",
"folderTreeFailed": "폴더 트리 로딩 실패",
"folderTreeError": "폴더 트리 로딩 오류"
"folderTreeError": "폴더 트리 로딩 오류",
"batchImportFailed": "Failed to start batch import: {message}",
"batchImportCancelling": "Cancelling batch import...",
"batchImportCancelFailed": "Failed to cancel batch import: {message}",
"batchImportNoUrls": "Please enter at least one URL or file path",
"batchImportNoDirectory": "Please enter a directory path",
"batchImportBrowseFailed": "Failed to browse directory: {message}",
"batchImportDirectorySelected": "Directory selected: {path}",
"noRecipesSelected": "선택한 레시피가 없습니다",
"noMissingLorasInSelection": "선택한 레시피에서 누락된 LoRA를 찾을 수 없습니다",
"noLoraRootConfigured": "LoRA 루트 디렉토리가 구성되지 않았습니다. 설정에서 기본 LoRA 루트를 설정하세요."
},
"models": {
"noModelsSelected": "선택된 모델이 없습니다",
@@ -1479,6 +1737,8 @@
"mappingSaveFailed": "베이스 모델 매핑 저장 실패: {message}",
"downloadTemplatesUpdated": "다운로드 경로 템플릿이 업데이트되었습니다",
"downloadTemplatesFailed": "다운로드 경로 템플릿 저장 실패: {message}",
"recipesPathUpdated": "레시피 저장 경로가 업데이트되었습니다",
"recipesPathSaveFailed": "레시피 저장 경로 업데이트 실패: {message}",
"settingsUpdated": "설정 업데이트됨: {setting}",
"compactModeToggled": "컴팩트 모드 {state}",
"settingSaveFailed": "설정 저장 실패: {message}",
@@ -1529,8 +1789,8 @@
},
"triggerWords": {
"loadFailed": "학습된 단어를 로딩할 수 없습니다",
"tooLong": "트리거 단어는 100단어를 초과할 수 없습니다",
"tooMany": "최대 30개의 트리거 단어만 허용됩니다",
"tooLong": "트리거 단어는 500단어를 초과할 수 없습니다",
"tooMany": "최대 100개의 트리거 단어만 허용됩니다",
"alreadyExists": "이 트리거 단어는 이미 존재합니다",
"updateSuccess": "트리거 단어가 성공적으로 업데이트되었습니다",
"updateFailed": "트리거 단어 업데이트에 실패했습니다",
@@ -1591,6 +1851,8 @@
"deleteFailed": "{type} 삭제 실패: {message}",
"excludeSuccess": "{type}이(가) 성공적으로 제외되었습니다",
"excludeFailed": "{type} 제외 실패: {message}",
"restoreSuccess": "{type} 복원 완료",
"restoreFailed": "{type} 복원 실패: {message}",
"fileNameUpdated": "파일명이 성공적으로 업데이트되었습니다",
"fileRenameFailed": "파일 이름 변경 실패: {error}",
"previewUpdated": "미리보기가 성공적으로 업데이트되었습니다",
@@ -1622,6 +1884,37 @@
"moveFailed": "Failed to move item: {message}"
}
},
"doctor": {
"kicker": "시스템 진단",
"title": "닥터",
"buttonTitle": "진단 및 일반적인 수정 실행",
"loading": "환경을 확인하는 중...",
"footer": "수리 후에도 문제가 계속되면 진단 번들을 내보내세요.",
"summary": {
"idle": "설정, 캐시 무결성, UI 일관성에 대한 상태 검사를 실행합니다.",
"ok": "현재 환경에서 활성 문제를 찾지 못했습니다.",
"warning": "{count}개의 문제가 발견되었습니다. 대부분은 이 패널에서 바로 해결할 수 있습니다.",
"error": "앱이 완전히 정상 상태가 되기 전에 {count}개의 문제를 처리해야 합니다."
},
"status": {
"ok": "정상",
"warning": "주의 필요",
"error": "조치 필요"
},
"actions": {
"runAgain": "다시 실행",
"exportBundle": "번들 내보내기"
},
"toast": {
"loadFailed": "진단 로드 실패: {message}",
"repairSuccess": "캐시 재구성이 완료되었습니다.",
"repairFailed": "캐시 재구성 실패: {message}",
"exportSuccess": "진단 번들이 내보내졌습니다.",
"exportFailed": "진단 번들 내보내기 실패: {message}",
"conflictsResolved": "{count}개 파일명 충돌이 해결되었습니다.",
"conflictsResolveFailed": "파일명 충돌 해결 실패: {message}"
}
},
"banners": {
"versionMismatch": {
"title": "애플리케이션 업데이트 감지",


@@ -1,8 +1,11 @@
{
"common": {
"cancel": "Отмена",
"confirm": "Подтвердить",
"actions": {
"save": "Сохранить",
"cancel": "Отмена",
"confirm": "Подтвердить",
"delete": "Удалить",
"move": "Переместить",
"refresh": "Обновить",
@@ -11,7 +14,9 @@
"backToTop": "Наверх",
"settings": "Настройки",
"help": "Справка",
"add": "Добавить"
"add": "Добавить",
"close": "Закрыть",
"menu": "Меню"
},
"status": {
"loading": "Загрузка...",
@@ -171,6 +176,9 @@
"success": "Успешно восстановлено {count} рецептов.",
"cancelled": "Восстановление отменено. {count} рецептов было восстановлено.",
"error": "Ошибка восстановления рецептов: {message}"
},
"manageExcludedModels": {
"label": "Управление исключёнными моделями"
}
},
"header": {
@@ -218,12 +226,14 @@
"presetOverwriteConfirm": "Пресет \"{name}\" уже существует. Перезаписать?",
"presetNamePlaceholder": "Имя пресета...",
"baseModel": "Базовая модель",
"baseModelSearchPlaceholder": "Поиск базовых моделей...",
"modelTags": "Теги (Топ 20)",
"modelTypes": "Model Types",
"modelTypes": "Типы моделей",
"license": "Лицензия",
"noCreditRequired": "Без указания авторства",
"allowSellingGeneratedContent": "Продажа разрешена",
"noTags": "Без тегов",
"noBaseModelMatches": "Нет базовых моделей, соответствующих текущему поиску.",
"clearAll": "Очистить все фильтры",
"any": "Любой",
"all": "Все",
@@ -246,6 +256,33 @@
"civitaiApiKey": "Ключ API Civitai",
"civitaiApiKeyPlaceholder": "Введите ваш ключ API Civitai",
"civitaiApiKeyHelp": "Используется для аутентификации при загрузке моделей с Civitai",
"civitaiHost": {
"label": "Хост Civitai",
"help": "Выберите, какой сайт Civitai будет открываться при использовании ссылок «View on Civitai».",
"options": {
"com": "civitai.com (только SFW)",
"red": "civitai.red (без ограничений)"
}
},
"downloadBackend": {
"label": "Бэкенд загрузки",
"help": "Выберите способ загрузки файлов моделей. Python использует встроенный загрузчик. aria2 использует экспериментальный внешний процесс загрузки.",
"options": {
"python": "Python (встроенный)",
"aria2": "aria2 (экспериментальный)"
}
},
"aria2cPath": {
"label": "Путь к aria2c",
"help": "Необязательный путь к исполняемому файлу aria2c. Оставьте пустым, чтобы использовать aria2c из системного PATH.",
"placeholder": "Оставьте пустым, чтобы использовать aria2c из PATH"
},
"aria2HelpLink": "Узнайте, как настроить сервер загрузки aria2",
"civitaiHostBanner": {
"title": "Доступна настройка хоста Civitai",
"content": "Теперь Civitai использует civitai.com для контента SFW и civitai.red для контента без ограничений. В настройках можно изменить, какой сайт открывать по умолчанию.",
"openSettings": "Открыть настройки"
},
"openSettingsFileLocation": {
"label": "Открыть папку настроек",
"tooltip": "Открыть папку, содержащую settings.json",
@@ -256,10 +293,13 @@
},
"sections": {
"contentFiltering": "Фильтрация контента",
"downloads": "Загрузки",
"videoSettings": "Настройки видео",
"layoutSettings": "Настройки макета",
"misc": "Разное",
"backup": "Резервные копии",
"folderSettings": "Корневые папки",
"recipeSettings": "Рецепты",
"extraFolderPaths": "Дополнительные пути к папкам",
"downloadPathTemplates": "Шаблоны путей загрузки",
"priorityTags": "Приоритетные теги",
@@ -287,7 +327,15 @@
"blurNsfwContent": "Размывать NSFW контент",
"blurNsfwContentHelp": "Размывать превью изображений контента для взрослых (NSFW)",
"showOnlySfw": "Показывать только SFW результаты",
"showOnlySfwHelp": "Фильтровать весь NSFW контент при просмотре и поиске"
"showOnlySfwHelp": "Фильтровать весь NSFW контент при просмотре и поиске",
"matureBlurThreshold": "Порог размытия взрослого контента",
"matureBlurThresholdHelp": "Установить, с какого уровня рейтинга начинается размытие при включенном размытии NSFW.",
"matureBlurThresholdOptions": {
"pg13": "PG13 и выше",
"r": "R и выше (по умолчанию)",
"x": "X и выше",
"xxx": "Только XXX"
}
},
"videoSettings": {
"autoplayOnHover": "Автовоспроизведение видео при наведении",
@@ -311,6 +359,54 @@
"saveFailed": "Не удалось сохранить пути для пропуска: {message}"
}
},
"backup": {
"autoEnabled": "Автоматические резервные копии",
"autoEnabledHelp": "Создаёт локальный снимок раз в день и хранит последние снимки согласно политике хранения.",
"retention": "Количество хранения",
"retentionHelp": "Сколько автоматических снимков сохранять перед удалением старых.",
"management": "Управление резервными копиями",
"managementHelp": "Экспортируйте текущее состояние пользователя или восстановите его из архива резервной копии.",
"scopeHelp": "Резервная копия включает ваши настройки, историю загрузок и состояние обновлений моделей. Файлы моделей и пересоздаваемые кэши не входят.",
"locationSummary": "Текущее расположение резервных копий",
"openFolderButton": "Открыть папку резервных копий",
"openFolderSuccess": "Папка резервных копий открыта",
"openFolderFailed": "Не удалось открыть папку резервных копий",
"locationCopied": "Путь к резервной копии скопирован в буфер обмена: {{path}}",
"locationClipboardFallback": "Путь к резервной копии: {{path}}",
"exportButton": "Экспортировать резервную копию",
"exportSuccess": "Резервная копия успешно экспортирована.",
"exportFailed": "Не удалось экспортировать резервную копию: {message}",
"importButton": "Импортировать резервную копию",
"importConfirm": "Импортировать эту резервную копию и перезаписать локальное состояние пользователя?",
"importSuccess": "Резервная копия успешно импортирована.",
"importFailed": "Не удалось импортировать резервную копию: {message}",
"latestSnapshot": "Последний снимок",
"latestAutoSnapshot": "Последний автоматический снимок",
"snapshotCount": "Сохранённые снимки",
"noneAvailable": "Снимков пока нет"
},
"downloadSkipBaseModels": {
"label": "Пропускать загрузки для базовых моделей",
"help": "Применяется ко всем сценариям загрузки. Здесь можно выбрать только поддерживаемые базовые модели.",
"searchPlaceholder": "Фильтровать базовые модели...",
"empty": "Нет базовых моделей, соответствующих текущему поиску.",
"summary": {
"none": "Ничего не выбрано",
"count": "Выбрано: {count}"
},
"actions": {
"edit": "Изменить",
"collapse": "Свернуть",
"clear": "Очистить"
},
"validation": {
"saveFailed": "Не удалось сохранить исключённые базовые модели: {message}"
}
},
"skipPreviouslyDownloadedModelVersions": {
"label": "Пропускать ранее загруженные версии моделей",
"help": "Если включено, LoRA Manager будет пропускать загрузку версии модели, если сервис истории загрузок записал, что эта конкретная версия уже загружена. Применяется ко всем потокам загрузки."
},
"layoutSettings": {
"displayDensity": "Плотность отображения",
"displayDensityOptions": {
@@ -333,6 +429,8 @@
"hover": "Показать при наведении"
},
"cardInfoDisplayHelp": "Выберите когда отображать информацию о модели и кнопки действий",
"showVersionOnCard": "Показывать версию на карточке",
"showVersionOnCardHelp": "Показать или скрыть название версии на карточках моделей",
"modelCardFooterAction": "Действие кнопки карточки модели",
"modelCardFooterActionOptions": {
"exampleImages": "Открыть примеры изображений",
@@ -359,8 +457,29 @@
"defaultUnetRootHelp": "Установить корневую папку Diffusion Model (UNET) по умолчанию для загрузок, импорта и перемещений",
"defaultEmbeddingRoot": "Корневая папка Embedding",
"defaultEmbeddingRootHelp": "Установить корневую папку embedding по умолчанию для загрузок, импорта и перемещений",
"recipesPath": "Путь хранения рецептов",
"recipesPathHelp": "Дополнительный пользовательский каталог для сохранённых рецептов. Оставьте пустым, чтобы использовать папку recipes в первом корне LoRA.",
"recipesPathPlaceholder": "/path/to/recipes",
"recipesPathMigrating": "Перенос хранилища рецептов...",
"noDefault": "Не задано"
},
"extraFolderPaths": {
"title": "Дополнительные пути к папкам",
"description": "Дополнительные корневые пути моделей, эксклюзивные для LoRA Manager. Загружайте модели из расположений за пределами стандартных папок ComfyUI — идеально подходит для больших библиотек, которые иначе замедлили бы ComfyUI.",
"restartRequired": "Requires restart to take effect",
"modelTypes": {
"lora": "Пути LoRA",
"checkpoint": "Пути Checkpoint",
"unet": "Пути моделей диффузии",
"embedding": "Пути Embedding"
},
"pathPlaceholder": "/путь/к/дополнительным/моделям",
"saveSuccess": "Дополнительные пути к папкам обновлены. Требуется перезапуск для применения изменений.",
"saveError": "Не удалось обновить дополнительные пути к папкам: {message}",
"validation": {
"duplicatePath": "Этот путь уже настроен"
}
},
"priorityTags": {
"title": "Приоритетные теги",
"description": "Настройте порядок приоритетов тегов для каждого типа моделей (например, character, concept, style(toon|toon_style)).",
@@ -423,6 +542,21 @@
"downloadLocationHelp": "Введите путь к папке, где будут сохраняться примеры изображений с Civitai",
"autoDownload": "Автозагрузка примеров изображений",
"autoDownloadHelp": "Автоматически загружать примеры изображений для моделей, у которых их нет (требует настройки места загрузки)",
"openMode": "Действие открытия примеров изображений",
"openModeHelp": "Выберите, будет ли действие открывать папку на сервере, копировать сопоставленный локальный путь или запускать пользовательский URI.",
"openModeOptions": {
"system": "Открыть на сервере",
"clipboard": "Скопировать локальный путь",
"uriTemplate": "Открыть пользовательский URI"
},
"localRoot": "Локальный корень примеров изображений",
"localRootHelp": "Необязательный локальный или смонтированный корневой путь, отражающий каталог примеров изображений на сервере. Если оставить пустым, будет использован путь сервера.",
"localRootPlaceholder": "Пример: /Volumes/ComfyUI/example_images",
"uriTemplate": "Шаблон URI для открытия",
"uriTemplateHelp": "Используйте пользовательскую deep link-ссылку, например file URI или ссылку Shortcuts.",
"uriTemplatePlaceholder": "Пример: shortcuts://run-shortcut?name=Open%20Finder&input=text&text={{encoded_local_path}}",
"uriTemplatePlaceholders": "Доступные плейсхолдеры: {{local_path}}, {{encoded_local_path}}, {{relative_path}}, {{encoded_relative_path}}, {{file_uri}}, {{encoded_file_uri}}",
"openModeWikiLink": "Подробнее об удаленных режимах открытия",
"optimizeImages": "Оптимизировать загруженные изображения",
"optimizeImagesHelp": "Оптимизировать примеры изображений для уменьшения размера файла и улучшения скорости загрузки (метаданные будут сохранены)",
"download": "Загрузить",
@@ -485,23 +619,6 @@
"proxyPassword": "Пароль (необязательно)",
"proxyPasswordPlaceholder": "пароль",
"proxyPasswordHelp": "Пароль для аутентификации на прокси (если требуется)"
},
"extraFolderPaths": {
"title": "Дополнительные пути к папкам",
"help": "Добавьте дополнительные папки моделей за пределами стандартных путей ComfyUI. Эти пути хранятся отдельно и сканируются вместе с папками по умолчанию.",
"description": "Настройте дополнительные папки для сканирования моделей. Эти пути специфичны для LoRA Manager и будут объединены с путями по умолчанию ComfyUI.",
"modelTypes": {
"lora": "Пути LoRA",
"checkpoint": "Пути Checkpoint",
"unet": "Пути моделей диффузии",
"embedding": "Пути Embedding"
},
"pathPlaceholder": "/путь/к/дополнительным/моделям",
"saveSuccess": "Дополнительные пути к папкам обновлены.",
"saveError": "Не удалось обновить дополнительные пути к папкам: {message}",
"validation": {
"duplicatePath": "Этот путь уже настроен"
}
}
},
"loras": {
@@ -570,7 +687,8 @@
"autoOrganize": "Автоматически организовать выбранные",
"skipMetadataRefresh": "Пропустить обновление метаданных для выбранных",
"resumeMetadataRefresh": "Возобновить обновление метаданных для выбранных",
"deleteAll": "Удалить все модели",
"deleteAll": "Удалить выбранные",
"downloadMissingLoras": "Скачать отсутствующие LoRAs",
"clear": "Очистить выбор",
"skipMetadataRefreshCount": "Пропустить({count} моделей)",
"resumeMetadataRefreshCount": "Возобновить({count} моделей)",
@@ -600,6 +718,7 @@
"moveToFolder": "Переместить в папку",
"repairMetadata": "Восстановить метаданные",
"excludeModel": "Исключить модель",
"restoreModel": "Восстановить модель",
"deleteModel": "Удалить модель",
"shareRecipe": "Поделиться рецептом",
"viewAllLoras": "Посмотреть все LoRAs",
@@ -641,6 +760,8 @@
"root": "Корень",
"browseFolders": "Обзор папок:",
"downloadAndSaveRecipe": "Скачать и сохранить рецепт",
"importRecipeOnly": "Импортировать только рецепт",
"importAndDownload": "Импорт и скачивание",
"downloadMissingLoras": "Скачать отсутствующие LoRAs",
"saveRecipe": "Сохранить рецепт",
"loraCountInfo": "({existing}/{total} в библиотеке)",
@@ -682,7 +803,11 @@
"lorasCountAsc": "Меньше всего"
},
"refresh": {
"title": "Обновить список рецептов"
"title": "Обновить список рецептов",
"quick": "Синхронизировать изменения",
"quickTooltip": "Синхронизировать изменения - быстрое обновление без перестроения кэша",
"full": "Перестроить кэш",
"fullTooltip": "Перестроить кэш - полное повторное сканирование всех файлов рецептов"
},
"filteredByLora": "Фильтр по LoRA",
"favorites": {
@@ -722,6 +847,64 @@
"failed": "Не удалось восстановить рецепт: {message}",
"missingId": "Не удалось восстановить рецепт: отсутствует ID рецепта"
}
},
"batchImport": {
"title": "Batch Import Recipes",
"action": "Batch Import",
"urlList": "URL List",
"directory": "Directory",
"urlDescription": "Enter image URLs or local file paths (one per line). Each will be imported as a recipe.",
"directoryDescription": "Enter a directory path to import all images from that folder.",
"urlsLabel": "Image URLs or Local Paths",
"urlsPlaceholder": "https://civitai.com/images/...\nhttps://civitai.com/images/...\nC:/path/to/image.png\n...",
"urlsHint": "Enter one URL or path per line",
"directoryPath": "Directory Path",
"directoryPlaceholder": "/path/to/images/folder",
"browse": "Browse",
"recursive": "Include subdirectories",
"tagsOptional": "Tags (optional, applied to all recipes)",
"tagsPlaceholder": "Enter tags separated by commas",
"tagsHint": "Tags will be added to all imported recipes",
"skipNoMetadata": "Skip images without metadata",
"skipNoMetadataHelp": "Images without LoRA metadata will be skipped automatically.",
"start": "Start Import",
"startImport": "Start Import",
"importing": "Importing...",
"progress": "Progress",
"total": "Total",
"success": "Success",
"failed": "Failed",
"skipped": "Skipped",
"current": "Current",
"currentItem": "Current",
"preparing": "Preparing...",
"cancel": "Cancel",
"cancelImport": "Cancel",
"cancelled": "Import cancelled",
"completed": "Import completed",
"completedWithErrors": "Completed with errors",
"completedSuccess": "Successfully imported {count} recipe(s)",
"successCount": "Successful",
"failedCount": "Failed",
"skippedCount": "Skipped",
"totalProcessed": "Total processed",
"viewDetails": "View Details",
"newImport": "New Import",
"manualPathEntry": "Please enter the directory path manually. File browser is not available in this browser.",
"batchImportDirectorySelected": "Directory selected: {path}",
"batchImportManualEntryRequired": "File browser not available. Please enter the directory path manually.",
"backToParent": "Back to parent directory",
"folders": "Folders",
"folderCount": "{count} folders",
"imageFiles": "Image Files",
"images": "images",
"imageCount": "{count} images",
"selectFolder": "Select This Folder",
"errors": {
"enterUrls": "Please enter at least one URL or path",
"enterDirectory": "Please enter a directory path",
"startFailed": "Failed to start import: {message}"
}
}
},
"checkpoints": {
@@ -731,7 +914,8 @@
"diffusion_model": "Diffusion Model"
},
"contextMenu": {
"moveToOtherTypeFolder": "Переместить в папку {otherType}"
"moveToOtherTypeFolder": "Переместить в папку {otherType}",
"sendToWorkflow": "Отправить в workflow"
}
},
"embeddings": {
@@ -744,13 +928,23 @@
"unpinSidebar": "Открепить боковую панель",
"switchToListView": "Переключить на вид списка",
"switchToTreeView": "Переключить на древовидный вид",
"recursiveOn": "Искать во вложенных папках",
"recursiveOff": "Искать только в текущей папке",
"recursiveOn": "Включать вложенные папки",
"recursiveOff": "Только текущая папка",
"recursiveUnavailable": "Рекурсивный поиск доступен только в режиме дерева",
"collapseAllDisabled": "Недоступно в виде списка",
"dragDrop": {
"unableToResolveRoot": "Не удалось определить путь назначения для перемещения.",
"moveUnsupported": "Move is not supported for this item."
"moveUnsupported": "Перемещение этого элемента не поддерживается.",
"createFolderHint": "Отпустите, чтобы создать новую папку",
"newFolderName": "Имя новой папки",
"folderNameHint": "Нажмите Enter для подтверждения, Escape для отмены",
"emptyFolderName": "Пожалуйста, введите имя папки",
"invalidFolderName": "Имя папки содержит недопустимые символы",
"noDragState": "Ожидающая операция перетаскивания не найдена"
},
"empty": {
"noFolders": "Папки не найдены",
"dragHint": "Перетащите элементы сюда, чтобы создать папки"
}
},
"statistics": {
@@ -815,6 +1009,8 @@
"earlyAccess": "Ранний доступ",
"earlyAccessTooltip": "Требуется ранний доступ",
"inLibrary": "В библиотеке",
"downloaded": "Загружено",
"downloadedTooltip": "Ранее загружено, но сейчас этого нет в вашей библиотеке.",
"alreadyInLibrary": "Уже в библиотеке",
"autoOrganizedPath": "[Автоматически организовано по шаблону пути]",
"errors": {
@@ -905,6 +1101,14 @@
"save": "Обновить базовую модель",
"cancel": "Отмена"
},
"bulkDownloadMissingLoras": {
"title": "Скачать отсутствующие LoRAs",
"message": "Найдено {uniqueCount} уникальных отсутствующих LoRAs (из {totalCount} всего в выбранных рецептах).",
"previewTitle": "LoRAs для скачивания:",
"moreItems": "...и еще {count}",
"note": "Файлы будут скачаны с использованием шаблонов путей по умолчанию. Это может занять некоторое время в зависимости от количества LoRAs.",
"downloadButton": "Скачать {count} LoRA(s)"
},
"exampleAccess": {
"title": "Локальные примеры изображений",
"message": "Локальные примеры изображений для этой модели не найдены. Варианты просмотра:",
@@ -956,7 +1160,9 @@
"viewOnCivitai": "Посмотреть на Civitai",
"viewOnCivitaiText": "Посмотреть на Civitai",
"viewCreatorProfile": "Посмотреть профиль создателя",
"openFileLocation": "Открыть расположение файла"
"openFileLocation": "Открыть расположение файла",
"sendToWorkflow": "Отправить в ComfyUI",
"sendToWorkflowText": "Отправить в ComfyUI"
},
"openFileLocation": {
"success": "Расположение файла успешно открыто",
@@ -964,6 +1170,9 @@
"copied": "Путь скопирован в буфер обмена: {{path}}",
"clipboardFallback": "Путь: {{path}}"
},
"sendToWorkflow": {
"noFilePath": "Невозможно отправить в ComfyUI: путь к файлу недоступен"
},
"metadata": {
"version": "Версия",
"fileName": "Имя файла",
@@ -1000,6 +1209,8 @@
"cancel": "Отменить редактирование",
"save": "Сохранить изменения",
"addPlaceholder": "Введите для добавления или нажмите на предложения ниже",
"editWord": "Редактировать триггерное слово",
"editPlaceholder": "Редактировать триггерное слово",
"copyWord": "Копировать триггерное слово",
"deleteWord": "Удалить триггерное слово",
"suggestions": {
@@ -1071,17 +1282,33 @@
"days": "через {count}д"
},
"badges": {
"current": "Текущая версия",
"current": "Открытая версия",
"currentTooltip": "Это версия, с которой было открыто это окно",
"inLibrary": "В библиотеке",
"inLibraryTooltip": "Эта версия есть в вашей локальной библиотеке",
"downloaded": "Загружено",
"downloadedTooltip": "Эта версия уже загружалась, но сейчас отсутствует в вашей библиотеке",
"newer": "Более новая версия",
"newerTooltip": "Эта версия новее вашей последней локальной версии",
"earlyAccess": "Ранний доступ",
"ignored": "Игнорируется"
"earlyAccessTooltip": "Для этой версии сейчас требуется ранний доступ Civitai",
"ignored": "Игнорируется",
"ignoredTooltip": "Уведомления об обновлениях для этой версии отключены",
"onSiteOnly": "Только на Сайте",
"onSiteOnlyTooltip": "Эта версия доступна только для генерации на сайте Civitai"
},
"actions": {
"download": "Скачать",
"downloadTooltip": "Скачать эту версию",
"downloadEarlyAccessTooltip": "Скачать эту версию раннего доступа с Civitai",
"downloadNotAllowedTooltip": "Эта версия доступна только для генерации на сайте Civitai",
"delete": "Удалить",
"deleteTooltip": "Удалить эту локальную версию",
"ignore": "Игнорировать",
"unignore": "Перестать игнорировать",
"ignoreTooltip": "Игнорировать уведомления об обновлениях для этой версии",
"unignoreTooltip": "Возобновить уведомления об обновлениях для этой версии",
"viewVersionOnCivitai": "Посмотреть версию на Civitai",
"earlyAccessTooltip": "Требуется покупка раннего доступа",
"resumeModelUpdates": "Возобновить обновления для этой модели",
"ignoreModelUpdates": "Игнорировать обновления для этой модели",
@@ -1221,7 +1448,9 @@
"recipeReplaced": "Рецепт заменён в workflow",
"recipeFailedToSend": "Не удалось отправить рецепт в workflow",
"noMatchingNodes": "В текущем workflow нет совместимых узлов",
"noTargetNodeSelected": "Целевой узел не выбран"
"noTargetNodeSelected": "Целевой узел не выбран",
"modelUpdated": "Модель обновлена в workflow",
"modelFailed": "Не удалось обновить узел модели"
},
"nodeSelector": {
"recipe": "Рецепт",
@@ -1235,6 +1464,10 @@
"opened": "Папка с примерами изображений открыта",
"openingFolder": "Открытие папки с примерами изображений",
"failedToOpen": "Не удалось открыть папку с примерами изображений",
"copiedPath": "Путь скопирован в буфер обмена: {{path}}",
"clipboardFallback": "Путь: {{path}}",
"copiedUri": "Ссылка скопирована в буфер обмена: {{uri}}",
"uriClipboardFallback": "Ссылка: {{uri}}",
"setupRequired": "Хранилище примеров изображений",
"setupDescription": "Чтобы добавить собственные примеры изображений, сначала нужно установить место загрузки.",
"setupUsage": "Этот путь используется как для загруженных, так и для пользовательских примеров изображений.",
@@ -1342,7 +1575,14 @@
"showWechatQR": "Показать QR-код WeChat",
"hideWechatQR": "Скрыть QR-код WeChat"
},
"footer": "Спасибо за использование LoRA Manager! ❤️"
"footer": "Спасибо за использование LoRA Manager! ❤️",
"supporters": {
"title": "Спасибо всем сторонникам",
"subtitle": "Спасибо {count} сторонникам, которые сделали этот проект возможным",
"specialThanks": "Особая благодарность",
"allSupporters": "Все сторонники",
"totalCount": "Всего {count} сторонников"
}
},
"toast": {
"general": {
@@ -1365,6 +1605,7 @@
"pleaseSelectVersion": "Пожалуйста, выберите версию",
"versionExists": "Эта версия уже существует в вашей библиотеке",
"downloadCompleted": "Загрузка успешно завершена",
"downloadSkippedByBaseModel": "Загрузка пропущена, потому что базовая модель {baseModel} исключена",
"autoOrganizeSuccess": "Автоматическая организация успешно завершена для {count} {type}",
"autoOrganizePartialSuccess": "Автоматическая организация завершена: перемещено {success}, не удалось {failures} из {total} моделей",
"autoOrganizeFailed": "Ошибка автоматической организации: {error}",
@@ -1376,13 +1617,19 @@
"loadFailed": "Не удалось загрузить {modelType}s: {message}",
"refreshComplete": "Обновление завершено",
"refreshFailed": "Не удалось обновить рецепты: {message}",
"syncComplete": "Синхронизация завершена",
"syncFailed": "Не удалось синхронизировать рецепты: {message}",
"updateFailed": "Не удалось обновить рецепт: {error}",
"updateError": "Ошибка обновления рецепта: {message}",
"nameSaved": "Рецепт \"{name}\" успешно сохранен",
"nameUpdated": "Название рецепта успешно обновлено",
"tagsUpdated": "Теги рецепта успешно обновлены",
"sourceUrlUpdated": "Исходный URL успешно обновлен",
"promptUpdated": "Промпт успешно обновлён",
"negativePromptUpdated": "Негативный промпт успешно обновлён",
"promptEditorHint": "Нажмите Enter для сохранения, Shift+Enter для новой строки",
"noRecipeId": "ID рецепта недоступен",
"sendToWorkflowFailed": "Не удалось отправить рецепт в рабочий процесс: {message}",
"copyFailed": "Ошибка копирования синтаксиса рецепта: {message}",
"noMissingLoras": "Нет отсутствующих LoRAs для загрузки",
"missingLorasInfoFailed": "Не удалось получить информацию для отсутствующих LoRAs",
@@ -1410,9 +1657,20 @@
"processingError": "Ошибка обработки: {message}",
"folderBrowserError": "Ошибка загрузки браузера папок: {message}",
"recipeSaveFailed": "Не удалось сохранить рецепт: {error}",
"recipeSaved": "Recipe saved successfully",
"importFailed": "Импорт не удался: {message}",
"folderTreeFailed": "Не удалось загрузить дерево папок",
"folderTreeError": "Ошибка загрузки дерева папок"
"folderTreeError": "Ошибка загрузки дерева папок",
"batchImportFailed": "Failed to start batch import: {message}",
"batchImportCancelling": "Cancelling batch import...",
"batchImportCancelFailed": "Failed to cancel batch import: {message}",
"batchImportNoUrls": "Please enter at least one URL or file path",
"batchImportNoDirectory": "Please enter a directory path",
"batchImportBrowseFailed": "Failed to browse directory: {message}",
"batchImportDirectorySelected": "Directory selected: {path}",
"noRecipesSelected": "Рецепты не выбраны",
"noMissingLorasInSelection": "В выбранных рецептах не найдены отсутствующие LoRAs",
"noLoraRootConfigured": "Корневой каталог LoRA не настроен. Пожалуйста, установите корневой каталог LoRA по умолчанию в настройках."
},
"models": {
"noModelsSelected": "Модели не выбраны",
@@ -1479,6 +1737,8 @@
"mappingSaveFailed": "Не удалось сохранить сопоставления базовых моделей: {message}",
"downloadTemplatesUpdated": "Шаблоны путей загрузки обновлены",
"downloadTemplatesFailed": "Не удалось сохранить шаблоны путей загрузки: {message}",
"recipesPathUpdated": "Путь хранения рецептов обновлён",
"recipesPathSaveFailed": "Не удалось обновить путь хранения рецептов: {message}",
"settingsUpdated": "Настройки обновлены: {setting}",
"compactModeToggled": "Компактный режим {state}",
"settingSaveFailed": "Не удалось сохранить настройку: {message}",
@@ -1529,8 +1789,8 @@
},
"triggerWords": {
"loadFailed": "Не удалось загрузить обученные слова",
"tooLong": "Триггерное слово не должно превышать 100 слов",
"tooMany": "Максимум 30 триггерных слов разрешено",
"tooLong": "Триггерное слово не должно превышать 500 слов",
"tooMany": "Максимум 100 триггерных слов разрешено",
"alreadyExists": "Это триггерное слово уже существует",
"updateSuccess": "Триггерные слова успешно обновлены",
"updateFailed": "Не удалось обновить триггерные слова",
@@ -1591,6 +1851,8 @@
"deleteFailed": "Не удалось удалить {type}: {message}",
"excludeSuccess": "{type} успешно исключен",
"excludeFailed": "Не удалось исключить {type}: {message}",
"restoreSuccess": "{type} успешно восстановлен",
"restoreFailed": "Не удалось восстановить {type}: {message}",
"fileNameUpdated": "Имя файла успешно обновлено",
"fileRenameFailed": "Не удалось переименовать файл: {error}",
"previewUpdated": "Превью успешно обновлено",
@@ -1622,6 +1884,37 @@
"moveFailed": "Failed to move item: {message}"
}
},
"doctor": {
"kicker": "Системная диагностика",
"title": "Доктор",
"buttonTitle": "Запустить диагностику и обычные исправления",
"loading": "Проверка окружения...",
"footer": "Экспортируйте диагностический пакет, если проблема сохраняется после исправления.",
"summary": {
"idle": "Выполнить проверку настроек, целостности кэша и согласованности интерфейса.",
"ok": "В текущем окружении активных проблем не обнаружено.",
"warning": "Обнаружено {count} проблем(ы). Большинство можно исправить прямо из этой панели.",
"error": "Перед тем как приложение станет полностью исправным, нужно устранить {count} проблем(ы)."
},
"status": {
"ok": "Исправно",
"warning": "Требует внимания",
"error": "Требуется действие"
},
"actions": {
"runAgain": "Запустить снова",
"exportBundle": "Экспортировать пакет"
},
"toast": {
"loadFailed": "Не удалось загрузить диагностику: {message}",
"repairSuccess": "Перестройка кэша завершена.",
"repairFailed": "Не удалось перестроить кэш: {message}",
"exportSuccess": "Диагностический пакет экспортирован.",
"exportFailed": "Не удалось экспортировать диагностический пакет: {message}",
"conflictsResolved": "Разрешено конфликтов имён файлов: {count}.",
"conflictsResolveFailed": "Не удалось разрешить конфликты имён файлов: {message}"
}
},
"banners": {
"versionMismatch": {
"title": "Обнаружено обновление приложения",


@@ -1,8 +1,11 @@
{
"common": {
"cancel": "取消",
"confirm": "确认",
"actions": {
"save": "保存",
"cancel": "取消",
"confirm": "确认",
"delete": "删除",
"move": "移动",
"refresh": "刷新",
@@ -11,7 +14,9 @@
"backToTop": "返回顶部",
"settings": "设置",
"help": "帮助",
"add": "添加"
"add": "添加",
"close": "关闭",
"menu": "菜单"
},
"status": {
"loading": "加载中...",
@@ -159,11 +164,11 @@
"error": "清理示例图片文件夹失败:{message}"
},
"fetchMissingLicenses": {
"label": "Refresh license metadata",
"loading": "Refreshing license metadata for {typePlural}...",
"success": "Updated license metadata for {count} {typePlural}",
"none": "All {typePlural} already have license metadata",
"error": "Failed to refresh license metadata for {typePlural}: {message}"
"label": "刷新许可证元数据",
"loading": "正在刷新 {typePlural} 的许可证元数据...",
"success": "已更新 {count} {typePlural} 的许可证元数据",
"none": "所有 {typePlural} 都已具备许可证元数据",
"error": "刷新 {typePlural} 的许可证元数据失败:{message}"
},
"repairRecipes": {
"label": "修复配方数据",
@@ -171,6 +176,9 @@
"success": "成功修复了 {count} 个配方。",
"cancelled": "修复已取消。已修复 {count} 个配方。",
"error": "配方修复失败:{message}"
},
"manageExcludedModels": {
"label": "管理已排除的模型"
}
},
"header": {
@@ -218,12 +226,14 @@
"presetOverwriteConfirm": "预设 \"{name}\" 已存在。是否覆盖?",
"presetNamePlaceholder": "预设名称...",
"baseModel": "基础模型",
"baseModelSearchPlaceholder": "搜索基础模型...",
"modelTags": "标签前20",
"modelTypes": "Model Types",
"modelTypes": "模型类型",
"license": "许可证",
"noCreditRequired": "无需署名",
"allowSellingGeneratedContent": "允许销售",
"noTags": "无标签",
"noBaseModelMatches": "没有基础模型符合当前搜索。",
"clearAll": "清除所有筛选",
"any": "任一",
"all": "全部",
@@ -246,6 +256,33 @@
"civitaiApiKey": "Civitai API 密钥",
"civitaiApiKeyPlaceholder": "请输入你的 Civitai API 密钥",
"civitaiApiKeyHelp": "用于从 Civitai 下载模型时的身份验证",
"civitaiHost": {
"label": "Civitai 站点",
"help": "选择使用“在 Civitai 中查看”时默认打开的 Civitai 站点。",
"options": {
"com": "civitai.com仅 SFW",
"red": "civitai.red无限制"
}
},
"downloadBackend": {
"label": "下载后端",
"help": "选择模型文件的下载方式。Python 使用内置下载器。aria2 使用实验性的外部下载进程。",
"options": {
"python": "Python内置",
"aria2": "aria2实验性"
}
},
"aria2cPath": {
"label": "aria2c 路径",
"help": "可选的 aria2c 可执行文件路径。留空则使用系统 PATH 中的 aria2c。",
"placeholder": "留空则使用 PATH 中的 aria2c"
},
"aria2HelpLink": "了解如何配置 aria2 下载后端",
"civitaiHostBanner": {
"title": "已提供 Civitai 站点偏好设置",
"content": "Civitai 现在使用 civitai.com 提供 SFW 内容,使用 civitai.red 提供无限制内容。你可以在设置中更改默认打开的站点。",
"openSettings": "打开设置"
},
"openSettingsFileLocation": {
"label": "打开设置文件夹",
"tooltip": "打开包含 settings.json 的文件夹",
@@ -256,10 +293,13 @@
},
"sections": {
"contentFiltering": "内容过滤",
"downloads": "下载",
"videoSettings": "视频设置",
"layoutSettings": "布局设置",
"misc": "其他",
"backup": "备份",
"folderSettings": "默认根目录",
"recipeSettings": "配方",
"extraFolderPaths": "额外文件夹路径",
"downloadPathTemplates": "下载路径模板",
"priorityTags": "优先标签",
@@ -287,7 +327,15 @@
"blurNsfwContent": "模糊 NSFW 内容",
"blurNsfwContentHelp": "模糊成熟NSFW内容预览图片",
"showOnlySfw": "仅显示 SFW 结果",
"showOnlySfwHelp": "浏览和搜索时过滤所有 NSFW 内容"
"showOnlySfwHelp": "浏览和搜索时过滤所有 NSFW 内容",
"matureBlurThreshold": "成人内容模糊阈值",
"matureBlurThresholdHelp": "设置当启用 NSFW 模糊时,从哪个评级级别开始模糊过滤。",
"matureBlurThresholdOptions": {
"pg13": "PG13 及以上",
"r": "R 及以上(默认)",
"x": "X 及以上",
"xxx": "仅 XXX"
}
},
"videoSettings": {
"autoplayOnHover": "悬停时自动播放视频",
@@ -311,6 +359,54 @@
"saveFailed": "无法保存跳过路径:{message}"
}
},
"backup": {
"autoEnabled": "自动备份",
"autoEnabledHelp": "每天创建一次本地快照,并按保留策略保留最新快照。",
"retention": "保留数量",
"retentionHelp": "在删除旧快照之前,要保留多少个自动快照。",
"management": "备份管理",
"managementHelp": "导出当前用户状态,或从备份归档中恢复。",
"scopeHelp": "备份你的设置、下载历史和模型更新状态。不包含模型文件或可重建的缓存。",
"locationSummary": "当前备份位置",
"openFolderButton": "打开备份文件夹",
"openFolderSuccess": "已打开备份文件夹",
"openFolderFailed": "无法打开备份文件夹",
"locationCopied": "备份路径已复制到剪贴板:{{path}}",
"locationClipboardFallback": "备份路径:{{path}}",
"exportButton": "导出备份",
"exportSuccess": "备份导出成功。",
"exportFailed": "备份导出失败:{message}",
"importButton": "导入备份",
"importConfirm": "导入此备份并覆盖本地用户状态吗?",
"importSuccess": "备份导入成功。",
"importFailed": "备份导入失败:{message}",
"latestSnapshot": "最近快照",
"latestAutoSnapshot": "最近自动快照",
"snapshotCount": "已保存快照",
"noneAvailable": "还没有快照"
},
"downloadSkipBaseModels": {
"label": "跳过这些基础模型的下载",
"help": "适用于所有下载流程。这里只能选择受支持的基础模型。",
"searchPlaceholder": "筛选基础模型...",
"empty": "没有与当前搜索匹配的基础模型。",
"summary": {
"none": "未选择",
"count": "已选择 {count} 项"
},
"actions": {
"edit": "编辑",
"collapse": "收起",
"clear": "清空"
},
"validation": {
"saveFailed": "无法保存已排除的基础模型:{message}"
}
},
"skipPreviouslyDownloadedModelVersions": {
"label": "跳过已下载的模型版本",
"help": "启用后如果下载历史服务记录显示该版本已下载LoRA Manager 将跳过下载该模型版本。适用于所有下载流程。"
},
"layoutSettings": {
"displayDensity": "显示密度",
"displayDensityOptions": {
@@ -333,6 +429,8 @@
"hover": "悬停时显示"
},
"cardInfoDisplayHelp": "选择何时显示模型信息和操作按钮",
"showVersionOnCard": "在卡片上显示版本",
"showVersionOnCardHelp": "在模型卡片上显示或隐藏版本名称",
"modelCardFooterAction": "模型卡片按钮操作",
"modelCardFooterActionOptions": {
"exampleImages": "打开示例图片",
@@ -359,8 +457,29 @@
"defaultUnetRootHelp": "设置下载、导入和移动时的默认 Diffusion Model (UNET) 根目录",
"defaultEmbeddingRoot": "Embedding 根目录",
"defaultEmbeddingRootHelp": "设置下载、导入和移动时的默认 Embedding 根目录",
"recipesPath": "配方存储路径",
"recipesPathHelp": "已保存配方的可选自定义目录。留空则使用第一个 LoRA 根目录下的 recipes 文件夹。",
"recipesPathPlaceholder": "/path/to/recipes",
"recipesPathMigrating": "正在迁移配方存储...",
"noDefault": "无默认"
},
"extraFolderPaths": {
"title": "额外文件夹路径",
"description": "LoRA Manager 专属的额外模型根目录。从 ComfyUI 标准文件夹之外的位置加载模型,特别适合管理大型模型库,避免影响 ComfyUI 性能。",
"restartRequired": "需要重启才能生效",
"modelTypes": {
"lora": "LoRA 路径",
"checkpoint": "Checkpoint 路径",
"unet": "Diffusion 模型路径",
"embedding": "Embedding 路径"
},
"pathPlaceholder": "/额外/模型/路径",
"saveSuccess": "额外文件夹路径已更新,需要重启才能生效。",
"saveError": "更新额外文件夹路径失败:{message}",
"validation": {
"duplicatePath": "此路径已配置"
}
},
"priorityTags": {
"title": "优先标签",
"description": "为每种模型类型自定义标签优先级顺序 (例如: character, concept, style(toon|toon_style))",
@@ -423,6 +542,21 @@
"downloadLocationHelp": "输入保存从 Civitai 下载的示例图片的文件夹路径",
"autoDownload": "自动下载示例图片",
"autoDownloadHelp": "自动为没有示例图片的模型下载示例图片(需设置下载位置)",
"openMode": "打开示例图片操作",
"openModeHelp": "选择是在服务器上打开、复制映射后的本地路径,还是启动自定义 URI。",
"openModeOptions": {
"system": "在服务器上打开",
"clipboard": "复制本地路径",
"uriTemplate": "打开自定义 URI"
},
"localRoot": "本地示例图片根目录",
"localRootHelp": "可选的本地或挂载根目录,用于映射服务器上的示例图片目录。若留空,则复用服务器路径。",
"localRootPlaceholder": "例如:/Volumes/ComfyUI/example_images",
"uriTemplate": "打开 URI 模板",
"uriTemplateHelp": "使用自定义深链接,例如文件 URI 或 Shortcuts 链接。",
"uriTemplatePlaceholder": "例如shortcuts://run-shortcut?name=Open%20Finder&input=text&text={{encoded_local_path}}",
"uriTemplatePlaceholders": "可用占位符:{{local_path}}、{{encoded_local_path}}、{{relative_path}}、{{encoded_relative_path}}、{{file_uri}}、{{encoded_file_uri}}",
"openModeWikiLink": "了解远程打开模式",
"optimizeImages": "优化下载图片",
"optimizeImagesHelp": "优化示例图片以减少文件大小并提升加载速度(保留元数据)",
"download": "下载",
@@ -485,23 +619,6 @@
"proxyPassword": "密码 (可选)",
"proxyPasswordPlaceholder": "密码",
"proxyPasswordHelp": "代理认证的密码 (如果需要)"
},
"extraFolderPaths": {
"title": "额外文件夹路径",
"help": "在 ComfyUI 的标准路径之外添加额外的模型文件夹。这些路径单独存储,并与默认文件夹一起扫描。",
"description": "配置额外的文件夹以扫描模型。这些路径是 LoRA Manager 特有的,将与 ComfyUI 的默认路径合并。",
"modelTypes": {
"lora": "LoRA 路径",
"checkpoint": "Checkpoint 路径",
"unet": "Diffusion 模型路径",
"embedding": "Embedding 路径"
},
"pathPlaceholder": "/额外/模型/路径",
"saveSuccess": "额外文件夹路径已更新。",
"saveError": "更新额外文件夹路径失败:{message}",
"validation": {
"duplicatePath": "此路径已配置"
}
}
},
"loras": {
@@ -570,7 +687,8 @@
"autoOrganize": "自动整理所选模型",
"skipMetadataRefresh": "跳过所选模型的元数据刷新",
"resumeMetadataRefresh": "恢复所选模型的元数据刷新",
"deleteAll": "删除选中模型",
"deleteAll": "删除选",
"downloadMissingLoras": "下载缺失的 LoRAs",
"clear": "清除选择",
"skipMetadataRefreshCount": "跳过({count} 个模型)",
"resumeMetadataRefreshCount": "恢复({count} 个模型)",
@@ -600,6 +718,7 @@
"moveToFolder": "移动到文件夹",
"repairMetadata": "修复元数据",
"excludeModel": "排除模型",
"restoreModel": "恢复模型",
"deleteModel": "删除模型",
"shareRecipe": "分享配方",
"viewAllLoras": "查看所有 LoRA",
@@ -618,9 +737,9 @@
"title": "从图片或 URL 导入配方",
"urlLocalPath": "URL / 本地路径",
"uploadImage": "上传图片",
"urlSectionDescription": "输入 Civitai 图片 URL 或本地文件路径以导入为配方。",
"urlSectionDescription": "输入来自 civitai.com 或 civitai.red 的 Civitai 图片 URL或本地文件路径以导入为配方。",
"imageUrlOrPath": "图片 URL 或文件路径:",
"urlPlaceholder": "https://civitai.com/images/... 或 C:/path/to/image.png",
"urlPlaceholder": "https://civitai.com/images/... 或 https://civitai.red/images/... 或 C:/path/to/image.png",
"fetchImage": "获取图片",
"uploadSectionDescription": "上传带有 LoRA 元数据的图片以导入为配方。",
"selectImage": "选择图片",
@@ -641,6 +760,8 @@
"root": "根目录",
"browseFolders": "浏览文件夹:",
"downloadAndSaveRecipe": "下载并保存配方",
"importRecipeOnly": "仅导入配方",
"importAndDownload": "导入并下载",
"downloadMissingLoras": "下载缺失的 LoRA",
"saveRecipe": "保存配方",
"loraCountInfo": "({existing}/{total} in library)",
@@ -682,7 +803,11 @@
"lorasCountAsc": "最少"
},
"refresh": {
"title": "刷新配方列表"
"title": "刷新配方列表",
"quick": "同步变更",
"quickTooltip": "同步变更 - 快速刷新而不重建缓存",
"full": "重建缓存",
"fullTooltip": "重建缓存 - 重新扫描所有配方文件"
},
"filteredByLora": "按 LoRA 筛选",
"favorites": {
@@ -722,6 +847,64 @@
"failed": "修复配方失败:{message}",
"missingId": "无法修复配方:缺少配方 ID"
}
},
"batchImport": {
"title": "批量导入配方",
"action": "批量导入",
"urlList": "URL 列表",
"directory": "目录",
"urlDescription": "输入图像 URL 或本地文件路径(每行一个)。每个都将作为配方导入。",
"directoryDescription": "输入目录路径以导入该文件夹中的所有图片。",
"urlsLabel": "图片 URL 或本地路径",
"urlsPlaceholder": "https://civitai.com/images/...\nhttps://civitai.com/images/...\nC:/path/to/image.png\n...",
"urlsHint": "每行输入一个 URL 或路径",
"directoryPath": "目录路径",
"directoryPlaceholder": "/图片/文件夹/路径",
"browse": "浏览",
"recursive": "包含子目录",
"tagsOptional": "标签(可选,应用于所有配方)",
"tagsPlaceholder": "输入以逗号分隔的标签",
"tagsHint": "标签将被添加到所有导入的配方中",
"skipNoMetadata": "跳过无元数据的图片",
"skipNoMetadataHelp": "没有 LoRA 元数据的图片将自动跳过。",
"start": "开始导入",
"startImport": "开始导入",
"importing": "正在导入配方...",
"progress": "进度",
"total": "总计",
"success": "成功",
"failed": "失败",
"skipped": "跳过",
"current": "当前",
"currentItem": "当前",
"preparing": "准备中...",
"cancel": "取消",
"cancelImport": "取消",
"cancelled": "批量导入已取消",
"completed": "导入完成",
"completedWithErrors": "导入完成但有错误",
"completedSuccess": "成功导入 {count} 个配方",
"successCount": "成功",
"failedCount": "失败",
"skippedCount": "跳过",
"totalProcessed": "总计处理",
"viewDetails": "查看详情",
"newImport": "新建导入",
"manualPathEntry": "请手动输入目录路径。此浏览器中文件浏览器不可用。",
"batchImportDirectorySelected": "已选择目录:{path}",
"batchImportManualEntryRequired": "文件浏览器不可用。请手动输入目录路径。",
"backToParent": "返回上级目录",
"folders": "文件夹",
"folderCount": "{count} 个文件夹",
"imageFiles": "图像文件",
"images": "图像",
"imageCount": "{count} 个图像",
"selectFolder": "选择此文件夹",
"errors": {
"enterUrls": "请至少输入一个 URL 或路径",
"enterDirectory": "请输入目录路径",
"startFailed": "启动导入失败:{message}"
}
}
},
"checkpoints": {
@@ -731,7 +914,8 @@
"diffusion_model": "Diffusion Model"
},
"contextMenu": {
"moveToOtherTypeFolder": "移动到 {otherType} 文件夹"
"moveToOtherTypeFolder": "移动到 {otherType} 文件夹",
"sendToWorkflow": "发送到工作流"
}
},
"embeddings": {
@@ -744,13 +928,23 @@
"unpinSidebar": "取消固定侧边栏",
"switchToListView": "切换到列表视图",
"switchToTreeView": "切换到树状视图",
"recursiveOn": "搜索子文件夹",
"recursiveOff": "仅搜索当前文件夹",
"recursiveOn": "包含子文件夹",
"recursiveOff": "仅当前文件夹",
"recursiveUnavailable": "仅在树形视图中可使用递归搜索",
"collapseAllDisabled": "列表视图下不可用",
"dragDrop": {
"unableToResolveRoot": "无法确定移动的目标路径。",
"moveUnsupported": "Move is not supported for this item."
"moveUnsupported": "Move is not supported for this item.",
"createFolderHint": "释放以创建新文件夹",
"newFolderName": "新文件夹名称",
"folderNameHint": "按 Enter 确认Escape 取消",
"emptyFolderName": "请输入文件夹名称",
"invalidFolderName": "文件夹名称包含无效字符",
"noDragState": "未找到待处理的拖放操作"
},
"empty": {
"noFolders": "未找到文件夹",
"dragHint": "拖拽项目到此处以创建文件夹"
}
},
"statistics": {
@@ -815,6 +1009,8 @@
"earlyAccess": "早期访问",
"earlyAccessTooltip": "需要早期访问权限",
"inLibrary": "已在库中",
"downloaded": "已下载",
"downloadedTooltip": "之前已下载,但当前不在你的库中。",
"alreadyInLibrary": "已存在于库中",
"autoOrganizedPath": "【已按路径模板自动整理】",
"errors": {
@@ -905,6 +1101,14 @@
"save": "更新基础模型",
"cancel": "取消"
},
"bulkDownloadMissingLoras": {
"title": "下载缺失的 LoRAs",
"message": "发现 {uniqueCount} 个独特的缺失 LoRAs从选定配方中的 {totalCount} 个总数)。",
"previewTitle": "要下载的 LoRAs",
"moreItems": "...还有 {count} 个",
"note": "文件将使用默认路径模板下载。根据 LoRAs 的数量,这可能需要一些时间。",
"downloadButton": "下载 {count} 个 LoRA(s)"
},
"exampleAccess": {
"title": "本地示例图片",
"message": "未找到此模型的本地示例图片。可选操作:",
@@ -938,9 +1142,9 @@
},
"proceedText": "仅在你确定需要此操作时继续。",
"urlLabel": "Civitai 模型 URL",
"urlPlaceholder": "https://civitai.com/models/649516/model-name?modelVersionId=726676",
"urlPlaceholder": "https://civitai.com/models/649516/model-name?modelVersionId=726676 或 https://civitai.red/models/649516/model-name?modelVersionId=726676",
"helpText": {
"title": "粘贴任意 Civitai 模型 URL。支持格式",
"title": "粘贴任意来自 civitai.com 或 civitai.red 的 Civitai 模型 URL。支持格式",
"format1": "https://civitai.com/models/649516",
"format2": "https://civitai.com/models/649516?modelVersionId=726676",
"format3": "https://civitai.com/models/649516/model-name?modelVersionId=726676",
@@ -956,7 +1160,9 @@
"viewOnCivitai": "在 Civitai 查看",
"viewOnCivitaiText": "在 Civitai 查看",
"viewCreatorProfile": "查看创作者主页",
"openFileLocation": "打开文件位置"
"openFileLocation": "打开文件位置",
"sendToWorkflow": "发送到 ComfyUI",
"sendToWorkflowText": "发送到 ComfyUI"
},
"openFileLocation": {
"success": "文件位置已成功打开",
@@ -964,6 +1170,9 @@
"copied": "路径已复制到剪贴板:{{path}}",
"clipboardFallback": "路径:{{path}}"
},
"sendToWorkflow": {
"noFilePath": "无法发送到 ComfyUI没有可用的文件路径"
},
"metadata": {
"version": "版本",
"fileName": "文件名",
@@ -1000,6 +1209,8 @@
"cancel": "取消编辑",
"save": "保存更改",
"addPlaceholder": "输入或点击下方建议添加",
"editWord": "编辑触发词",
"editPlaceholder": "编辑触发词",
"copyWord": "复制触发词",
"deleteWord": "删除触发词",
"suggestions": {
@@ -1071,17 +1282,33 @@
"days": "{count}天后"
},
"badges": {
"current": "当前版本",
"current": "已打开版本",
"currentTooltip": "这是你用来打开此弹窗的版本",
"inLibrary": "已在库中",
"inLibraryTooltip": "此版本已存在于你的本地库中",
"downloaded": "已下载",
"downloadedTooltip": "此版本之前下载过,但当前不在你的本地库中",
"newer": "较新的版本",
"newerTooltip": "此版本比你本地的最新版本更新",
"earlyAccess": "抢先体验",
"ignored": "已忽略"
"earlyAccessTooltip": "此版本当前需要 Civitai 抢先体验权限",
"ignored": "已忽略",
"ignoredTooltip": "此版本已关闭更新通知",
"onSiteOnly": "仅站内生成",
"onSiteOnlyTooltip": "此版本仅在 Civitai 站内可用,无法下载"
},
"actions": {
"download": "下载",
"downloadTooltip": "下载此版本",
"downloadEarlyAccessTooltip": "从 Civitai 下载此抢先体验版本",
"downloadNotAllowedTooltip": "此版本仅在 Civitai 站内可用,无法下载",
"delete": "删除",
"deleteTooltip": "删除此本地版本",
"ignore": "忽略",
"unignore": "取消忽略",
"ignoreTooltip": "忽略此版本的更新通知",
"unignoreTooltip": "恢复此版本的更新通知",
"viewVersionOnCivitai": "在 Civitai 上查看版本",
"earlyAccessTooltip": "需要购买抢先体验",
"resumeModelUpdates": "继续跟踪该模型的更新",
"ignoreModelUpdates": "忽略该模型的更新",
@@ -1221,7 +1448,9 @@
"recipeReplaced": "配方已替换到工作流",
"recipeFailedToSend": "发送配方到工作流失败",
"noMatchingNodes": "当前工作流中没有兼容的节点",
"noTargetNodeSelected": "未选择目标节点"
"noTargetNodeSelected": "未选择目标节点",
"modelUpdated": "模型已更新到工作流",
"modelFailed": "更新模型节点失败"
},
"nodeSelector": {
"recipe": "配方",
@@ -1235,6 +1464,10 @@
"opened": "示例图片文件夹已打开",
"openingFolder": "正在打开示例图片文件夹",
"failedToOpen": "打开示例图片文件夹失败",
"copiedPath": "路径已复制到剪贴板:{{path}}",
"clipboardFallback": "路径:{{path}}",
"copiedUri": "链接已复制到剪贴板:{{uri}}",
"uriClipboardFallback": "链接:{{uri}}",
"setupRequired": "示例图片存储",
"setupDescription": "要添加自定义示例图片,您需要先设置下载位置。",
"setupUsage": "此路径用于存储下载的示例图片和自定义图片。",
@@ -1342,7 +1575,14 @@
"showWechatQR": "显示微信二维码",
"hideWechatQR": "隐藏微信二维码"
},
"footer": "感谢使用 LoRA 管理器!❤️"
"footer": "感谢使用 LoRA 管理器!❤️",
"supporters": {
"title": "感谢所有支持者",
"subtitle": "感谢 {count} 位支持者让这个项目成为可能",
"specialThanks": "特别感谢",
"allSupporters": "所有支持者",
"totalCount": "共 {count} 位支持者"
}
},
"toast": {
"general": {
@@ -1365,6 +1605,7 @@
"pleaseSelectVersion": "请选择版本",
"versionExists": "该版本已存在于你的库中",
"downloadCompleted": "下载成功完成",
"downloadSkippedByBaseModel": "由于基础模型 {baseModel} 已被排除,已跳过下载",
"autoOrganizeSuccess": "自动整理已成功完成,共 {count} 个 {type}",
"autoOrganizePartialSuccess": "自动整理完成:已移动 {success} 个,{failures} 个失败,共 {total} 个模型",
"autoOrganizeFailed": "自动整理失败:{error}",
@@ -1376,13 +1617,19 @@
"loadFailed": "加载 {modelType} 失败:{message}",
"refreshComplete": "刷新完成",
"refreshFailed": "刷新配方失败:{message}",
"syncComplete": "同步完成",
"syncFailed": "同步配方失败:{message}",
"updateFailed": "更新配方失败:{error}",
"updateError": "更新配方出错:{message}",
"nameSaved": "配方“{name}”保存成功",
"nameUpdated": "配方名称更新成功",
"tagsUpdated": "配方标签更新成功",
"sourceUrlUpdated": "来源 URL 更新成功",
"promptUpdated": "提示词更新成功",
"negativePromptUpdated": "负面提示词更新成功",
"promptEditorHint": "按 Enter 保存Shift+Enter 换行",
"noRecipeId": "无配方 ID",
"sendToWorkflowFailed": "发送配方到工作流失败:{message}",
"copyFailed": "复制配方语法出错:{message}",
"noMissingLoras": "没有缺失的 LoRA 可下载",
"missingLorasInfoFailed": "获取缺失 LoRA 信息失败",
@@ -1410,9 +1657,20 @@
"processingError": "处理出错:{message}",
"folderBrowserError": "加载文件夹浏览器出错:{message}",
"recipeSaveFailed": "保存配方失败:{error}",
"recipeSaved": "配方保存成功",
"importFailed": "导入失败:{message}",
"folderTreeFailed": "加载文件夹树失败",
"folderTreeError": "加载文件夹树出错"
"folderTreeError": "加载文件夹树出错",
"batchImportFailed": "启动批量导入失败:{message}",
"batchImportCancelling": "正在取消批量导入...",
"batchImportCancelFailed": "取消批量导入失败:{message}",
"batchImportNoUrls": "请输入至少一个 URL 或文件路径",
"batchImportNoDirectory": "请输入目录路径",
"batchImportBrowseFailed": "浏览目录失败:{message}",
"batchImportDirectorySelected": "已选择目录:{path}",
"noRecipesSelected": "未选择任何配方",
"noMissingLorasInSelection": "在选定的配方中未找到缺失的 LoRAs",
"noLoraRootConfigured": "未配置 LoRA 根目录。请在设置中设置默认的 LoRA 根目录。"
},
"models": {
"noModelsSelected": "未选中模型",
@@ -1479,6 +1737,8 @@
"mappingSaveFailed": "保存基础模型映射失败:{message}",
"downloadTemplatesUpdated": "下载路径模板已更新",
"downloadTemplatesFailed": "保存下载路径模板失败:{message}",
"recipesPathUpdated": "配方存储路径已更新",
"recipesPathSaveFailed": "更新配方存储路径失败:{message}",
"settingsUpdated": "设置已更新:{setting}",
"compactModeToggled": "紧凑模式 {state}",
"settingSaveFailed": "保存设置失败:{message}",
@@ -1529,8 +1789,8 @@
},
"triggerWords": {
"loadFailed": "无法加载训练词",
"tooLong": "触发词不能超过100个词",
"tooMany": "最多允许30个触发词",
"tooLong": "触发词不能超过500个词",
"tooMany": "最多允许100个触发词",
"alreadyExists": "该触发词已存在",
"updateSuccess": "触发词更新成功",
"updateFailed": "触发词更新失败",
@@ -1591,6 +1851,8 @@
"deleteFailed": "删除 {type} 失败:{message}",
"excludeSuccess": "{type} 排除成功",
"excludeFailed": "排除 {type} 失败:{message}",
"restoreSuccess": "{type} 已成功恢复",
"restoreFailed": "恢复 {type} 失败:{message}",
"fileNameUpdated": "文件名更新成功",
"fileRenameFailed": "重命名文件失败:{error}",
"previewUpdated": "预览图片更新成功",
@@ -1622,6 +1884,37 @@
"moveFailed": "Failed to move item: {message}"
}
},
"doctor": {
"kicker": "系统诊断",
"title": "医生",
"buttonTitle": "运行诊断并尝试修复常见问题",
"loading": "正在检查当前环境...",
"footer": "如果修复后问题仍然存在,可以导出诊断包进一步排查。",
"summary": {
"idle": "检查设置、缓存健康状况和前后端 UI 版本是否一致。",
"ok": "当前环境未发现活动问题。",
"warning": "发现 {count} 个问题,大多数可以直接在这里处理。",
"error": "发现 {count} 个需要尽快处理的问题。"
},
"status": {
"ok": "健康",
"warning": "需要关注",
"error": "需要处理"
},
"actions": {
"runAgain": "重新检查",
"exportBundle": "导出诊断包"
},
"toast": {
"loadFailed": "加载诊断结果失败:{message}",
"repairSuccess": "缓存重建完成。",
"repairFailed": "缓存重建失败:{message}",
"exportSuccess": "诊断包已导出。",
"exportFailed": "导出诊断包失败:{message}",
"conflictsResolved": "已解决 {count} 个文件名冲突。",
"conflictsResolveFailed": "解决文件名冲突失败:{message}"
}
},
"banners": {
"versionMismatch": {
"title": "检测到应用更新",

View File

@@ -1,8 +1,11 @@
{
"common": {
"cancel": "取消",
"confirm": "確認",
"actions": {
"save": "儲存",
"cancel": "取消",
"confirm": "確認",
"delete": "刪除",
"move": "移動",
"refresh": "重新整理",
@@ -11,7 +14,9 @@
"backToTop": "回到頂部",
"settings": "設定",
"help": "說明",
"add": "新增"
"add": "新增",
"close": "關閉",
"menu": "選單"
},
"status": {
"loading": "載入中...",
@@ -171,6 +176,9 @@
"success": "成功修復 {count} 個配方。",
"cancelled": "修復已取消。已修復 {count} 個配方。",
"error": "配方修復失敗:{message}"
},
"manageExcludedModels": {
"label": "管理已排除的模型"
}
},
"header": {
@@ -218,12 +226,14 @@
"presetOverwriteConfirm": "預設 \"{name}\" 已存在。是否覆蓋?",
"presetNamePlaceholder": "預設名稱...",
"baseModel": "基礎模型",
"baseModelSearchPlaceholder": "搜尋基礎模型...",
"modelTags": "標籤(前 20",
"modelTypes": "Model Types",
"modelTypes": "模型類型",
"license": "授權",
"noCreditRequired": "無需署名",
"allowSellingGeneratedContent": "允許銷售",
"noTags": "無標籤",
"noBaseModelMatches": "沒有基礎模型符合目前的搜尋。",
"clearAll": "清除所有篩選",
"any": "任一",
"all": "全部",
@@ -246,6 +256,33 @@
"civitaiApiKey": "Civitai API 金鑰",
"civitaiApiKeyPlaceholder": "請輸入您的 Civitai API 金鑰",
"civitaiApiKeyHelp": "用於從 Civitai 下載模型時的身份驗證",
"civitaiHost": {
"label": "Civitai 站點",
"help": "選擇使用「在 Civitai 中查看」時預設開啟的 Civitai 站點。",
"options": {
"com": "civitai.com僅 SFW",
"red": "civitai.red無限制"
}
},
"downloadBackend": {
"label": "下載後端",
"help": "選擇模型檔案的下載方式。Python 使用內建下載器。aria2 使用實驗性的外部下載程序。",
"options": {
"python": "Python內建",
"aria2": "aria2實驗性"
}
},
"aria2cPath": {
"label": "aria2c 路徑",
"help": "可選的 aria2c 可執行檔路徑。留空則使用系統 PATH 中的 aria2c。",
"placeholder": "留空則使用 PATH 中的 aria2c"
},
"aria2HelpLink": "了解如何設定 aria2 下載後端",
"civitaiHostBanner": {
"title": "已提供 Civitai 站點偏好設定",
"content": "Civitai 現在使用 civitai.com 提供 SFW 內容,使用 civitai.red 提供無限制內容。你可以在設定中變更預設開啟的站點。",
"openSettings": "開啟設定"
},
"openSettingsFileLocation": {
"label": "開啟設定資料夾",
"tooltip": "開啟包含 settings.json 的資料夾",
@@ -256,10 +293,13 @@
},
"sections": {
"contentFiltering": "內容過濾",
"downloads": "下載",
"videoSettings": "影片設定",
"layoutSettings": "版面設定",
"misc": "其他",
"backup": "備份",
"folderSettings": "預設根目錄",
"recipeSettings": "配方",
"extraFolderPaths": "額外資料夾路徑",
"downloadPathTemplates": "下載路徑範本",
"priorityTags": "優先標籤",
@@ -287,7 +327,15 @@
"blurNsfwContent": "模糊 NSFW 內容",
"blurNsfwContentHelp": "模糊成熟NSFW內容預覽圖片",
"showOnlySfw": "僅顯示 SFW 結果",
"showOnlySfwHelp": "瀏覽和搜尋時過濾所有 NSFW 內容"
"showOnlySfwHelp": "瀏覽和搜尋時過濾所有 NSFW 內容",
"matureBlurThreshold": "成人內容模糊閾值",
"matureBlurThresholdHelp": "設定當啟用 NSFW 模糊時,從哪個評級級別開始模糊過濾。",
"matureBlurThresholdOptions": {
"pg13": "PG13 及以上",
"r": "R 及以上(預設)",
"x": "X 及以上",
"xxx": "僅 XXX"
}
},
"videoSettings": {
"autoplayOnHover": "滑鼠懸停自動播放影片",
@@ -311,6 +359,54 @@
"saveFailed": "無法儲存跳過路徑:{message}"
}
},
"backup": {
"autoEnabled": "自動備份",
"autoEnabledHelp": "每天建立一次本地快照,並依保留政策保留最新快照。",
"retention": "保留數量",
"retentionHelp": "在刪除舊快照之前,要保留多少自動快照。",
"management": "備份管理",
"managementHelp": "匯出目前的使用者狀態,或從備份封存中還原。",
"scopeHelp": "備份你的設定、下載歷史與模型更新狀態。不包含模型檔案或可重建的快取。",
"locationSummary": "目前備份位置",
"openFolderButton": "開啟備份資料夾",
"openFolderSuccess": "已開啟備份資料夾",
"openFolderFailed": "無法開啟備份資料夾",
"locationCopied": "備份路徑已複製到剪貼簿:{{path}}",
"locationClipboardFallback": "備份路徑:{{path}}",
"exportButton": "匯出備份",
"exportSuccess": "備份匯出成功。",
"exportFailed": "備份匯出失敗:{message}",
"importButton": "匯入備份",
"importConfirm": "要匯入此備份並覆寫本機使用者狀態嗎?",
"importSuccess": "備份匯入成功。",
"importFailed": "備份匯入失敗:{message}",
"latestSnapshot": "最近快照",
"latestAutoSnapshot": "最近自動快照",
"snapshotCount": "已儲存快照",
"noneAvailable": "目前還沒有快照"
},
"downloadSkipBaseModels": {
"label": "跳過這些基礎模型的下載",
"help": "適用於所有下載流程。這裡只能選擇受支援的基礎模型。",
"searchPlaceholder": "篩選基礎模型...",
"empty": "沒有符合目前搜尋條件的基礎模型。",
"summary": {
"none": "未選擇",
"count": "已選擇 {count} 項"
},
"actions": {
"edit": "編輯",
"collapse": "收起",
"clear": "清空"
},
"validation": {
"saveFailed": "無法儲存已排除的基礎模型:{message}"
}
},
"skipPreviouslyDownloadedModelVersions": {
"label": "跳過已下載的模型版本",
"help": "啟用後如果下載歷史服務記錄顯示該版本已下載LoRA Manager 將跳過下載該模型版本。適用於所有下載流程。"
},
"layoutSettings": {
"displayDensity": "顯示密度",
"displayDensityOptions": {
@@ -333,6 +429,8 @@
"hover": "滑鼠懸停顯示"
},
"cardInfoDisplayHelp": "選擇何時顯示模型資訊與操作按鈕",
"showVersionOnCard": "在卡片上顯示版本",
"showVersionOnCardHelp": "在模型卡片上顯示或隱藏版本名稱",
"modelCardFooterAction": "模型卡片按鈕操作",
"modelCardFooterActionOptions": {
"exampleImages": "開啟範例圖片",
@@ -359,8 +457,29 @@
"defaultUnetRootHelp": "設定下載、匯入和移動時的預設 Diffusion Model (UNET) 根目錄",
"defaultEmbeddingRoot": "Embedding 根目錄",
"defaultEmbeddingRootHelp": "設定下載、匯入和移動時的預設 Embedding 根目錄",
"recipesPath": "配方儲存路徑",
"recipesPathHelp": "已儲存配方的可選自訂目錄。留空則使用第一個 LoRA 根目錄下的 recipes 資料夾。",
"recipesPathPlaceholder": "/path/to/recipes",
"recipesPathMigrating": "正在遷移配方儲存...",
"noDefault": "未設定預設"
},
"extraFolderPaths": {
"title": "額外資料夾路徑",
"description": "LoRA Manager 專屬的額外模型根目錄。從 ComfyUI 標準資料夾之外的位置載入模型,特別適合管理大型模型庫,避免影響 ComfyUI 效能。",
"restartRequired": "Requires restart to take effect",
"modelTypes": {
"lora": "LoRA 路徑",
"checkpoint": "Checkpoint 路徑",
"unet": "Diffusion 模型路徑",
"embedding": "Embedding 路徑"
},
"pathPlaceholder": "/額外/模型/路徑",
"saveSuccess": "額外資料夾路徑已更新,需要重啟才能生效。",
"saveError": "更新額外資料夾路徑失敗:{message}",
"validation": {
"duplicatePath": "此路徑已設定"
}
},
"priorityTags": {
"title": "優先標籤",
"description": "為每種模型類型自訂標籤的優先順序 (例如: character, concept, style(toon|toon_style))",
@@ -423,6 +542,21 @@
"downloadLocationHelp": "輸入從 Civitai 下載範例圖片要儲存的資料夾路徑",
"autoDownload": "自動下載範例圖片",
"autoDownloadHelp": "自動為沒有範例圖片的模型下載範例圖片(需設定下載位置)",
"openMode": "開啟範例圖片動作",
"openModeHelp": "選擇是在伺服器上開啟、複製對應的本機路徑,或啟動自訂 URI。",
"openModeOptions": {
"system": "在伺服器上開啟",
"clipboard": "複製本機路徑",
"uriTemplate": "開啟自訂 URI"
},
"localRoot": "本機範例圖片根目錄",
"localRootHelp": "可選的本機或掛載根目錄,用於對應伺服器上的範例圖片目錄。若留白,則會重用伺服器路徑。",
"localRootPlaceholder": "例如:/Volumes/ComfyUI/example_images",
"uriTemplate": "開啟 URI 範本",
"uriTemplateHelp": "使用自訂深層連結,例如檔案 URI 或 Shortcuts 連結。",
"uriTemplatePlaceholder": "例如shortcuts://run-shortcut?name=Open%20Finder&input=text&text={{encoded_local_path}}",
"uriTemplatePlaceholders": "可用佔位符:{{local_path}}、{{encoded_local_path}}、{{relative_path}}、{{encoded_relative_path}}、{{file_uri}}、{{encoded_file_uri}}",
"openModeWikiLink": "了解遠端開啟模式",
"optimizeImages": "最佳化下載圖片",
"optimizeImagesHelp": "最佳化範例圖片以減少檔案大小並提升載入速度(會保留原有的 metadata",
"download": "下載",
@@ -485,23 +619,6 @@
"proxyPassword": "密碼(選填)",
"proxyPasswordPlaceholder": "password",
"proxyPasswordHelp": "代理驗證所需的密碼(如有需要)"
},
"extraFolderPaths": {
"title": "額外資料夾路徑",
"help": "在 ComfyUI 的標準路徑之外新增額外的模型資料夾。這些路徑單獨儲存,並與預設資料夾一起掃描。",
"description": "設定額外的資料夾以掃描模型。這些路徑是 LoRA Manager 特有的,將與 ComfyUI 的預設路徑合併。",
"modelTypes": {
"lora": "LoRA 路徑",
"checkpoint": "Checkpoint 路徑",
"unet": "Diffusion 模型路徑",
"embedding": "Embedding 路徑"
},
"pathPlaceholder": "/額外/模型/路徑",
"saveSuccess": "額外資料夾路徑已更新。",
"saveError": "更新額外資料夾路徑失敗:{message}",
"validation": {
"duplicatePath": "此路徑已設定"
}
}
},
"loras": {
@@ -570,7 +687,8 @@
"autoOrganize": "自動整理所選模型",
"skipMetadataRefresh": "跳過所選模型的元數據更新",
"resumeMetadataRefresh": "恢復所選模型的元數據更新",
"deleteAll": "刪除全部模型",
"deleteAll": "刪除所選",
"downloadMissingLoras": "下載缺失的 LoRAs",
"clear": "清除選取",
"skipMetadataRefreshCount": "跳過({count} 個模型)",
"resumeMetadataRefreshCount": "恢復({count} 個模型)",
@@ -600,6 +718,7 @@
"moveToFolder": "移動到資料夾",
"repairMetadata": "修復元數據",
"excludeModel": "排除模型",
"restoreModel": "還原模型",
"deleteModel": "刪除模型",
"shareRecipe": "分享配方",
"viewAllLoras": "檢視全部 LoRA",
@@ -641,6 +760,8 @@
"root": "根目錄",
"browseFolders": "瀏覽資料夾:",
"downloadAndSaveRecipe": "下載並儲存配方",
"importRecipeOnly": "僅匯入配方",
"importAndDownload": "匯入並下載",
"downloadMissingLoras": "下載缺少的 LoRA",
"saveRecipe": "儲存配方",
"loraCountInfo": "(庫存 {existing}/{total}",
@@ -682,7 +803,11 @@
"lorasCountAsc": "最少"
},
"refresh": {
"title": "重新整理配方列表"
"title": "重新整理配方列表",
"quick": "同步變更",
"quickTooltip": "同步變更 - 快速重新整理而不重建快取",
"full": "重建快取",
"fullTooltip": "重建快取 - 重新掃描所有配方檔案"
},
"filteredByLora": "已依 LoRA 篩選",
"favorites": {
@@ -722,6 +847,64 @@
"failed": "修復配方失敗:{message}",
"missingId": "無法修復配方:缺少配方 ID"
}
},
"batchImport": {
"title": "批量匯入配方",
"action": "批量匯入",
"urlList": "URL 列表",
"directory": "目錄",
"urlDescription": "輸入圖像 URL 或本地檔案路徑(每行一個)。每個都將作為配方匯入。",
"directoryDescription": "輸入目錄路徑以匯入該資料夾中的所有圖像。",
"urlsLabel": "圖像 URL 或本地路徑",
"urlsPlaceholder": "https://civitai.com/images/...\nhttps://civitai.com/images/...\nC:/path/to/image.png\n...",
"urlsHint": "每行輸入一個 URL 或路徑",
"directoryPath": "目錄路徑",
"directoryPlaceholder": "/path/to/images/folder",
"browse": "瀏覽",
"recursive": "包含子目錄",
"tagsOptional": "標籤(可選,應用於所有配方)",
"tagsPlaceholder": "輸入以逗號分隔的標籤",
"tagsHint": "標籤將被添加到所有匯入的配方中",
"skipNoMetadata": "跳過無元資料的圖像",
"skipNoMetadataHelp": "沒有 LoRA 元資料的圖像將被自動跳過。",
"start": "開始匯入",
"startImport": "開始匯入",
"importing": "匯入中...",
"progress": "進度",
"total": "總計",
"success": "成功",
"failed": "失敗",
"skipped": "跳過",
"current": "當前",
"currentItem": "當前項目",
"preparing": "準備中...",
"cancel": "取消",
"cancelImport": "取消匯入",
"cancelled": "匯入已取消",
"completed": "匯入完成",
"completedWithErrors": "匯入完成但有錯誤",
"completedSuccess": "成功匯入 {count} 個配方",
"successCount": "成功",
"failedCount": "失敗",
"skippedCount": "跳過",
"totalProcessed": "總計處理",
"viewDetails": "查看詳情",
"newImport": "新建匯入",
"manualPathEntry": "請手動輸入目錄路徑。此瀏覽器中檔案瀏覽器不可用。",
"batchImportDirectorySelected": "已選擇目錄:{path}",
"batchImportManualEntryRequired": "檔案瀏覽器不可用。請手動輸入目錄路徑。",
"backToParent": "返回上級目錄",
"folders": "資料夾",
"folderCount": "{count} 個資料夾",
"imageFiles": "圖像檔案",
"images": "圖像",
"imageCount": "{count} 個圖像",
"selectFolder": "選擇此資料夾",
"errors": {
"enterUrls": "請輸入至少一個 URL 或路徑",
"enterDirectory": "請輸入目錄路徑",
"startFailed": "啟動匯入失敗:{message}"
}
}
},
"checkpoints": {
@@ -731,7 +914,8 @@
"diffusion_model": "Diffusion Model"
},
"contextMenu": {
"moveToOtherTypeFolder": "移動到 {otherType} 資料夾"
"moveToOtherTypeFolder": "移動到 {otherType} 資料夾",
"sendToWorkflow": "傳送到工作流"
}
},
"embeddings": {
@@ -744,13 +928,23 @@
"unpinSidebar": "取消固定側邊欄",
"switchToListView": "切換至列表檢視",
"switchToTreeView": "切換到樹狀檢視",
"recursiveOn": "搜尋子資料夾",
"recursiveOff": "僅搜尋目前資料夾",
"recursiveOn": "包含子資料夾",
"recursiveOff": "僅目前資料夾",
"recursiveUnavailable": "遞迴搜尋僅能在樹狀檢視中使用",
"collapseAllDisabled": "列表檢視下不可用",
"dragDrop": {
"unableToResolveRoot": "無法確定移動的目標路徑。",
"moveUnsupported": "Move is not supported for this item."
"moveUnsupported": "Move is not supported for this item.",
"createFolderHint": "放開以建立新資料夾",
"newFolderName": "新資料夾名稱",
"folderNameHint": "按 Enter 確認Escape 取消",
"emptyFolderName": "請輸入資料夾名稱",
"invalidFolderName": "資料夾名稱包含無效字元",
"noDragState": "未找到待處理的拖放操作"
},
"empty": {
"noFolders": "未找到資料夾",
"dragHint": "將項目拖到此處以建立資料夾"
}
},
"statistics": {
@@ -815,6 +1009,8 @@
"earlyAccess": "早期存取",
"earlyAccessTooltip": "需要早期存取",
"inLibrary": "已在庫存",
"downloaded": "已下載",
"downloadedTooltip": "先前已下載,但目前不在你的庫中。",
"alreadyInLibrary": "已在庫存",
"autoOrganizedPath": "[依路徑範本自動整理]",
"errors": {
@@ -905,6 +1101,14 @@
"save": "更新基礎模型",
"cancel": "取消"
},
"bulkDownloadMissingLoras": {
"title": "下載缺失的 LoRAs",
"message": "發現 {uniqueCount} 個獨特的缺失 LoRAs從選取食譜中的 {totalCount} 個總數)。",
"previewTitle": "要下載的 LoRAs",
"moreItems": "...還有 {count} 個",
"note": "檔案將使用預設路徑模板下載。根據 LoRAs 的數量,這可能需要一些時間。",
"downloadButton": "下載 {count} 個 LoRA(s)"
},
"exampleAccess": {
"title": "本機範例圖片",
"message": "此模型未找到本機範例圖片。可選擇:",
@@ -956,7 +1160,9 @@
"viewOnCivitai": "在 Civitai 查看",
"viewOnCivitaiText": "在 Civitai 查看",
"viewCreatorProfile": "查看創作者個人檔案",
"openFileLocation": "開啟檔案位置"
"openFileLocation": "開啟檔案位置",
"sendToWorkflow": "傳送到 ComfyUI",
"sendToWorkflowText": "傳送到 ComfyUI"
},
"openFileLocation": {
"success": "檔案位置已成功開啟",
@@ -964,6 +1170,9 @@
"copied": "路徑已複製到剪貼簿:{{path}}",
"clipboardFallback": "路徑:{{path}}"
},
"sendToWorkflow": {
"noFilePath": "無法傳送到 ComfyUI沒有可用的檔案路徑"
},
"metadata": {
"version": "版本",
"fileName": "檔案名稱",
@@ -1000,6 +1209,8 @@
"cancel": "取消編輯",
"save": "儲存變更",
"addPlaceholder": "輸入或點擊下方建議",
"editWord": "編輯觸發詞",
"editPlaceholder": "編輯觸發詞",
"copyWord": "複製觸發詞",
"deleteWord": "刪除觸發詞",
"suggestions": {
@@ -1071,17 +1282,33 @@
"days": "{count}天後"
},
"badges": {
"current": "目前版本",
"current": "已開啟版本",
"currentTooltip": "這是你用來開啟此彈窗的版本",
"inLibrary": "已在庫中",
"inLibraryTooltip": "此版本已存在於你的本地庫中",
"downloaded": "已下載",
"downloadedTooltip": "此版本之前下載過,但目前不在你的本地庫中",
"newer": "較新版本",
"newerTooltip": "此版本比你本地的最新版本更新",
"earlyAccess": "搶先體驗",
"ignored": "已忽略"
"earlyAccessTooltip": "此版本目前需要 Civitai 搶先體驗權限",
"ignored": "已忽略",
"ignoredTooltip": "此版本已關閉更新通知",
"onSiteOnly": "僅站內生成",
"onSiteOnlyTooltip": "此版本僅在 Civitai 站內可用,無法下載"
},
"actions": {
"download": "下載",
"downloadTooltip": "下載此版本",
"downloadEarlyAccessTooltip": "從 Civitai 下載此搶先體驗版本",
"downloadNotAllowedTooltip": "此版本僅在 Civitai 站內可用,無法下載",
"delete": "刪除",
"deleteTooltip": "刪除此本地版本",
"ignore": "忽略",
"unignore": "取消忽略",
"ignoreTooltip": "忽略此版本的更新通知",
"unignoreTooltip": "恢復此版本的更新通知",
"viewVersionOnCivitai": "在 Civitai 上查看版本",
"earlyAccessTooltip": "需要購買搶先體驗",
"resumeModelUpdates": "恢復追蹤此模型的更新",
"ignoreModelUpdates": "忽略此模型的更新",
@@ -1221,7 +1448,9 @@
"recipeReplaced": "配方已取代於工作流",
"recipeFailedToSend": "傳送配方到工作流失敗",
"noMatchingNodes": "目前工作流程中沒有相容的節點",
"noTargetNodeSelected": "未選擇目標節點"
"noTargetNodeSelected": "未選擇目標節點",
"modelUpdated": "模型已更新到工作流",
"modelFailed": "更新模型節點失敗"
},
"nodeSelector": {
"recipe": "配方",
@@ -1235,6 +1464,10 @@
"opened": "範例圖片資料夾已開啟",
"openingFolder": "正在開啟範例圖片資料夾",
"failedToOpen": "開啟範例圖片資料夾失敗",
"copiedPath": "路徑已複製到剪貼簿:{{path}}",
"clipboardFallback": "路徑:{{path}}",
"copiedUri": "連結已複製到剪貼簿:{{uri}}",
"uriClipboardFallback": "連結:{{uri}}",
"setupRequired": "範例圖片儲存",
"setupDescription": "要新增自訂範例圖片,您需要先設定下載位置。",
"setupUsage": "此路徑用於儲存下載的範例圖片和自訂圖片。",
@@ -1342,7 +1575,14 @@
"showWechatQR": "顯示微信二維碼",
"hideWechatQR": "隱藏微信二維碼"
},
"footer": "感謝您使用 LoRA 管理器!❤️"
"footer": "感謝您使用 LoRA 管理器!❤️",
"supporters": {
"title": "感謝所有支持者",
"subtitle": "感謝 {count} 位支持者讓這個專案成為可能",
"specialThanks": "特別感謝",
"allSupporters": "所有支持者",
"totalCount": "共 {count} 位支持者"
}
},
"toast": {
"general": {
@@ -1365,6 +1605,7 @@
"pleaseSelectVersion": "請選擇一個版本",
"versionExists": "此版本已存在於您的庫中",
"downloadCompleted": "下載成功完成",
"downloadSkippedByBaseModel": "由於基礎模型 {baseModel} 已被排除,已跳過下載",
"autoOrganizeSuccess": "自動整理已成功完成,共 {count} 個 {type} 已整理",
"autoOrganizePartialSuccess": "自動整理完成:已移動 {success} 個,{failures} 個失敗,共 {total} 個模型",
"autoOrganizeFailed": "自動整理失敗:{error}",
@@ -1376,13 +1617,19 @@
"loadFailed": "載入 {modelType} 失敗:{message}",
"refreshComplete": "刷新完成",
"refreshFailed": "刷新配方失敗:{message}",
"syncComplete": "同步完成",
"syncFailed": "同步配方失敗:{message}",
"updateFailed": "更新配方失敗:{error}",
"updateError": "更新配方錯誤:{message}",
"nameSaved": "配方「{name}」已成功儲存",
"nameUpdated": "配方名稱已更新",
"tagsUpdated": "配方標籤已更新",
"sourceUrlUpdated": "來源網址已更新",
"promptUpdated": "提示詞更新成功",
"negativePromptUpdated": "負面提示詞更新成功",
"promptEditorHint": "按 Enter 儲存Shift+Enter 換行",
"noRecipeId": "無配方 ID",
"sendToWorkflowFailed": "傳送配方到工作流失敗:{message}",
"copyFailed": "複製配方語法錯誤:{message}",
"noMissingLoras": "無缺少的 LoRA 可下載",
"missingLorasInfoFailed": "取得缺少 LoRA 資訊失敗",
@@ -1410,9 +1657,20 @@
"processingError": "處理錯誤:{message}",
"folderBrowserError": "載入資料夾瀏覽器錯誤:{message}",
"recipeSaveFailed": "儲存配方失敗:{error}",
"recipeSaved": "配方儲存成功",
"importFailed": "匯入失敗:{message}",
"folderTreeFailed": "載入資料夾樹狀結構失敗",
"folderTreeError": "載入資料夾樹狀結構錯誤"
"folderTreeError": "載入資料夾樹狀結構錯誤",
"batchImportFailed": "啟動批量匯入失敗:{message}",
"batchImportCancelling": "正在取消批量匯入...",
"batchImportCancelFailed": "取消批量匯入失敗:{message}",
"batchImportNoUrls": "請輸入至少一個 URL 或檔案路徑",
"batchImportNoDirectory": "請輸入目錄路徑",
"batchImportBrowseFailed": "瀏覽目錄失敗:{message}",
"batchImportDirectorySelected": "已選擇目錄:{path}",
"noRecipesSelected": "未選取任何食譜",
"noMissingLorasInSelection": "在選取的食譜中未找到缺失的 LoRAs",
"noLoraRootConfigured": "未配置 LoRA 根目錄。請在設定中設定預設的 LoRA 根目錄。"
},
"models": {
"noModelsSelected": "未選擇模型",
@@ -1479,6 +1737,8 @@
"mappingSaveFailed": "儲存基礎模型對應失敗:{message}",
"downloadTemplatesUpdated": "下載路徑範本已更新",
"downloadTemplatesFailed": "儲存下載路徑範本失敗:{message}",
"recipesPathUpdated": "配方儲存路徑已更新",
"recipesPathSaveFailed": "更新配方儲存路徑失敗:{message}",
"settingsUpdated": "設定已更新:{setting}",
"compactModeToggled": "緊湊模式已{state}",
"settingSaveFailed": "儲存設定失敗:{message}",
@@ -1529,8 +1789,8 @@
},
"triggerWords": {
"loadFailed": "無法載入訓練詞",
"tooLong": "觸發詞不可超過 100 個字",
"tooMany": "最多允許 30 個觸發詞",
"tooLong": "觸發詞不可超過 500 個字",
"tooMany": "最多允許 100 個觸發詞",
"alreadyExists": "此觸發詞已存在",
"updateSuccess": "觸發詞已更新",
"updateFailed": "更新觸發詞失敗",
@@ -1591,6 +1851,8 @@
"deleteFailed": "刪除 {type} 失敗:{message}",
"excludeSuccess": "{type} 已成功排除",
"excludeFailed": "排除 {type} 失敗:{message}",
"restoreSuccess": "{type} 已成功還原",
"restoreFailed": "還原 {type} 失敗:{message}",
"fileNameUpdated": "檔案名稱已成功更新",
"fileRenameFailed": "重新命名檔案失敗:{error}",
"previewUpdated": "預覽圖片已成功更新",
@@ -1622,6 +1884,37 @@
"moveFailed": "Failed to move item: {message}"
}
},
"doctor": {
"kicker": "系統診斷",
"title": "醫生",
"buttonTitle": "執行診斷與常見修復",
"loading": "正在檢查環境...",
"footer": "如果修復後問題仍然存在,請匯出診斷套件。",
"summary": {
"idle": "針對設定、快取完整性與 UI 一致性執行健康檢查。",
"ok": "目前環境中未發現任何活動中的問題。",
"warning": "找到 {count} 個問題。大多可以直接在此面板修復。",
"error": "應先處理 {count} 個問題,應用程式才能完全正常。"
},
"status": {
"ok": "健康",
"warning": "需要注意",
"error": "需要處理"
},
"actions": {
"runAgain": "重新執行",
"exportBundle": "匯出套件"
},
"toast": {
"loadFailed": "載入診斷失敗:{message}",
"repairSuccess": "快取重建完成。",
"repairFailed": "快取重建失敗:{message}",
"exportSuccess": "診斷套件已匯出。",
"exportFailed": "匯出診斷套件失敗:{message}",
"conflictsResolved": "已解決 {count} 個檔案名稱衝突。",
"conflictsResolveFailed": "解決檔案名稱衝突失敗:{message}"
}
},
"banners": {
"versionMismatch": {
"title": "偵測到應用程式更新",

3
package-lock.json generated
View File

@@ -114,7 +114,6 @@
}
],
"license": "MIT",
"peer": true,
"engines": {
"node": ">=18"
},
@@ -138,7 +137,6 @@
}
],
"license": "MIT",
"peer": true,
"engines": {
"node": ">=18"
}
@@ -1613,7 +1611,6 @@
"integrity": "sha512-MyL55p3Ut3cXbeBEG7Hcv0mVM8pp8PBNWxRqchZnSfAiES1v1mRnMeFfaHWIPULpwsYfvO+ZmMZz5tGCnjzDUQ==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"cssstyle": "^4.0.1",
"data-urls": "^5.0.0",

View File

@@ -1,8 +1,9 @@
import os
import platform
import posixpath
import threading
from pathlib import Path
import folder_paths # type: ignore
import folder_paths # type: ignore
from typing import Any, Dict, Iterable, List, Mapping, Optional, Set, Tuple
import logging
import json
@@ -10,16 +11,84 @@ import urllib.parse
import time
from .utils.cache_paths import CacheType, get_cache_file_path, get_legacy_cache_paths
from .utils.settings_paths import ensure_settings_file, get_settings_dir, load_settings_template
from .utils.settings_paths import (
ensure_settings_file,
get_settings_dir,
load_settings_template,
)
# Use an environment variable to control standalone mode
standalone_mode = os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1" or os.environ.get("HF_HUB_DISABLE_TELEMETRY", "0") == "0"
standalone_mode = (
os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1"
or os.environ.get("HF_HUB_DISABLE_TELEMETRY", "0") == "0"
)
logger = logging.getLogger(__name__)
def _normalize_root_identity(path: str) -> str:
"""Normalize a root path for comparisons across slash styles."""
normalized = posixpath.normpath(path.strip().replace("\\", "/"))
if len(normalized) >= 2 and normalized[1] == ":":
return normalized.lower()
return normalized
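# Illustrative only, not part of the diff: under these rules slash style and
# Windows drive-letter casing do not matter when comparing roots, e.g.
# _normalize_root_identity("C:\\Models\\Loras") -> "c:/models/loras"
# _normalize_root_identity("C:/Models/Loras/") -> "c:/models/loras"
# _normalize_root_identity("/srv/models/loras") -> "/srv/models/loras"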
def _resolve_valid_default_root(
current: str, primary_paths: List[str], allowed_paths: List[str], name: str
) -> str:
"""Return a valid default root from the current primary/extra path set."""
valid_paths = [path for path in primary_paths if isinstance(path, str) and path.strip()]
fallback_paths: List[str] = []
seen: Set[str] = set()
for path in allowed_paths:
if not isinstance(path, str):
continue
stripped = path.strip()
if not stripped:
continue
identity = _normalize_root_identity(stripped)
if identity in seen:
continue
seen.add(identity)
fallback_paths.append(stripped)
allowed = {_normalize_root_identity(path) for path in fallback_paths}
if current and _normalize_root_identity(current) in allowed:
return current
if not valid_paths:
if not fallback_paths:
return ""
if current:
logger.info(
"Repaired stale %s from '%s' to '%s' because it is not present in primary or extra roots",
name,
current,
fallback_paths[0],
)
else:
logger.info("Auto-setting %s to '%s'", name, fallback_paths[0])
return fallback_paths[0]
if current:
logger.info(
"Repaired stale %s from '%s' to '%s' because it is not present in primary or extra roots",
name,
current,
valid_paths[0],
)
else:
logger.info("Auto-setting %s to '%s'", name, valid_paths[0])
return valid_paths[0]
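# Illustrative only, not part of the diff: a sketch of how a stale default root
# is repaired, using hypothetical paths.
# _resolve_valid_default_root(
#     "D:/old/loras",  # stale current value
#     ["D:/ComfyUI/models/loras"],  # primary roots
#     ["D:/ComfyUI/models/loras", "E:/extra/loras"],  # primary + extra roots
#     "default_lora_root",
# )
# -> returns "D:/ComfyUI/models/loras" and logs the repair at INFO level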
def _normalize_folder_paths_for_comparison(
folder_paths: Mapping[str, Iterable[str]]
folder_paths: Mapping[str, Iterable[str]],
) -> Dict[str, Set[str]]:
"""Normalize folder paths for comparison across libraries."""
@@ -49,7 +118,7 @@ def _normalize_folder_paths_for_comparison(
def _normalize_library_folder_paths(
library_payload: Mapping[str, Any]
library_payload: Mapping[str, Any],
) -> Dict[str, Set[str]]:
"""Return normalized folder paths extracted from a library payload."""
@@ -76,9 +145,15 @@ class Config:
"""Global configuration for LoRA Manager"""
def __init__(self):
self.templates_path = os.path.join(os.path.dirname(os.path.dirname(__file__)), 'templates')
self.static_path = os.path.join(os.path.dirname(os.path.dirname(__file__)), 'static')
self.i18n_path = os.path.join(os.path.dirname(os.path.dirname(__file__)), 'locales')
self.templates_path = os.path.join(
os.path.dirname(os.path.dirname(__file__)), "templates"
)
self.static_path = os.path.join(
os.path.dirname(os.path.dirname(__file__)), "static"
)
self.i18n_path = os.path.join(
os.path.dirname(os.path.dirname(__file__)), "locales"
)
# Path mapping dictionary, target to link mapping
self._path_mappings: Dict[str, str] = {}
# Normalized preview root directories used to validate preview access
@@ -96,6 +171,7 @@ class Config:
self.extra_checkpoints_roots: List[str] = []
self.extra_unet_roots: List[str] = []
self.extra_embeddings_roots: List[str] = []
self.recipes_path: str = ""
# Scan symbolic links during initialization
self._initialize_symlink_mappings()
@@ -152,17 +228,21 @@ class Config:
default_library = libraries.get("default", {})
target_folder_paths = {
'loras': list(self.loras_roots),
'checkpoints': list(self.checkpoints_roots or []),
'unet': list(self.unet_roots or []),
'embeddings': list(self.embeddings_roots or []),
"loras": list(self.loras_roots),
"checkpoints": list(self.checkpoints_roots or []),
"unet": list(self.unet_roots or []),
"embeddings": list(self.embeddings_roots or []),
}
normalized_target_paths = _normalize_folder_paths_for_comparison(target_folder_paths)
normalized_target_paths = _normalize_folder_paths_for_comparison(
target_folder_paths
)
normalized_default_paths: Optional[Dict[str, Set[str]]] = None
if isinstance(default_library, Mapping):
normalized_default_paths = _normalize_library_folder_paths(default_library)
normalized_default_paths = _normalize_library_folder_paths(
default_library
)
if (
not comfy_library
@@ -180,47 +260,89 @@ class Config:
"Failed to rename legacy 'default' library: %s", rename_error
)
default_lora_root = comfy_library.get("default_lora_root", "")
if not default_lora_root and len(self.loras_roots) == 1:
default_lora_root = self.loras_roots[0]
default_lora_root = _resolve_valid_default_root(
comfy_library.get("default_lora_root", ""),
list(self.loras_roots or []),
list(self.loras_roots or [])
+ list(comfy_library.get("extra_folder_paths", {}).get("loras", []) or []),
"default_lora_root",
)
default_checkpoint_root = comfy_library.get("default_checkpoint_root", "")
if (not default_checkpoint_root and self.checkpoints_roots and
len(self.checkpoints_roots) == 1):
default_checkpoint_root = self.checkpoints_roots[0]
default_checkpoint_root = _resolve_valid_default_root(
comfy_library.get("default_checkpoint_root", ""),
list(self.checkpoints_roots or []),
list(self.checkpoints_roots or [])
+ list(comfy_library.get("extra_folder_paths", {}).get("checkpoints", []) or []),
"default_checkpoint_root",
)
default_embedding_root = comfy_library.get("default_embedding_root", "")
if (not default_embedding_root and self.embeddings_roots and
len(self.embeddings_roots) == 1):
default_embedding_root = self.embeddings_roots[0]
default_embedding_root = _resolve_valid_default_root(
comfy_library.get("default_embedding_root", ""),
list(self.embeddings_roots or []),
list(self.embeddings_roots or [])
+ list(comfy_library.get("extra_folder_paths", {}).get("embeddings", []) or []),
"default_embedding_root",
)
metadata = dict(comfy_library.get("metadata", {}))
metadata.setdefault("display_name", "ComfyUI")
metadata["source"] = "comfyui"
extra_folder_paths = {}
if isinstance(comfy_library, Mapping):
existing_extra_paths = comfy_library.get("extra_folder_paths", {})
if isinstance(existing_extra_paths, Mapping):
extra_folder_paths = {
key: list(value) if isinstance(value, list) else []
for key, value in existing_extra_paths.items()
}
active_library_name = settings_service.get_active_library_name()
should_activate = (
active_library_name == "comfyui"
or self._should_activate_comfy_library(libraries, libraries_changed)
)
settings_service.upsert_library(
"comfyui",
folder_paths=target_folder_paths,
extra_folder_paths=extra_folder_paths,
default_lora_root=default_lora_root,
default_checkpoint_root=default_checkpoint_root,
default_embedding_root=default_embedding_root,
metadata=metadata,
activate=True,
activate=should_activate,
)
logger.info("Updated 'comfyui' library with current folder paths")
if should_activate:
logger.info("Updated 'comfyui' library with current folder paths")
else:
logger.info(
"Updated 'comfyui' library with current folder paths without activating it"
)
except Exception as e:
logger.warning(f"Failed to save folder paths: {e}")
def _should_activate_comfy_library(
self, libraries: Mapping[str, Any], libraries_changed: bool
) -> bool:
"""Return whether startup sync should make the ComfyUI library active."""
if libraries_changed:
return True
if not libraries:
return True
return "comfyui" in libraries and len(libraries) == 1
def _is_link(self, path: str) -> bool:
try:
if os.path.islink(path):
return True
if platform.system() == 'Windows':
if platform.system() == "Windows":
try:
import ctypes
FILE_ATTRIBUTE_REPARSE_POINT = 0x400
attrs = ctypes.windll.kernel32.GetFileAttributesW(str(path))
attrs = ctypes.windll.kernel32.GetFileAttributesW(str(path)) # type: ignore[attr-defined]
return attrs != -1 and (attrs & FILE_ATTRIBUTE_REPARSE_POINT)
except Exception as e:
logger.error(f"Error checking Windows reparse point: {e}")
@@ -233,18 +355,19 @@ class Config:
"""Check if a directory entry is a symlink, including Windows junctions."""
if entry.is_symlink():
return True
if platform.system() == 'Windows':
if platform.system() == "Windows":
try:
import ctypes
FILE_ATTRIBUTE_REPARSE_POINT = 0x400
attrs = ctypes.windll.kernel32.GetFileAttributesW(entry.path)
attrs = ctypes.windll.kernel32.GetFileAttributesW(entry.path) # type: ignore[attr-defined]
return attrs != -1 and (attrs & FILE_ATTRIBUTE_REPARSE_POINT)
except Exception:
pass
return False
def _normalize_path(self, path: str) -> str:
return os.path.normpath(path).replace(os.sep, '/')
return os.path.normpath(path).replace(os.sep, "/")
def _get_symlink_cache_path(self) -> Path:
canonical_path = get_cache_file_path(CacheType.SYMLINK, create_dir=True)
@@ -278,19 +401,18 @@ class Config:
if self._entry_is_symlink(entry):
try:
target = os.path.realpath(entry.path)
direct_symlinks.append([
self._normalize_path(entry.path),
self._normalize_path(target)
])
direct_symlinks.append(
[
self._normalize_path(entry.path),
self._normalize_path(target),
]
)
except OSError:
pass
except (OSError, PermissionError):
pass
return {
"roots": unique_roots,
"direct_symlinks": sorted(direct_symlinks)
}
return {"roots": unique_roots, "direct_symlinks": sorted(direct_symlinks)}
def _initialize_symlink_mappings(self) -> None:
start = time.perf_counter()
@@ -307,10 +429,14 @@ class Config:
cached_fingerprint = self._cached_fingerprint
# Check 1: First-level symlinks unchanged (catches new symlinks at root)
fingerprint_valid = cached_fingerprint and current_fingerprint == cached_fingerprint
fingerprint_valid = (
cached_fingerprint and current_fingerprint == cached_fingerprint
)
# Check 2: All cached mappings still valid (catches changes at any depth)
mappings_valid = self._validate_cached_mappings() if fingerprint_valid else False
mappings_valid = (
self._validate_cached_mappings() if fingerprint_valid else False
)
if fingerprint_valid and mappings_valid:
return
@@ -370,7 +496,9 @@ class Config:
for target, link in cached_mappings.items():
if not isinstance(target, str) or not isinstance(link, str):
continue
normalized_mappings[self._normalize_path(target)] = self._normalize_path(link)
normalized_mappings[self._normalize_path(target)] = self._normalize_path(
link
)
self._path_mappings = normalized_mappings
@@ -391,7 +519,9 @@ class Config:
parent_dir = loaded_path.parent
if parent_dir.name == "cache" and not any(parent_dir.iterdir()):
parent_dir.rmdir()
logger.info("Removed empty legacy cache directory: %s", parent_dir)
logger.info(
"Removed empty legacy cache directory: %s", parent_dir
)
except Exception:
pass
@@ -402,7 +532,9 @@ class Config:
exc,
)
else:
logger.info("Symlink cache loaded with %d mappings", len(self._path_mappings))
logger.info(
"Symlink cache loaded with %d mappings", len(self._path_mappings)
)
return True
@@ -414,7 +546,7 @@ class Config:
"""
for target, link in self._path_mappings.items():
# Convert normalized paths back to OS paths
link_path = link.replace('/', os.sep)
link_path = link.replace("/", os.sep)
# Check if symlink still exists
if not self._is_link(link_path):
@@ -427,7 +559,9 @@ class Config:
if actual_target != target:
logger.debug(
"Symlink target changed: %s -> %s (cached: %s)",
link_path, actual_target, target
link_path,
actual_target,
target,
)
return False
except OSError:
@@ -446,7 +580,11 @@ class Config:
try:
with cache_path.open("w", encoding="utf-8") as handle:
json.dump(payload, handle, ensure_ascii=False, indent=2)
logger.debug("Symlink cache saved to %s with %d mappings", cache_path, len(self._path_mappings))
logger.debug(
"Symlink cache saved to %s with %d mappings",
cache_path,
len(self._path_mappings),
)
except Exception as exc:
logger.info("Failed to write symlink cache %s: %s", cache_path, exc)
@@ -494,13 +632,13 @@ class Config:
self.add_path_mapping(entry.path, target_path)
except Exception as inner_exc:
logger.debug(
"Error processing directory entry %s: %s", entry.path, inner_exc
"Error processing directory entry %s: %s",
entry.path,
inner_exc,
)
except Exception as e:
logger.error(f"Error scanning links in {root}: {e}")
def add_path_mapping(self, link_path: str, target_path: str):
"""Add a symbolic link path mapping
target_path: actual target path
@@ -589,31 +727,38 @@ class Config:
preview_roots.update(self._expand_preview_root(root))
for root in self.extra_embeddings_roots or []:
preview_roots.update(self._expand_preview_root(root))
if self.recipes_path:
preview_roots.update(self._expand_preview_root(self.recipes_path))
for target, link in self._path_mappings.items():
preview_roots.update(self._expand_preview_root(target))
preview_roots.update(self._expand_preview_root(link))
self._preview_root_paths = {path for path in preview_roots if path.is_absolute()}
self._preview_root_paths = {
path for path in preview_roots if path.is_absolute()
}
logger.debug(
"Preview roots rebuilt: %d paths from %d lora roots (%d extra), %d checkpoint roots (%d extra), %d embedding roots (%d extra), %d symlink mappings",
len(self._preview_root_paths),
len(self.loras_roots or []), len(self.extra_loras_roots or []),
len(self.base_models_roots or []), len(self.extra_checkpoints_roots or []),
len(self.embeddings_roots or []), len(self.extra_embeddings_roots or []),
len(self.loras_roots or []),
len(self.extra_loras_roots or []),
len(self.base_models_roots or []),
len(self.extra_checkpoints_roots or []),
len(self.embeddings_roots or []),
len(self.extra_embeddings_roots or []),
len(self._path_mappings),
)
def map_path_to_link(self, path: str) -> str:
"""Map a target path back to its symbolic link path"""
normalized_path = os.path.normpath(path).replace(os.sep, '/')
normalized_path = os.path.normpath(path).replace(os.sep, "/")
# Check if the path is contained in any mapped target path
for target_path, link_path in self._path_mappings.items():
# Match whole path components to avoid prefix collisions (e.g., /a/b vs /a/bc)
if normalized_path == target_path:
return link_path
if normalized_path.startswith(target_path + '/'):
if normalized_path.startswith(target_path + "/"):
# If the path starts with the target path, replace with link path
mapped_path = normalized_path.replace(target_path, link_path, 1)
return mapped_path
@@ -621,14 +766,14 @@ class Config:
def map_link_to_path(self, link_path: str) -> str:
"""Map a symbolic link path back to the actual path"""
normalized_link = os.path.normpath(link_path).replace(os.sep, '/')
normalized_link = os.path.normpath(link_path).replace(os.sep, "/")
# Check if the path is contained in any mapped target path
for target_path, link_path_mapped in self._path_mappings.items():
# Match whole path components
if normalized_link == link_path_mapped:
return target_path
if normalized_link.startswith(link_path_mapped + '/'):
if normalized_link.startswith(link_path_mapped + "/"):
# If the path starts with the link path, replace with actual path
mapped_path = normalized_link.replace(link_path_mapped, target_path, 1)
return mapped_path
@@ -641,8 +786,8 @@ class Config:
continue
if not os.path.exists(path):
continue
real_path = os.path.normpath(os.path.realpath(path)).replace(os.sep, '/')
normalized = os.path.normpath(path).replace(os.sep, '/')
real_path = os.path.normpath(os.path.realpath(path)).replace(os.sep, "/")
normalized = os.path.normpath(path).replace(os.sep, "/")
if real_path not in dedup:
dedup[real_path] = normalized
return dedup
@@ -652,15 +797,139 @@ class Config:
unique_paths = sorted(path_map.values(), key=lambda p: p.lower())
for original_path in unique_paths:
real_path = os.path.normpath(os.path.realpath(original_path)).replace(os.sep, '/')
real_path = os.path.normpath(os.path.realpath(original_path)).replace(
os.sep, "/"
)
if real_path != original_path:
self.add_path_mapping(original_path, real_path)
return unique_paths
@staticmethod
def _normalize_path_for_comparison(
path: str, *, resolve_realpath: bool = False
) -> str:
"""Normalize a path for equality checks across platforms."""
candidate = os.path.realpath(path) if resolve_realpath else path
return os.path.normcase(os.path.normpath(candidate)).replace(os.sep, "/")
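# Illustrative only, not part of the diff: with resolve_realpath=True the
# comparison key collapses symlinks, so a ComfyUI root and an Extra Folder
# Paths entry that point at the same physical directory normalize to the same
# key and the duplicate extra entry is dropped (with a warning) by
# _filter_overlapping_extra_lora_paths below.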
def _filter_overlapping_extra_lora_paths(
self,
primary_paths: Iterable[str],
extra_paths: Iterable[str],
) -> List[str]:
"""Drop extra LoRA paths that resolve to the same physical location as primary roots."""
primary_map = {
self._normalize_path_for_comparison(path, resolve_realpath=True): path
for path in primary_paths
if isinstance(path, str) and path.strip() and os.path.exists(path)
}
primary_symlink_map = self._collect_first_level_symlink_targets(primary_paths)
filtered: List[str] = []
for original_path in extra_paths:
if not isinstance(original_path, str):
continue
stripped = original_path.strip()
if not stripped:
continue
if not os.path.exists(stripped):
continue
real_path = self._normalize_path_for_comparison(
stripped,
resolve_realpath=True,
)
normalized_path = os.path.normpath(stripped).replace(os.sep, "/")
primary_path = primary_map.get(real_path)
if primary_path:
# Config loading should stay tolerant of existing invalid state and warn.
logger.warning(
"Detected the same LoRA folder in both ComfyUI model paths and "
"LoRA Manager Extra Folder Paths. This can cause duplicate items or "
"other unexpected behavior, and it usually means the path setup is "
"not doing what you intended. LoRA Manager will keep the ComfyUI "
"path and ignore this Extra Folder Paths entry: '%s'. Please review "
"your path settings and remove the duplicate entry.",
normalized_path,
)
continue
symlink_path = primary_symlink_map.get(real_path)
if symlink_path:
# Config loading should stay tolerant of existing invalid state and warn.
logger.warning(
"Detected the same LoRA folder in both ComfyUI model paths and "
"LoRA Manager Extra Folder Paths. This can cause duplicate items or "
"other unexpected behavior, and it usually means the path setup is "
"not doing what you intended. LoRA Manager will keep the ComfyUI "
"path and ignore this Extra Folder Paths entry: '%s'. Please review "
"your path settings and remove the duplicate entry.",
normalized_path,
)
continue
filtered.append(stripped)
return filtered
def _collect_first_level_symlink_targets(
self, roots: Iterable[str]
) -> Dict[str, str]:
"""Return real-path -> link-path mappings for first-level symlinks under the given roots."""
targets: Dict[str, str] = {}
for root in roots:
if not isinstance(root, str):
continue
stripped_root = root.strip()
if not stripped_root or not os.path.isdir(stripped_root):
continue
try:
with os.scandir(stripped_root) as iterator:
for entry in iterator:
try:
if not self._entry_is_symlink(entry):
continue
target_path = os.path.realpath(entry.path)
if not os.path.isdir(target_path):
continue
normalized_target = self._normalize_path_for_comparison(
target_path,
resolve_realpath=True,
)
normalized_link = os.path.normpath(entry.path).replace(
os.sep, "/"
)
targets.setdefault(normalized_target, normalized_link)
except Exception as inner_exc:
logger.debug(
"Error collecting LoRA symlink target for %s: %s",
entry.path,
inner_exc,
)
except Exception as exc:
logger.debug(
"Error scanning first-level LoRA symlinks in %s: %s",
stripped_root,
exc,
)
return targets
def _prepare_checkpoint_paths(
self, checkpoint_paths: Iterable[str], unet_paths: Iterable[str]
) -> List[str]:
) -> Tuple[List[str], List[str], List[str]]:
"""Prepare checkpoint paths and return (all_roots, checkpoint_roots, unet_roots).
Returns:
Tuple of (all_unique_paths, checkpoint_only_paths, unet_only_paths)
This method does NOT modify instance variables - callers must set them.
"""
checkpoint_map = self._dedupe_existing_paths(checkpoint_paths)
unet_map = self._dedupe_existing_paths(unet_paths)
@@ -674,7 +943,7 @@ class Config:
"Please fix your ComfyUI path configuration to separate these folders. "
"Falling back to 'checkpoints' for backward compatibility. "
"Overlapping real paths: %s",
[checkpoint_map.get(rp, rp) for rp in overlapping_real_paths]
[checkpoint_map.get(rp, rp) for rp in overlapping_real_paths],
)
# Remove overlapping paths from unet_map to prioritize checkpoints
for rp in overlapping_real_paths:
@@ -690,22 +959,26 @@ class Config:
checkpoint_values = set(checkpoint_map.values())
unet_values = set(unet_map.values())
self.checkpoints_roots = [p for p in unique_paths if p in checkpoint_values]
self.unet_roots = [p for p in unique_paths if p in unet_values]
checkpoint_roots = [p for p in unique_paths if p in checkpoint_values]
unet_roots = [p for p in unique_paths if p in unet_values]
for original_path in unique_paths:
real_path = os.path.normpath(os.path.realpath(original_path)).replace(os.sep, '/')
real_path = os.path.normpath(os.path.realpath(original_path)).replace(
os.sep, "/"
)
if real_path != original_path:
self.add_path_mapping(original_path, real_path)
return unique_paths
return unique_paths, checkpoint_roots, unet_roots
def _prepare_embedding_paths(self, raw_paths: Iterable[str]) -> List[str]:
path_map = self._dedupe_existing_paths(raw_paths)
unique_paths = sorted(path_map.values(), key=lambda p: p.lower())
for original_path in unique_paths:
real_path = os.path.normpath(os.path.realpath(original_path)).replace(os.sep, '/')
real_path = os.path.normpath(os.path.realpath(original_path)).replace(
os.sep, "/"
)
if real_path != original_path:
self.add_path_mapping(original_path, real_path)
@@ -715,32 +988,71 @@ class Config:
self,
folder_paths: Mapping[str, Iterable[str]],
extra_folder_paths: Optional[Mapping[str, Iterable[str]]] = None,
recipes_path: str = "",
) -> None:
self._path_mappings.clear()
self._preview_root_paths = set()
self.recipes_path = recipes_path if isinstance(recipes_path, str) else ""
lora_paths = folder_paths.get('loras', []) or []
checkpoint_paths = folder_paths.get('checkpoints', []) or []
unet_paths = folder_paths.get('unet', []) or []
embedding_paths = folder_paths.get('embeddings', []) or []
lora_paths = folder_paths.get("loras", []) or []
checkpoint_paths = folder_paths.get("checkpoints", []) or []
unet_paths = folder_paths.get("unet", []) or []
embedding_paths = folder_paths.get("embeddings", []) or []
self.loras_roots = self._prepare_lora_paths(lora_paths)
self.base_models_roots = self._prepare_checkpoint_paths(checkpoint_paths, unet_paths)
(
self.base_models_roots,
self.checkpoints_roots,
self.unet_roots,
) = self._prepare_checkpoint_paths(checkpoint_paths, unet_paths)
self.embeddings_roots = self._prepare_embedding_paths(embedding_paths)
# Process extra paths (only for LoRA Manager, not shared with ComfyUI)
extra_paths = extra_folder_paths or {}
extra_lora_paths = extra_paths.get('loras', []) or []
extra_checkpoint_paths = extra_paths.get('checkpoints', []) or []
extra_unet_paths = extra_paths.get('unet', []) or []
extra_embedding_paths = extra_paths.get('embeddings', []) or []
extra_lora_paths = extra_paths.get("loras", []) or []
extra_checkpoint_paths = extra_paths.get("checkpoints", []) or []
extra_unet_paths = extra_paths.get("unet", []) or []
extra_embedding_paths = extra_paths.get("embeddings", []) or []
self.extra_loras_roots = self._prepare_lora_paths(extra_lora_paths)
self.extra_checkpoints_roots = self._prepare_checkpoint_paths(extra_checkpoint_paths, extra_unet_paths)
self.extra_embeddings_roots = self._prepare_embedding_paths(extra_embedding_paths)
# extra_unet_roots is set by _prepare_checkpoint_paths (access unet_roots before it's reset)
unet_roots_value: List[str] = getattr(self, 'unet_roots', None) or []
self.extra_unet_roots = unet_roots_value
filtered_extra_lora_paths = self._filter_overlapping_extra_lora_paths(
self.loras_roots,
extra_lora_paths,
)
self.extra_loras_roots = self._prepare_lora_paths(filtered_extra_lora_paths)
(
_,
self.extra_checkpoints_roots,
self.extra_unet_roots,
) = self._prepare_checkpoint_paths(extra_checkpoint_paths, extra_unet_paths)
self.extra_embeddings_roots = self._prepare_embedding_paths(
extra_embedding_paths
)
# Log extra folder paths
if self.extra_loras_roots:
logger.info(
"Found extra LoRA roots:"
+ "\n - "
+ "\n - ".join(self.extra_loras_roots)
)
if self.extra_checkpoints_roots:
logger.info(
"Found extra checkpoint roots:"
+ "\n - "
+ "\n - ".join(self.extra_checkpoints_roots)
)
if self.extra_unet_roots:
logger.info(
"Found extra diffusion model roots:"
+ "\n - "
+ "\n - ".join(self.extra_unet_roots)
)
if self.extra_embeddings_roots:
logger.info(
"Found extra embedding roots:"
+ "\n - "
+ "\n - ".join(self.extra_embeddings_roots)
)
self._initialize_symlink_mappings()
@@ -749,7 +1061,10 @@ class Config:
try:
raw_paths = folder_paths.get_folder_paths("loras")
unique_paths = self._prepare_lora_paths(raw_paths)
logger.info("Found LoRA roots:" + ("\n - " + "\n - ".join(unique_paths) if unique_paths else "[]"))
logger.info(
"Found LoRA roots:"
+ ("\n - " + "\n - ".join(unique_paths) if unique_paths else "[]")
)
if not unique_paths:
logger.warning("No valid loras folders found in ComfyUI configuration")
@@ -765,12 +1080,21 @@ class Config:
try:
raw_checkpoint_paths = folder_paths.get_folder_paths("checkpoints")
raw_unet_paths = folder_paths.get_folder_paths("unet")
unique_paths = self._prepare_checkpoint_paths(raw_checkpoint_paths, raw_unet_paths)
(
unique_paths,
self.checkpoints_roots,
self.unet_roots,
) = self._prepare_checkpoint_paths(raw_checkpoint_paths, raw_unet_paths)
logger.info("Found checkpoint roots:" + ("\n - " + "\n - ".join(unique_paths) if unique_paths else "[]"))
logger.info(
"Found checkpoint roots:"
+ ("\n - " + "\n - ".join(unique_paths) if unique_paths else "[]")
)
if not unique_paths:
logger.warning("No valid checkpoint folders found in ComfyUI configuration")
logger.warning(
"No valid checkpoint folders found in ComfyUI configuration"
)
return []
return unique_paths
@@ -783,10 +1107,15 @@ class Config:
try:
raw_paths = folder_paths.get_folder_paths("embeddings")
unique_paths = self._prepare_embedding_paths(raw_paths)
logger.info("Found embedding roots:" + ("\n - " + "\n - ".join(unique_paths) if unique_paths else "[]"))
logger.info(
"Found embedding roots:"
+ ("\n - " + "\n - ".join(unique_paths) if unique_paths else "[]")
)
if not unique_paths:
logger.warning("No valid embeddings folders found in ComfyUI configuration")
logger.warning(
"No valid embeddings folders found in ComfyUI configuration"
)
return []
return unique_paths
@@ -798,9 +1127,9 @@ class Config:
if not preview_path:
return ""
normalized = os.path.normpath(preview_path).replace(os.sep, '/')
encoded_path = urllib.parse.quote(normalized, safe='')
return f'/api/lm/previews?path={encoded_path}'
normalized = os.path.normpath(preview_path).replace(os.sep, "/")
encoded_path = urllib.parse.quote(normalized, safe="")
return f"/api/lm/previews?path={encoded_path}"
def is_preview_path_allowed(self, preview_path: str) -> bool:
"""Return ``True`` if ``preview_path`` is within an allowed directory.
@@ -875,14 +1204,18 @@ class Config:
normalized_link = self._normalize_path(str(current))
self._path_mappings[normalized_target] = normalized_link
self._preview_root_paths.update(self._expand_preview_root(normalized_target))
self._preview_root_paths.update(self._expand_preview_root(normalized_link))
self._preview_root_paths.update(
self._expand_preview_root(normalized_target)
)
self._preview_root_paths.update(
self._expand_preview_root(normalized_link)
)
logger.debug(
"Discovered deep symlink: %s -> %s (preview path: %s)",
normalized_link,
normalized_target,
preview_path
preview_path,
)
return True
@@ -900,20 +1233,36 @@ class Config:
def apply_library_settings(self, library_config: Mapping[str, object]) -> None:
"""Update runtime paths to match the provided library configuration."""
folder_paths = library_config.get('folder_paths') if isinstance(library_config, Mapping) else {}
extra_folder_paths = library_config.get('extra_folder_paths') if isinstance(library_config, Mapping) else None
folder_paths = (
library_config.get("folder_paths")
if isinstance(library_config, Mapping)
else {}
)
extra_folder_paths = (
library_config.get("extra_folder_paths")
if isinstance(library_config, Mapping)
else None
)
if not isinstance(folder_paths, Mapping):
folder_paths = {}
if not isinstance(extra_folder_paths, Mapping):
extra_folder_paths = None
self._apply_library_paths(folder_paths, extra_folder_paths)
recipes_path = (
str(library_config.get("recipes_path", ""))
if isinstance(library_config, Mapping)
else ""
)
self._apply_library_paths(folder_paths, extra_folder_paths, recipes_path)
logger.info(
"Applied library settings with %d lora roots (%d extra), %d checkpoint roots (%d extra), and %d embedding roots (%d extra)",
len(self.loras_roots or []), len(self.extra_loras_roots or []),
len(self.base_models_roots or []), len(self.extra_checkpoints_roots or []),
len(self.embeddings_roots or []), len(self.extra_embeddings_roots or []),
len(self.loras_roots or []),
len(self.extra_loras_roots or []),
len(self.base_models_roots or []),
len(self.extra_checkpoints_roots or []),
len(self.embeddings_roots or []),
len(self.extra_embeddings_roots or []),
)
def get_library_registry_snapshot(self) -> Dict[str, object]:
@@ -933,5 +1282,6 @@ class Config:
logger.debug("Failed to collect library registry snapshot: %s", exc)
return {"active_library": "", "libraries": {}}
# Global config instance
config = Config()

View File

@@ -5,16 +5,22 @@ import logging
from .utils.logging_config import setup_logging
# Check if we're in standalone mode
standalone_mode = os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1" or os.environ.get("HF_HUB_DISABLE_TELEMETRY", "0") == "0"
standalone_mode = (
os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1"
or os.environ.get("HF_HUB_DISABLE_TELEMETRY", "0") == "0"
)
# Only setup logging prefix if not in standalone mode
if not standalone_mode:
setup_logging()
from server import PromptServer # type: ignore
from server import PromptServer # type: ignore
from .config import config
from .services.model_service_factory import ModelServiceFactory, register_default_model_types
from .services.model_service_factory import (
ModelServiceFactory,
register_default_model_types,
)
from .routes.recipe_routes import RecipeRoutes
from .routes.stats_routes import StatsRoutes
from .routes.update_routes import UpdateRoutes
@@ -61,6 +67,7 @@ class _SettingsProxy:
settings = _SettingsProxy()
class LoraManager:
"""Main entry point for LoRA Manager plugin"""
@@ -76,7 +83,8 @@ class LoraManager:
(
idx
for idx, middleware in enumerate(app.middlewares)
if getattr(middleware, "__name__", "") == "block_external_middleware"
if getattr(middleware, "__name__", "")
== "block_external_middleware"
),
None,
)
@@ -84,7 +92,9 @@ class LoraManager:
if block_middleware_index is None:
app.middlewares.append(relax_csp_for_remote_media)
else:
app.middlewares.insert(block_middleware_index, relax_csp_for_remote_media)
app.middlewares.insert(
block_middleware_index, relax_csp_for_remote_media
)
# Increase allowed header sizes so browsers with large localhost cookie
# jars (multiple UIs on 127.0.0.1) don't trip aiohttp's 8KB default
@@ -105,7 +115,7 @@ class LoraManager:
app._handler_args = updated_handler_args
# Configure aiohttp access logger to be less verbose
logging.getLogger('aiohttp.access').setLevel(logging.WARNING)
logging.getLogger("aiohttp.access").setLevel(logging.WARNING)
# Add specific suppression for connection reset errors
class ConnectionResetFilter(logging.Filter):
@@ -124,19 +134,23 @@ class LoraManager:
asyncio_logger.addFilter(ConnectionResetFilter())
# Add static route for example images if the path exists in settings
example_images_path = settings.get('example_images_path')
example_images_path = settings.get("example_images_path")
logger.info(f"Example images path: {example_images_path}")
if example_images_path and os.path.exists(example_images_path):
app.router.add_static('/example_images_static', example_images_path)
logger.info(f"Added static route for example images: /example_images_static -> {example_images_path}")
app.router.add_static("/example_images_static", example_images_path)
logger.info(
f"Added static route for example images: /example_images_static -> {example_images_path}"
)
# Add static route for locales JSON files
if os.path.exists(config.i18n_path):
app.router.add_static('/locales', config.i18n_path)
logger.info(f"Added static route for locales: /locales -> {config.i18n_path}")
app.router.add_static("/locales", config.i18n_path)
logger.info(
f"Added static route for locales: /locales -> {config.i18n_path}"
)
# Add static route for plugin assets
app.router.add_static('/loras_static', config.static_path)
app.router.add_static("/loras_static", config.static_path)
# Register default model types with the factory
register_default_model_types()
@@ -154,9 +168,11 @@ class LoraManager:
PreviewRoutes.setup_routes(app)
# Setup WebSocket routes that are shared across all model types
app.router.add_get('/ws/fetch-progress', ws_manager.handle_connection)
app.router.add_get('/ws/download-progress', ws_manager.handle_download_connection)
app.router.add_get('/ws/init-progress', ws_manager.handle_init_connection)
app.router.add_get("/ws/fetch-progress", ws_manager.handle_connection)
app.router.add_get(
"/ws/download-progress", ws_manager.handle_download_connection
)
app.router.add_get("/ws/init-progress", ws_manager.handle_init_connection)
# Schedule service initialization
app.on_startup.append(lambda app: cls._initialize_services())
@@ -168,13 +184,48 @@ class LoraManager:
async def _initialize_services(cls):
"""Initialize all services using the ServiceRegistry"""
try:
# Apply library settings to load extra folder paths before scanning
# Only apply if extra paths haven't been loaded yet (preserves test mocks)
try:
from .services.settings_manager import get_settings_manager
settings_manager = get_settings_manager()
library_name = settings_manager.get_active_library_name()
libraries = settings_manager.get_libraries()
if library_name and library_name in libraries:
library_config = libraries[library_name]
# Only apply settings if extra paths are not already configured
# This preserves values set by tests via monkeypatch
extra_paths = library_config.get("extra_folder_paths", {})
has_extra_paths = (
config.extra_loras_roots
or config.extra_checkpoints_roots
or config.extra_unet_roots
or config.extra_embeddings_roots
)
if not has_extra_paths and any(extra_paths.values()):
config.apply_library_settings(library_config)
logger.info(
"Applied library settings for '%s' with extra paths: loras=%s, checkpoints=%s, embeddings=%s",
library_name,
extra_paths.get("loras", []),
extra_paths.get("checkpoints", []),
extra_paths.get("embeddings", []),
)
except Exception as exc:
logger.warning(
"Failed to apply library settings during initialization: %s", exc
)
# Initialize CivitaiClient first to ensure it's ready for other services
await ServiceRegistry.get_civitai_client()
# Register DownloadManager with ServiceRegistry
await ServiceRegistry.get_download_manager()
await ServiceRegistry.get_backup_service()
from .services.metadata_service import initialize_metadata_providers
await initialize_metadata_providers()
# Initialize WebSocket manager
@@ -190,39 +241,58 @@ class LoraManager:
# Create low-priority initialization tasks
init_tasks = [
asyncio.create_task(lora_scanner.initialize_in_background(), name='lora_cache_init'),
asyncio.create_task(checkpoint_scanner.initialize_in_background(), name='checkpoint_cache_init'),
asyncio.create_task(embedding_scanner.initialize_in_background(), name='embedding_cache_init'),
asyncio.create_task(recipe_scanner.initialize_in_background(), name='recipe_cache_init')
asyncio.create_task(
lora_scanner.initialize_in_background(), name="lora_cache_init"
),
asyncio.create_task(
checkpoint_scanner.initialize_in_background(),
name="checkpoint_cache_init",
),
asyncio.create_task(
embedding_scanner.initialize_in_background(),
name="embedding_cache_init",
),
asyncio.create_task(
recipe_scanner.initialize_in_background(), name="recipe_cache_init"
),
]
await ExampleImagesMigration.check_and_run_migrations()
# Schedule post-initialization tasks to run after scanners complete
asyncio.create_task(
cls._run_post_initialization_tasks(init_tasks),
name='post_init_tasks'
cls._run_post_initialization_tasks(init_tasks), name="post_init_tasks"
)
logger.debug("LoRA Manager: All services initialized and background tasks scheduled")
logger.debug(
"LoRA Manager: All services initialized and background tasks scheduled"
)
except Exception as e:
logger.error(f"LoRA Manager: Error initializing services: {e}", exc_info=True)
logger.error(
f"LoRA Manager: Error initializing services: {e}", exc_info=True
)
@classmethod
async def _run_post_initialization_tasks(cls, init_tasks):
"""Run post-initialization tasks after all scanners complete"""
try:
logger.debug("LoRA Manager: Waiting for scanner initialization to complete...")
logger.debug(
"LoRA Manager: Waiting for scanner initialization to complete..."
)
# Wait for all scanner initialization tasks to complete
await asyncio.gather(*init_tasks, return_exceptions=True)
logger.debug("LoRA Manager: Scanner initialization completed, starting post-initialization tasks...")
logger.debug(
"LoRA Manager: Scanner initialization completed, starting post-initialization tasks..."
)
# Run post-initialization tasks
post_tasks = [
asyncio.create_task(cls._cleanup_backup_files(), name='cleanup_bak_files'),
asyncio.create_task(
cls._cleanup_backup_files(), name="cleanup_bak_files"
),
# Add more post-initialization tasks here as needed
# asyncio.create_task(cls._another_post_task(), name='another_task'),
]
@@ -234,14 +304,20 @@ class LoraManager:
for i, result in enumerate(results):
task_name = post_tasks[i].get_name()
if isinstance(result, Exception):
logger.error(f"Post-initialization task '{task_name}' failed: {result}")
logger.error(
f"Post-initialization task '{task_name}' failed: {result}"
)
else:
logger.debug(f"Post-initialization task '{task_name}' completed successfully")
logger.debug(
f"Post-initialization task '{task_name}' completed successfully"
)
logger.debug("LoRA Manager: All post-initialization tasks completed")
except Exception as e:
logger.error(f"LoRA Manager: Error in post-initialization tasks: {e}", exc_info=True)
logger.error(
f"LoRA Manager: Error in post-initialization tasks: {e}", exc_info=True
)
@classmethod
async def _cleanup_backup_files(cls):
@@ -252,8 +328,8 @@ class LoraManager:
# Collect all model roots
all_roots = set()
all_roots.update(config.loras_roots)
all_roots.update(config.base_models_roots)
all_roots.update(config.embeddings_roots)
all_roots.update(config.base_models_roots or [])
all_roots.update(config.embeddings_roots or [])
total_deleted = 0
total_size_freed = 0
@@ -263,12 +339,17 @@ class LoraManager:
continue
try:
deleted_count, size_freed = await cls._cleanup_backup_files_in_directory(root_path)
(
deleted_count,
size_freed,
) = await cls._cleanup_backup_files_in_directory(root_path)
total_deleted += deleted_count
total_size_freed += size_freed
if deleted_count > 0:
logger.debug(f"Cleaned up {deleted_count} .bak files in {root_path} (freed {size_freed / (1024*1024):.2f} MB)")
logger.debug(
f"Cleaned up {deleted_count} .bak files in {root_path} (freed {size_freed / (1024 * 1024):.2f} MB)"
)
except Exception as e:
logger.error(f"Error cleaning up .bak files in {root_path}: {e}")
@@ -277,7 +358,9 @@ class LoraManager:
await asyncio.sleep(0.01)
if total_deleted > 0:
logger.debug(f"Backup cleanup completed: removed {total_deleted} .bak files, freed {total_size_freed / (1024*1024):.2f} MB total")
logger.debug(
f"Backup cleanup completed: removed {total_deleted} .bak files, freed {total_size_freed / (1024 * 1024):.2f} MB total"
)
else:
logger.debug("Backup cleanup completed: no .bak files found")
@@ -310,7 +393,9 @@ class LoraManager:
with os.scandir(path) as it:
for entry in it:
try:
if entry.is_file(follow_symlinks=True) and entry.name.endswith('.bak'):
if entry.is_file(
follow_symlinks=True
) and entry.name.endswith(".bak"):
file_size = entry.stat().st_size
os.remove(entry.path)
deleted_count += 1
@@ -321,7 +406,9 @@ class LoraManager:
cleanup_recursive(entry.path)
except Exception as e:
logger.warning(f"Could not delete .bak file {entry.path}: {e}")
logger.warning(
f"Could not delete .bak file {entry.path}: {e}"
)
except Exception as e:
logger.error(f"Error scanning directory {path} for .bak files: {e}")
@@ -339,21 +426,21 @@ class LoraManager:
service = ExampleImagesCleanupService()
result = await service.cleanup_example_image_folders()
if result.get('success'):
if result.get("success"):
logger.debug(
"Manual example images cleanup completed: moved=%s",
result.get('moved_total'),
result.get("moved_total"),
)
elif result.get('partial_success'):
elif result.get("partial_success"):
logger.warning(
"Manual example images cleanup partially succeeded: moved=%s failures=%s",
result.get('moved_total'),
result.get('move_failures'),
result.get("moved_total"),
result.get("move_failures"),
)
else:
logger.debug(
"Manual example images cleanup skipped or failed: %s",
result.get('error', 'no changes'),
result.get("error", "no changes"),
)
return result
@@ -361,9 +448,9 @@ class LoraManager:
except Exception as e: # pragma: no cover - defensive guard
logger.error(f"Error during example images cleanup: {e}", exc_info=True)
return {
'success': False,
'error': str(e),
'error_code': 'unexpected_error',
"success": False,
"error": str(e),
"error_code": "unexpected_error",
}
@classmethod

View File

@@ -4,7 +4,10 @@ import logging
logger = logging.getLogger(__name__)
# Check if running in standalone mode
standalone_mode = os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1" or os.environ.get("HF_HUB_DISABLE_TELEMETRY", "0") == "0"
standalone_mode = (
os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1"
or os.environ.get("HF_HUB_DISABLE_TELEMETRY", "0") == "0"
)
if not standalone_mode:
from .metadata_hook import MetadataHook
@@ -19,7 +22,7 @@ if not standalone_mode:
logger.info("ComfyUI Metadata Collector initialized")
def get_metadata(prompt_id=None):
def get_metadata(prompt_id=None): # type: ignore[no-redef]
"""Helper function to get metadata from the registry"""
registry = MetadataRegistry()
return registry.get_metadata(prompt_id)
@@ -28,6 +31,6 @@ else:
def init():
logger.info("ComfyUI Metadata Collector disabled in standalone mode")
def get_metadata(prompt_id=None):
def get_metadata(prompt_id=None): # type: ignore[no-redef]
"""Dummy implementation for standalone mode"""
return {}

View File

@@ -149,9 +149,12 @@ class MetadataHook:
# Store the original _async_map_node_over_list function
original_map_node_over_list = getattr(execution, map_node_func_name)
# Wrapped async function, compatible with both stable and nightly
async def async_map_node_over_list_with_metadata(prompt_id, unique_id, obj, input_data_all, func, allow_interrupt=False, execution_block_cb=None, pre_execute_cb=None, *args, **kwargs):
hidden_inputs = kwargs.get('hidden_inputs', None)
# Wrapped async function - signature must exactly match _async_map_node_over_list
async def async_map_node_over_list_with_metadata(
prompt_id, unique_id, obj, input_data_all, func,
allow_interrupt=False, execution_block_cb=None,
pre_execute_cb=None, v3_data=None
):
# Only collect metadata when calling the main function of nodes
if func == obj.FUNCTION and hasattr(obj, '__class__'):
try:
@@ -164,10 +167,10 @@ class MetadataHook:
except Exception as e:
logger.error(f"Error collecting metadata (pre-execution): {str(e)}")
# Call original function with all args/kwargs
# Call original function with exact parameters
results = await original_map_node_over_list(
prompt_id, unique_id, obj, input_data_all, func,
allow_interrupt, execution_block_cb, pre_execute_cb, *args, **kwargs
allow_interrupt, execution_block_cb, pre_execute_cb, v3_data=v3_data
)
if func == obj.FUNCTION and hasattr(obj, '__class__'):

View File

@@ -353,49 +353,100 @@ class MetadataProcessor:
# Check if we have stored conditioning objects for this sampler
if sampler_id in metadata.get(PROMPTS, {}) and (
"pos_conditioning" in metadata[PROMPTS][sampler_id] or
"neg_conditioning" in metadata[PROMPTS][sampler_id]):
"neg_conditioning" in metadata[PROMPTS][sampler_id]
):
pos_conditioning = metadata[PROMPTS][sampler_id].get("pos_conditioning")
neg_conditioning = metadata[PROMPTS][sampler_id].get("neg_conditioning")
# Helper function to recursively find prompt text for a conditioning object
def find_prompt_text_for_conditioning(conditioning_obj, is_positive=True):
def extend_unique(target, values):
for value in values:
if value and value not in target:
target.append(value)
# Helper function to recursively find prompt texts for a conditioning object.
# Transform nodes can map one output conditioning to multiple source conditionings.
def find_prompt_texts_for_conditioning(
conditioning_obj, is_positive=True, visited=None
):
if conditioning_obj is None:
return ""
return []
if visited is None:
visited = set()
conditioning_id = id(conditioning_obj)
if conditioning_id in visited:
return []
visited.add(conditioning_id)
prompt_texts = []
# Try to match conditioning objects with those stored by extractors
for prompt_node_id, prompt_data in metadata[PROMPTS].items():
# For nodes with single conditioning output
if "conditioning" in prompt_data:
if id(prompt_data["conditioning"]) == id(conditioning_obj):
return prompt_data.get("text", "")
if not isinstance(prompt_data, dict):
continue
# For nodes with separate pos_conditioning and neg_conditioning outputs (like TSC_EfficientLoader)
if is_positive and "positive_encoded" in prompt_data:
if id(prompt_data["positive_encoded"]) == id(conditioning_obj):
if "positive_text" in prompt_data:
return prompt_data["positive_text"]
else:
orig_conditioning = prompt_data.get("orig_pos_cond", None)
if orig_conditioning is not None:
# Recursively find the prompt text for the original conditioning
return find_prompt_text_for_conditioning(orig_conditioning, is_positive=True)
# For CLIP text nodes with a single conditioning output.
if id(prompt_data.get("conditioning")) == conditioning_id:
text = prompt_data.get("text", "")
if text:
extend_unique(prompt_texts, [text])
if not is_positive and "negative_encoded" in prompt_data:
if id(prompt_data["negative_encoded"]) == id(conditioning_obj):
if "negative_text" in prompt_data:
return prompt_data["negative_text"]
else:
orig_conditioning = prompt_data.get("orig_neg_cond", None)
if orig_conditioning is not None:
# Recursively find the prompt text for the original conditioning
return find_prompt_text_for_conditioning(orig_conditioning, is_positive=False)
# Generic provenance for passthrough/transform/combine nodes.
for source in prompt_data.get("conditioning_sources", []):
if id(source.get("output")) != conditioning_id:
continue
for input_conditioning in source.get("inputs", []):
extend_unique(
prompt_texts,
find_prompt_texts_for_conditioning(
input_conditioning, is_positive, visited
),
)
return ""
# For nodes with separate pos_conditioning and neg_conditioning outputs
# like TSC_EfficientLoader and existing ControlNet-style metadata.
if (
is_positive
and id(prompt_data.get("positive_encoded")) == conditioning_id
):
if prompt_data.get("positive_text"):
extend_unique(prompt_texts, [prompt_data["positive_text"]])
else:
extend_unique(
prompt_texts,
find_prompt_texts_for_conditioning(
prompt_data.get("orig_pos_cond"),
is_positive=True,
visited=visited,
),
)
if (
not is_positive
and id(prompt_data.get("negative_encoded")) == conditioning_id
):
if prompt_data.get("negative_text"):
extend_unique(prompt_texts, [prompt_data["negative_text"]])
else:
extend_unique(
prompt_texts,
find_prompt_texts_for_conditioning(
prompt_data.get("orig_neg_cond"),
is_positive=False,
visited=visited,
),
)
return prompt_texts
# Find prompt texts using the helper function
result["prompt"] = find_prompt_text_for_conditioning(pos_conditioning, is_positive=True)
result["negative_prompt"] = find_prompt_text_for_conditioning(neg_conditioning, is_positive=False)
result["prompt"] = ", ".join(
find_prompt_texts_for_conditioning(pos_conditioning, is_positive=True)
)
result["negative_prompt"] = ", ".join(
find_prompt_texts_for_conditioning(neg_conditioning, is_positive=False)
)
return result
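A minimal sketch of the id()-based provenance lookup used above, with hypothetical node ids and conditioning placeholders rather than the project's real metadata structures: a combine/transform node records a "conditioning_sources" entry, and the recursive search resolves its output back to both upstream prompt texts while a visited set prevents cycles.

cond_a, cond_b, combined = object(), object(), object()
prompts = {
    "3": {"conditioning": cond_a, "text": "a castle"},
    "4": {"conditioning": cond_b, "text": "sunset"},
    "7": {"conditioning_sources": [{"output": combined, "inputs": [cond_a, cond_b]}]},
}

def texts_for(cond, visited=None):
    # Match by object identity, guard against cycles, follow recorded sources.
    visited = set() if visited is None else visited
    if cond is None or id(cond) in visited:
        return []
    visited.add(id(cond))
    found = []
    for data in prompts.values():
        if id(data.get("conditioning")) == id(cond) and data.get("text"):
            found.append(data["text"])
        for source in data.get("conditioning_sources", []):
            if id(source["output"]) != id(cond):
                continue
            for upstream in source["inputs"]:
                for text in texts_for(upstream, visited):
                    if text not in found:
                        found.append(text)
    return found

print(", ".join(texts_for(combined)))  # -> a castle, sunset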
@@ -509,8 +560,14 @@ class MetadataProcessor:
params["loras"] = " ".join(lora_parts)
# Set default clip_skip value
params["clip_skip"] = "1" # Common default
# Extract clip_skip from any SAMPLING node that provides it
for sampler_info in metadata.get(SAMPLING, {}).values():
clip_skip = sampler_info.get("parameters", {}).get("clip_skip")
if clip_skip is not None:
params["clip_skip"] = clip_skip
break
if params["clip_skip"] is None:
params["clip_skip"] = "1"
return params
@@ -595,6 +652,15 @@ class MetadataProcessor:
if negative_node_id and negative_node_id in metadata.get(PROMPTS, {}):
params["negative_prompt"] = metadata[PROMPTS][negative_node_id].get("text", "")
else:
positive_node_id = MetadataProcessor.trace_node_input(prompt, guider_node_id, "conditioning", max_depth=10)
# Generic guider nodes often expose separate positive/negative inputs.
positive_node_id = MetadataProcessor.trace_node_input(prompt, guider_node_id, "positive", max_depth=10)
if not positive_node_id:
positive_node_id = MetadataProcessor.trace_node_input(prompt, guider_node_id, "conditioning", max_depth=10)
if positive_node_id and positive_node_id in metadata.get(PROMPTS, {}):
params["prompt"] = metadata[PROMPTS][positive_node_id].get("text", "")
negative_node_id = MetadataProcessor.trace_node_input(prompt, guider_node_id, "negative", max_depth=10)
if not negative_node_id:
negative_node_id = MetadataProcessor.trace_node_input(prompt, guider_node_id, "conditioning", max_depth=10)
if negative_node_id and negative_node_id in metadata.get(PROMPTS, {}):
params["negative_prompt"] = metadata[PROMPTS][negative_node_id].get("text", "")

View File

@@ -1,10 +1,12 @@
import time
from nodes import NODE_CLASS_MAPPINGS
from nodes import NODE_CLASS_MAPPINGS # type: ignore
from .node_extractors import NODE_EXTRACTORS, GenericNodeExtractor
from .constants import METADATA_CATEGORIES, IMAGES
class MetadataRegistry:
"""A singleton registry to store and retrieve workflow metadata"""
_instance = None
def __new__(cls):
@@ -37,11 +39,13 @@ class MetadataRegistry:
# Sort all prompt_ids by timestamp
sorted_prompts = sorted(
self.prompt_metadata.keys(),
key=lambda pid: self.prompt_metadata[pid].get("timestamp", 0)
key=lambda pid: self.prompt_metadata[pid].get("timestamp", 0),
)
# Remove oldest records
prompts_to_remove = sorted_prompts[:len(sorted_prompts) - self.max_prompt_history]
prompts_to_remove = sorted_prompts[
: len(sorted_prompts) - self.max_prompt_history
]
for pid in prompts_to_remove:
del self.prompt_metadata[pid]
@@ -53,11 +57,13 @@ class MetadataRegistry:
category: {} for category in METADATA_CATEGORIES
}
# Add additional metadata fields
self.prompt_metadata[prompt_id].update({
"execution_order": [],
"current_prompt": None, # Will store the prompt object
"timestamp": time.time()
})
self.prompt_metadata[prompt_id].update(
{
"execution_order": [],
"current_prompt": None, # Will store the prompt object
"timestamp": time.time(),
}
)
# Clean up old prompt data
self._clean_old_prompts()
@@ -125,7 +131,9 @@ class MetadataRegistry:
for category in self.metadata_categories:
if category in cached_data and node_id in cached_data[category]:
if node_id not in metadata[category]:
metadata[category][node_id] = cached_data[category][node_id]
metadata[category][node_id] = cached_data[category][
node_id
]
def record_node_execution(self, node_id, class_type, inputs, outputs):
"""Record information about a node's execution"""
@@ -135,7 +143,9 @@ class MetadataRegistry:
# Add to execution order and mark as executed
if node_id not in self.executed_nodes:
self.executed_nodes.add(node_id)
self.prompt_metadata[self.current_prompt_id]["execution_order"].append(node_id)
self.prompt_metadata[self.current_prompt_id]["execution_order"].append(
node_id
)
# Process inputs to simplify working with them
processed_inputs = {}
@@ -152,7 +162,7 @@ class MetadataRegistry:
node_id,
processed_inputs,
outputs,
self.prompt_metadata[self.current_prompt_id]
self.prompt_metadata[self.current_prompt_id],
)
# Cache this node's metadata
@@ -168,11 +178,9 @@ class MetadataRegistry:
# Use the same extractor to update with outputs
extractor = NODE_EXTRACTORS.get(class_type, GenericNodeExtractor)
if hasattr(extractor, 'update'):
if hasattr(extractor, "update"):
extractor.update(
node_id,
processed_outputs,
self.prompt_metadata[self.current_prompt_id]
node_id, processed_outputs, self.prompt_metadata[self.current_prompt_id]
)
# Update the cached metadata for this node
@@ -214,7 +222,7 @@ class MetadataRegistry:
# Find cache keys that are no longer needed
keys_to_remove = []
for cache_key in self.node_cache:
node_id = cache_key.split(':')[0]
node_id = cache_key.split(":")[0]
if node_id not in active_node_ids:
keys_to_remove.append(cache_key)
@@ -270,7 +278,10 @@ class MetadataRegistry:
if IMAGES in cached_data and node_id in cached_data[IMAGES]:
image_data = cached_data[IMAGES][node_id]["image"]
# Handle different image formats
if isinstance(image_data, (list, tuple)) and len(image_data) > 0:
if (
isinstance(image_data, (list, tuple))
and len(image_data) > 0
):
return image_data[0]
return image_data

View File

@@ -1,4 +1,6 @@
import json
import os
import re
from .constants import MODELS, PROMPTS, SAMPLING, LORAS, SIZE, IMAGES, IS_SAMPLER
@@ -142,6 +144,118 @@ class TSCCheckpointLoaderExtractor(NodeMetadataExtractor):
metadata[PROMPTS][node_id]["positive_encoded"] = positive_conditioning
metadata[PROMPTS][node_id]["negative_encoded"] = negative_conditioning
class EasyComfyLoaderExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
if not inputs:
return
if "ckpt_name" in inputs:
_store_checkpoint_metadata(metadata, node_id, inputs["ckpt_name"])
# Only extract from optional_lora_stack — skip the single lora_name to
# avoid double-counting LoRAs that come through the LORA_STACK path.
active_loras = []
optional_lora_stack = inputs.get("optional_lora_stack")
if optional_lora_stack is not None and isinstance(optional_lora_stack, (list, tuple)):
for item in optional_lora_stack:
if isinstance(item, (list, tuple)) and len(item) >= 2:
lora_path = item[0]
model_strength = item[1]
lora_name = os.path.splitext(os.path.basename(lora_path))[0]
active_loras.append({
"name": lora_name,
"strength": model_strength
})
if active_loras:
metadata[LORAS][node_id] = {
"lora_list": active_loras,
"node_id": node_id
}
positive_text = inputs.get("positive", "")
negative_text = inputs.get("negative", "")
if positive_text or negative_text:
if node_id not in metadata[PROMPTS]:
metadata[PROMPTS][node_id] = {"node_id": node_id}
metadata[PROMPTS][node_id]["positive_text"] = positive_text
metadata[PROMPTS][node_id]["negative_text"] = negative_text
if "clip_skip" in inputs:
clip_skip = inputs["clip_skip"]
if node_id not in metadata[SAMPLING]:
metadata[SAMPLING][node_id] = {"parameters": {}, "node_id": node_id}
metadata[SAMPLING][node_id]["parameters"]["clip_skip"] = clip_skip
width = inputs.get("empty_latent_width")
height = inputs.get("empty_latent_height")
if width is not None and height is not None:
if SIZE not in metadata:
metadata[SIZE] = {}
metadata[SIZE][node_id] = {
"width": int(width),
"height": int(height),
"node_id": node_id
}
@staticmethod
def update(node_id, outputs, metadata):
# outputs: [(pipe_dict, model, vae), ...]
if not outputs or not isinstance(outputs, list) or len(outputs) == 0:
return
first_output = outputs[0]
if not isinstance(first_output, tuple) or len(first_output) < 1:
return
pipe = first_output[0]
if not isinstance(pipe, dict):
return
positive_conditioning = pipe.get("positive")
negative_conditioning = pipe.get("negative")
if positive_conditioning is not None or negative_conditioning is not None:
if node_id not in metadata[PROMPTS]:
metadata[PROMPTS][node_id] = {"node_id": node_id}
if positive_conditioning is not None:
metadata[PROMPTS][node_id]["positive_encoded"] = positive_conditioning
if negative_conditioning is not None:
metadata[PROMPTS][node_id]["negative_encoded"] = negative_conditioning
class EasyPreSamplingExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
if not inputs:
return
sampling_params = {}
for key in ("steps", "cfg", "sampler_name", "scheduler", "denoise", "seed"):
if key in inputs:
sampling_params[key] = inputs[key]
metadata[SAMPLING][node_id] = {
"parameters": sampling_params,
"node_id": node_id,
IS_SAMPLER: True
}
class EasySeedExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
if not inputs or "seed" not in inputs:
return
metadata[SAMPLING][node_id] = {
"parameters": {"seed": inputs["seed"]},
"node_id": node_id,
IS_SAMPLER: False
}
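For reference, a hypothetical shape of the inputs the Easy-Use extractors above consume (names and values illustrative only); each optional_lora_stack entry is a sequence whose first two items are the LoRA path and the model strength.

inputs = {
    "ckpt_name": "sdxl/base.safetensors",
    "optional_lora_stack": [("styles/ink.safetensors", 0.8, 0.8)],
    "positive": "a castle at dusk",
    "negative": "blurry",
    "clip_skip": -2,
    "empty_latent_width": 1024,
    "empty_latent_height": 1024,
}
# With this input the loader extractor would record the checkpoint, one LoRA
# ("ink" at 0.8), both prompt texts, clip_skip under SAMPLING, and a
# 1024x1024 entry under SIZE.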
class CLIPTextEncodeExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
@@ -161,6 +275,251 @@ class CLIPTextEncodeExtractor(NodeMetadataExtractor):
conditioning = outputs[0][0]
metadata[PROMPTS][node_id]["conditioning"] = conditioning
class MyOriginalWaifuTextExtractor(NodeMetadataExtractor):
"""Extractor for ComfyUI-MyOriginalWaifu TextProvider nodes."""
@staticmethod
def extract(node_id, inputs, outputs, metadata):
if not inputs:
return
positive_text = inputs.get("positive", "")
negative_text = inputs.get("negative", "")
if positive_text or negative_text:
metadata[PROMPTS][node_id] = {
"positive_text": positive_text,
"negative_text": negative_text,
"node_id": node_id,
}
@staticmethod
def update(node_id, outputs, metadata):
output_tuple = _first_output_tuple(outputs)
if not output_tuple or len(output_tuple) < 2:
return
prompt_metadata = _ensure_prompt_metadata(metadata, node_id)
prompt_metadata["positive_text"] = output_tuple[0]
prompt_metadata["negative_text"] = output_tuple[1]
class MyOriginalWaifuClipExtractor(NodeMetadataExtractor):
"""Extractor for ComfyUI-MyOriginalWaifu ClipProvider nodes."""
@staticmethod
def extract(node_id, inputs, outputs, metadata):
if not inputs:
return
positive_text = inputs.get("positive", "")
negative_text = inputs.get("negative", "")
if positive_text or negative_text:
metadata[PROMPTS][node_id] = {
"positive_text": positive_text,
"negative_text": negative_text,
"node_id": node_id,
}
@staticmethod
def update(node_id, outputs, metadata):
output_tuple = _first_output_tuple(outputs)
if not output_tuple or len(output_tuple) < 2:
return
prompt_metadata = _ensure_prompt_metadata(metadata, node_id)
prompt_metadata["positive_encoded"] = output_tuple[0]
prompt_metadata["negative_encoded"] = output_tuple[1]
def _ensure_prompt_metadata(metadata, node_id):
if node_id not in metadata[PROMPTS]:
metadata[PROMPTS][node_id] = {"node_id": node_id}
return metadata[PROMPTS][node_id]
def _first_output_tuple(outputs):
if not outputs or not isinstance(outputs, list) or len(outputs) == 0:
return None
first_output = outputs[0]
if isinstance(first_output, tuple):
return first_output
return None
def _record_conditioning_source(
metadata, node_id, output_conditioning, input_conditionings
):
if output_conditioning is None:
return
sources = [
conditioning for conditioning in input_conditionings if conditioning is not None
]
if not sources:
return
prompt_metadata = _ensure_prompt_metadata(metadata, node_id)
prompt_metadata.setdefault("conditioning_sources", []).append(
{
"output": output_conditioning,
"inputs": sources,
}
)
def _get_variable_name(inputs):
for key in ("key", "name", "variable_name", "tag", "text"):
value = inputs.get(key)
if isinstance(value, str) and value:
return value
return None
def _get_node_variable_name(metadata, node_id, inputs):
variable_name = _get_variable_name(inputs)
if variable_name:
return variable_name
prompt = metadata.get("current_prompt")
original_prompt = getattr(prompt, "original_prompt", None)
if not original_prompt or node_id not in original_prompt:
return None
node_data = original_prompt[node_id]
variable_name = _get_variable_name(node_data.get("inputs", {}))
if variable_name:
return variable_name
widgets_values = node_data.get("widgets_values", [])
if widgets_values and isinstance(widgets_values[0], str):
return widgets_values[0]
return None
class ControlNetApplyAdvancedExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
if not inputs:
return
prompt_metadata = _ensure_prompt_metadata(metadata, node_id)
if inputs.get("positive") is not None:
prompt_metadata["orig_pos_cond"] = inputs["positive"]
if inputs.get("negative") is not None:
prompt_metadata["orig_neg_cond"] = inputs["negative"]
@staticmethod
def update(node_id, outputs, metadata):
output_tuple = _first_output_tuple(outputs)
if not output_tuple:
return
prompt_metadata = _ensure_prompt_metadata(metadata, node_id)
positive_input = prompt_metadata.get("orig_pos_cond")
negative_input = prompt_metadata.get("orig_neg_cond")
if len(output_tuple) >= 1:
prompt_metadata["positive_encoded"] = output_tuple[0]
_record_conditioning_source(
metadata, node_id, output_tuple[0], [positive_input]
)
if len(output_tuple) >= 2:
prompt_metadata["negative_encoded"] = output_tuple[1]
_record_conditioning_source(
metadata, node_id, output_tuple[1], [negative_input]
)
class ConditioningCombineExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
if not inputs:
return
input_conditionings = []
for input_name in inputs:
if (
input_name.startswith("conditioning")
and inputs[input_name] is not None
):
input_conditionings.append(inputs[input_name])
if input_conditionings:
prompt_metadata = _ensure_prompt_metadata(metadata, node_id)
prompt_metadata["orig_conditionings"] = input_conditionings
@staticmethod
def update(node_id, outputs, metadata):
output_tuple = _first_output_tuple(outputs)
if not output_tuple or len(output_tuple) < 1:
return
prompt_metadata = _ensure_prompt_metadata(metadata, node_id)
output_conditioning = output_tuple[0]
prompt_metadata["conditioning"] = output_conditioning
_record_conditioning_source(
metadata,
node_id,
output_conditioning,
prompt_metadata.get("orig_conditionings", []),
)
class SetNodeExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
if not inputs:
return
variable_name = _get_node_variable_name(metadata, node_id, inputs)
conditioning = inputs.get("CONDITIONING")
if conditioning is None:
conditioning = inputs.get("conditioning")
if conditioning is None:
return
prompt_metadata = _ensure_prompt_metadata(metadata, node_id)
prompt_metadata["conditioning"] = conditioning
if variable_name:
prompt_metadata["variable_name"] = variable_name
metadata[PROMPTS].setdefault("__conditioning_variables__", {})[
variable_name
] = conditioning
class GetNodeExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
variable_name = _get_node_variable_name(metadata, node_id, inputs or {})
if variable_name:
prompt_metadata = _ensure_prompt_metadata(metadata, node_id)
prompt_metadata["variable_name"] = variable_name
@staticmethod
def update(node_id, outputs, metadata):
output_tuple = _first_output_tuple(outputs)
if not output_tuple or len(output_tuple) < 1:
return
prompt_metadata = _ensure_prompt_metadata(metadata, node_id)
output_conditioning = output_tuple[0]
prompt_metadata["conditioning"] = output_conditioning
variable_name = prompt_metadata.get("variable_name")
if not variable_name:
return
input_conditioning = metadata[PROMPTS].get("__conditioning_variables__", {}).get(
variable_name
)
_record_conditioning_source(
metadata, node_id, output_conditioning, [input_conditioning]
)
# Base Sampler Extractor to reduce code redundancy
class BaseSamplerExtractor(NodeMetadataExtractor):
"""Base extractor for sampler nodes with common functionality"""
@@ -427,6 +786,75 @@ class ImageSizeExtractor(NodeMetadataExtractor):
"node_id": node_id
}
class RgthreePowerLoraLoaderExtractor(NodeMetadataExtractor):
"""Extract LoRA metadata from rgthree Power Lora Loader.
The node passes LoRAs as dynamic kwargs: LORA_1, LORA_2, ... each containing
{'on': bool, 'lora': filename, 'strength': float, 'strengthTwo': float}.
"""
@staticmethod
def extract(node_id, inputs, outputs, metadata):
if not inputs:
return
active_loras = []
for key, value in inputs.items():
if not key.upper().startswith('LORA_'):
continue
if not isinstance(value, dict):
continue
if not value.get('on') or not value.get('lora'):
continue
lora_name = os.path.splitext(os.path.basename(value['lora']))[0]
active_loras.append({
"name": lora_name,
"strength": round(float(value.get('strength', 1.0)), 2)
})
if active_loras:
metadata[LORAS][node_id] = {
"lora_list": active_loras,
"node_id": node_id
}
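A hypothetical kwargs payload matching the dynamic-key convention described in the docstring (values are illustrative):

inputs = {
    "LORA_1": {"on": True, "lora": "styles/ink.safetensors", "strength": 0.8, "strengthTwo": 0.6},
    "LORA_2": {"on": False, "lora": "unused.safetensors", "strength": 1.0},
}
# Only LORA_1 is active, so the extractor records a single entry:
# {"name": "ink", "strength": 0.8}.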
class TensorRTLoaderExtractor(NodeMetadataExtractor):
"""Extract checkpoint metadata from TensorRT Loader.
extract() parses the engine filename from 'unet_name' as a best-effort
fallback (strips profile suffix after '_$' and counter suffix).
update() checks if the output MODEL has attachments["source_model"]
set by the node (NubeBuster fork) and overrides with the real name.
Vanilla TRT doesn't set this — the filename parse stands.
"""
@staticmethod
def extract(node_id, inputs, outputs, metadata):
if not inputs or "unet_name" not in inputs:
return
unet_name = inputs.get("unet_name")
# Strip path and extension, then drop the '_$' profile suffix
model_name = os.path.splitext(os.path.basename(unet_name))[0]
if "_$" in model_name:
model_name = model_name[:model_name.index("_$")]
# Strip counter suffix (e.g. _00001_) left by ComfyUI's save path
model_name = re.sub(r'_\d+_?$', '', model_name)
_store_checkpoint_metadata(metadata, node_id, model_name)
@staticmethod
def update(node_id, outputs, metadata):
if not outputs or not isinstance(outputs, list) or len(outputs) == 0:
return
first_output = outputs[0]
if not isinstance(first_output, tuple) or len(first_output) < 1:
return
model = first_output[0]
# NubeBuster fork sets attachments["source_model"] on the ModelPatcher
source_model = getattr(model, 'attachments', {}).get("source_model")
if source_model:
_store_checkpoint_metadata(metadata, node_id, source_model)
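A worked example of the two stripping steps, run on hypothetical engine filenames (real suffixes depend on how the engine was built):

import os
import re

for name in ("flux1-dev_$dyn-b-1_00001_.engine", "sdxl_base_00001_.engine"):
    stem = os.path.splitext(os.path.basename(name))[0]
    if "_$" in stem:
        stem = stem[: stem.index("_$")]  # drop the profile suffix
    print(re.sub(r"_\d+_?$", "", stem))  # drop the counter suffix -> flux1-dev, sdxl_base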
class LoraLoaderManagerExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
@@ -577,8 +1005,6 @@ class SamplerCustomAdvancedExtractor(BaseSamplerExtractor):
# Extract latent dimensions
BaseSamplerExtractor.extract_latent_dimensions(node_id, inputs, metadata)
import json
class CLIPTextEncodeFluxExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
@@ -699,9 +1125,12 @@ NODE_EXTRACTORS = {
"KSamplerSelect": KSamplerSelectExtractor, # Add KSamplerSelect
"BasicScheduler": BasicSchedulerExtractor, # Add BasicScheduler
"AlignYourStepsScheduler": BasicSchedulerExtractor, # Add AlignYourStepsScheduler
# ComfyUI-Easy-Use pre-sampling / seed
"samplerSettings": EasyPreSamplingExtractor, # easy preSampling
"easySeed": EasySeedExtractor, # easy seed
# Loaders
"CheckpointLoaderSimple": CheckpointLoaderExtractor,
"comfyLoader": CheckpointLoaderExtractor, # easy comfyLoader
"comfyLoader": EasyComfyLoaderExtractor, # ComfyUI-Easy-Use easy comfyLoader
"CheckpointLoaderSimpleWithImages": CheckpointLoaderExtractor, # CheckpointLoader|pysssss
"TSC_EfficientLoader": TSCCheckpointLoaderExtractor, # Efficient Nodes
"NunchakuFluxDiTLoader": NunchakuFluxDiTLoaderExtractor, # ComfyUI-Nunchaku
@@ -711,12 +1140,17 @@ NODE_EXTRACTORS = {
"GGUFLoaderKJ": KJNodesModelLoaderExtractor, # KJNodes
"DiffusionModelLoaderKJ": KJNodesModelLoaderExtractor, # KJNodes
"CheckpointLoaderKJ": CheckpointLoaderExtractor, # KJNodes
"CheckpointLoaderLM": CheckpointLoaderExtractor, # LoRA Manager
"UNETLoader": UNETLoaderExtractor, # Updated to use dedicated extractor
"UnetLoaderGGUF": UNETLoaderExtractor, # Updated to use dedicated extractor
"UNETLoaderLM": UNETLoaderExtractor, # LoRA Manager
"LoraLoader": LoraLoaderExtractor,
"LoraLoaderLM": LoraLoaderManagerExtractor,
"RgthreePowerLoraLoader": RgthreePowerLoraLoaderExtractor,
"TensorRTLoader": TensorRTLoaderExtractor,
# Conditioning
"CLIPTextEncode": CLIPTextEncodeExtractor,
"CLIPTextEncodeAttentionBias": CLIPTextEncodeExtractor, # From https://github.com/silveroxides/ComfyUI_PromptAttention
"PromptLM": CLIPTextEncodeExtractor,
"CLIPTextEncodeFlux": CLIPTextEncodeFluxExtractor, # Add CLIPTextEncodeFlux
"WAS_Text_to_Conditioning": CLIPTextEncodeExtractor,
@@ -724,6 +1158,12 @@ NODE_EXTRACTORS = {
"smZ_CLIPTextEncode": CLIPTextEncodeExtractor, # From https://github.com/shiimizu/ComfyUI_smZNodes
"CR_ApplyControlNetStack": CR_ApplyControlNetStackExtractor, # Add CR_ApplyControlNetStack
"PCTextEncode": CLIPTextEncodeExtractor, # From https://github.com/asagi4/comfyui-prompt-control
"TextProvider": MyOriginalWaifuTextExtractor, # ComfyUI-MyOriginalWaifu
"ClipProvider": MyOriginalWaifuClipExtractor, # ComfyUI-MyOriginalWaifu
"ControlNetApplyAdvanced": ControlNetApplyAdvancedExtractor,
"ConditioningCombine": ConditioningCombineExtractor,
"SetNode": SetNodeExtractor,
"GetNode": GetNodeExtractor,
# Latent
"EmptyLatentImage": ImageSizeExtractor,
# Flux

View File

@@ -4,15 +4,21 @@ from typing import Awaitable, Callable, Dict, List
from aiohttp import web
# Use wildcard for CivitAI to support their CDN subdomains (e.g., image-b2.civitai.com)
# Security note: This is acceptable because:
# 1. CSP img-src only controls image/video loading, not script execution
# 2. All *.civitai.com subdomains are controlled by Civitai
# 3. Explicit domain list would require constant updates as Civitai adds CDN nodes
REMOTE_MEDIA_SOURCES = (
"https://image.civitai.com",
"https://*.civitai.com",
"https://img.genur.art",
)
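For illustration only (not the exact header ComfyUI serves), the img-src directive after the middleware merges these sources into an existing policy would look roughly like:

#   img-src 'self' data: https://image.civitai.com https://*.civitai.com https://img.genur.art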
@web.middleware
async def relax_csp_for_remote_media(
request: web.Request, handler: Callable[[web.Request], Awaitable[web.StreamResponse]]
request: web.Request,
handler: Callable[[web.Request], Awaitable[web.StreamResponse]],
) -> web.StreamResponse:
"""Allow LoRA Manager media previews to load from trusted remote domains.
@@ -43,7 +49,9 @@ async def relax_csp_for_remote_media(
directive_order.append(name)
directives[name] = values
def merge_sources(name: str, sources: List[str], defaults: List[str] | None = None) -> None:
def merge_sources(
name: str, sources: List[str], defaults: List[str] | None = None
) -> None:
existing = directives.get(name, list(defaults or []))
for source in sources:

View File

@@ -0,0 +1,118 @@
import logging
from typing import List, Tuple
import comfy.sd # type: ignore
import folder_paths # type: ignore
from ..utils.utils import get_checkpoint_info_absolute, _format_model_name_for_comfyui
logger = logging.getLogger(__name__)
class CheckpointLoaderLM:
"""Checkpoint Loader with support for extra folder paths
Loads checkpoints from both standard ComfyUI folders and LoRA Manager's
extra folder paths, providing a unified interface for checkpoint loading.
"""
NAME = "Checkpoint Loader (LoraManager)"
CATEGORY = "Lora Manager/loaders"
@classmethod
def INPUT_TYPES(s):
# Get list of checkpoint names from scanner (includes extra folder paths)
checkpoint_names = s._get_checkpoint_names()
return {
"required": {
"ckpt_name": (
checkpoint_names,
{"tooltip": "The name of the checkpoint (model) to load."},
),
}
}
RETURN_TYPES = ("MODEL", "CLIP", "VAE")
RETURN_NAMES = ("MODEL", "CLIP", "VAE")
OUTPUT_TOOLTIPS = (
"The model used for denoising latents.",
"The CLIP model used for encoding text prompts.",
"The VAE model used for encoding and decoding images to and from latent space.",
)
FUNCTION = "load_checkpoint"
@classmethod
def _get_checkpoint_names(cls) -> List[str]:
"""Get list of checkpoint names from scanner cache in ComfyUI format (relative path with extension)"""
try:
from ..services.service_registry import ServiceRegistry
import asyncio
async def _get_names():
scanner = await ServiceRegistry.get_checkpoint_scanner()
cache = await scanner.get_cached_data()
# Get all model roots for calculating relative paths
model_roots = scanner.get_model_roots()
# Filter only checkpoint type (not diffusion_model) and format names
names = []
for item in cache.raw_data:
if item.get("sub_type") == "checkpoint":
file_path = item.get("file_path", "")
if file_path:
# Format using relative path with OS-native separator
formatted_name = _format_model_name_for_comfyui(
file_path, model_roots
)
if formatted_name:
names.append(formatted_name)
return sorted(names)
try:
loop = asyncio.get_running_loop()
import concurrent.futures
def run_in_thread():
new_loop = asyncio.new_event_loop()
asyncio.set_event_loop(new_loop)
try:
return new_loop.run_until_complete(_get_names())
finally:
new_loop.close()
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(run_in_thread)
return future.result()
except RuntimeError:
return asyncio.run(_get_names())
except Exception as e:
logger.error(f"Error getting checkpoint names: {e}")
return []
def load_checkpoint(self, ckpt_name: str) -> Tuple:
"""Load a checkpoint by name, supporting extra folder paths
Args:
ckpt_name: The name of the checkpoint to load (relative path with extension)
Returns:
Tuple of (MODEL, CLIP, VAE)
"""
# Get absolute path from cache using ComfyUI-style name
ckpt_path, metadata = get_checkpoint_info_absolute(ckpt_name)
if metadata is None:
raise FileNotFoundError(
f"Checkpoint '{ckpt_name}' not found in LoRA Manager cache. "
"Make sure the checkpoint is indexed and try again."
)
# Load regular checkpoint using ComfyUI's API
logger.info(f"Loading checkpoint from: {ckpt_path}")
out = comfy.sd.load_checkpoint_guess_config(
ckpt_path,
output_vae=True,
output_clip=True,
embedding_directory=folder_paths.get_folder_paths("embeddings"),
)
return out[:3]
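The _get_checkpoint_names helper above relies on a common "call async code from sync code" pattern; a generic sketch of that pattern follows (run_sync is a hypothetical name, not part of the codebase):

import asyncio
import concurrent.futures

def run_sync(coro_factory):
    """Run a coroutine from synchronous code, whether or not a loop is already running."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return asyncio.run(coro_factory())  # no loop in this thread
    # A loop is already running here, so execute the coroutine on a private
    # loop inside a worker thread and block on the result.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        return pool.submit(lambda: asyncio.run(coro_factory())).result()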

View File

@@ -0,0 +1,161 @@
"""
Helper module to safely import ComfyUI-GGUF modules.
This module provides a robust way to import ComfyUI-GGUF functionality
regardless of how ComfyUI loaded it.
"""
import sys
import os
import importlib.util
import logging
from typing import Optional, Tuple, Any
logger = logging.getLogger(__name__)
def _get_gguf_path() -> str:
"""Get the path to ComfyUI-GGUF based on this file's location.
Since ComfyUI-Lora-Manager and ComfyUI-GGUF are both in custom_nodes/,
we can derive the GGUF path from our own location.
"""
# This file is at: custom_nodes/ComfyUI-Lora-Manager/py/nodes/gguf_import_helper.py
# ComfyUI-GGUF is at: custom_nodes/ComfyUI-GGUF
current_file = os.path.abspath(__file__)
# Go up 4 levels: nodes -> py -> ComfyUI-Lora-Manager -> custom_nodes
custom_nodes_dir = os.path.dirname(
os.path.dirname(os.path.dirname(os.path.dirname(current_file)))
)
return os.path.join(custom_nodes_dir, "ComfyUI-GGUF")
def _find_gguf_module() -> Optional[Any]:
"""Find ComfyUI-GGUF module in sys.modules.
ComfyUI registers modules using the full path with dots replaced by _x_.
"""
gguf_path = _get_gguf_path()
sys_module_name = gguf_path.replace(".", "_x_")
logger.debug(f"[GGUF Import] Looking for module '{sys_module_name}' in sys.modules")
if sys_module_name in sys.modules:
logger.info(f"[GGUF Import] Found module: '{sys_module_name}'")
return sys.modules[sys_module_name]
logger.debug(f"[GGUF Import] Module not found: '{sys_module_name}'")
return None
def _load_gguf_modules_directly() -> Optional[Any]:
"""Load ComfyUI-GGUF modules directly from file paths."""
gguf_path = _get_gguf_path()
logger.info(f"[GGUF Import] Direct Load: Attempting to load from '{gguf_path}'")
if not os.path.exists(gguf_path):
logger.warning(f"[GGUF Import] Path does not exist: {gguf_path}")
return None
try:
namespace = "ComfyUI_GGUF_Dynamic"
init_path = os.path.join(gguf_path, "__init__.py")
if not os.path.exists(init_path):
logger.warning(f"[GGUF Import] __init__.py not found at '{init_path}'")
return None
logger.debug(f"[GGUF Import] Loading from '{init_path}'")
spec = importlib.util.spec_from_file_location(namespace, init_path)
if not spec or not spec.loader:
logger.error(f"[GGUF Import] Failed to create spec for '{init_path}'")
return None
package = importlib.util.module_from_spec(spec)
package.__path__ = [gguf_path]
sys.modules[namespace] = package
spec.loader.exec_module(package)
logger.debug(f"[GGUF Import] Loaded main package '{namespace}'")
# Load submodules
loaded = []
for submod_name in ["loader", "ops", "nodes"]:
submod_path = os.path.join(gguf_path, f"{submod_name}.py")
if os.path.exists(submod_path):
submod_spec = importlib.util.spec_from_file_location(
f"{namespace}.{submod_name}", submod_path
)
if submod_spec and submod_spec.loader:
submod = importlib.util.module_from_spec(submod_spec)
submod.__package__ = namespace
sys.modules[f"{namespace}.{submod_name}"] = submod
submod_spec.loader.exec_module(submod)
setattr(package, submod_name, submod)
loaded.append(submod_name)
logger.debug(f"[GGUF Import] Loaded submodule '{submod_name}'")
logger.info(f"[GGUF Import] Direct Load success: {loaded}")
return package
except Exception as e:
logger.error(f"[GGUF Import] Direct Load failed: {e}", exc_info=True)
return None
def get_gguf_modules() -> Tuple[Any, Any, Any]:
"""Get ComfyUI-GGUF modules (loader, ops, nodes).
Returns:
Tuple of (loader_module, ops_module, nodes_module)
Raises:
RuntimeError: If ComfyUI-GGUF cannot be found or loaded.
"""
logger.debug("[GGUF Import] Starting module search...")
# Try to find already loaded module first
gguf_module = _find_gguf_module()
if gguf_module is None:
logger.info("[GGUF Import] Not found in sys.modules, trying direct load...")
gguf_module = _load_gguf_modules_directly()
if gguf_module is None:
raise RuntimeError(
"ComfyUI-GGUF is not installed. "
"Please install from https://github.com/city96/ComfyUI-GGUF"
)
# Extract submodules
loader = getattr(gguf_module, "loader", None)
ops = getattr(gguf_module, "ops", None)
nodes = getattr(gguf_module, "nodes", None)
if loader is None or ops is None or nodes is None:
missing = [
name
for name, mod in [("loader", loader), ("ops", ops), ("nodes", nodes)]
if mod is None
]
raise RuntimeError(f"ComfyUI-GGUF missing submodules: {missing}")
logger.debug("[GGUF Import] All modules loaded successfully")
return loader, ops, nodes
def get_gguf_sd_loader():
"""Get the gguf_sd_loader function from ComfyUI-GGUF."""
loader, _, _ = get_gguf_modules()
return getattr(loader, "gguf_sd_loader")
def get_ggml_ops():
"""Get the GGMLOps class from ComfyUI-GGUF."""
_, ops, _ = get_gguf_modules()
return getattr(ops, "GGMLOps")
def get_gguf_model_patcher():
"""Get the GGUFModelPatcher class from ComfyUI-GGUF."""
_, _, nodes = get_gguf_modules()
return getattr(nodes, "GGUFModelPatcher")
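A minimal usage sketch of the helper itself, using only the functions defined above, that degrades gracefully when ComfyUI-GGUF is absent:

try:
    loader, ops, nodes = get_gguf_modules()
except RuntimeError as exc:
    logger.warning("GGUF support unavailable: %s", exc)
    loader = ops = nodes = None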

View File

@@ -8,6 +8,7 @@ and tracks the cycle progress which persists across workflow save/load.
import logging
import os
from ..utils.utils import get_lora_info
logger = logging.getLogger(__name__)
@@ -54,8 +55,14 @@ class LoraCyclerLM:
current_index = cycler_config.get("current_index", 1) # 1-based
model_strength = float(cycler_config.get("model_strength", 1.0))
clip_strength = float(cycler_config.get("clip_strength", 1.0))
use_same_clip_strength = cycler_config.get("use_same_clip_strength", True)
use_preset_strength = cycler_config.get("use_preset_strength", False)
preset_strength_scale = float(cycler_config.get("preset_strength_scale", 1.0))
sort_by = "filename"
# Include "no lora" option
include_no_lora = cycler_config.get("include_no_lora", False)
# Dual-index mechanism for batch queue synchronization
execution_index = cycler_config.get("execution_index") # Can be None
# next_index_from_config = cycler_config.get("next_index") # Not used on backend
@@ -71,7 +78,10 @@ class LoraCyclerLM:
total_count = len(lora_list)
if total_count == 0:
# Calculate effective total count (includes no lora option if enabled)
effective_total_count = total_count + 1 if include_no_lora else total_count
if total_count == 0 and not include_no_lora:
logger.warning("[LoraCyclerLM] No LoRAs available in pool")
return {
"result": ([],),
@@ -93,42 +103,99 @@ class LoraCyclerLM:
else:
actual_index = current_index
# Clamp index to valid range (1-based)
clamped_index = max(1, min(actual_index, total_count))
# Clamp index to valid range (1-based, includes no lora if enabled)
clamped_index = max(1, min(actual_index, effective_total_count))
# Get LoRA at current index (convert to 0-based for list access)
current_lora = lora_list[clamped_index - 1]
# Check if current index is the "no lora" option (last position when include_no_lora is True)
is_no_lora = include_no_lora and clamped_index == effective_total_count
# Build LORA_STACK with single LoRA
lora_path, _ = get_lora_info(current_lora["file_name"])
if not lora_path:
logger.warning(
f"[LoraCyclerLM] Could not find path for LoRA: {current_lora['file_name']}"
)
if is_no_lora:
# "No LoRA" option - return empty stack
lora_stack = []
current_lora_name = "No LoRA"
current_lora_filename = "No LoRA"
else:
# Normalize path separators
lora_path = lora_path.replace("/", os.sep)
lora_stack = [(lora_path, model_strength, clip_strength)]
# Get LoRA at current index (convert to 0-based for list access)
current_lora = lora_list[clamped_index - 1]
current_lora_name = current_lora["file_name"]
current_lora_filename = current_lora["file_name"]
# Build LORA_STACK with single LoRA
if current_lora["file_name"] == "None":
lora_path = None
else:
lora_path, _ = get_lora_info(current_lora["file_name"])
if not lora_path:
if current_lora["file_name"] != "None":
logger.warning(
f"[LoraCyclerLM] Could not find path for LoRA: {current_lora['file_name']}"
)
lora_stack = []
else:
# Normalize path separators
lora_path = lora_path.replace("/", os.sep)
if use_preset_strength:
lora_metadata = await lora_service.get_lora_metadata_by_filename(
current_lora["file_name"]
)
if lora_metadata:
recommended_strength = (
lora_service.get_recommended_strength_from_lora_data(
lora_metadata
)
)
if recommended_strength is not None:
model_strength = round(
recommended_strength * preset_strength_scale, 2
)
if use_same_clip_strength:
clip_strength = model_strength
else:
recommended_clip_strength = (
lora_service.get_recommended_clip_strength_from_lora_data(
lora_metadata
)
)
if recommended_clip_strength is not None:
clip_strength = round(
recommended_clip_strength * preset_strength_scale, 2
)
elif use_same_clip_strength:
clip_strength = model_strength
elif use_same_clip_strength:
clip_strength = model_strength
lora_stack = [(lora_path, model_strength, clip_strength)]
# Calculate next index (wrap to 1 if at end)
next_index = clamped_index + 1
if next_index > total_count:
if next_index > effective_total_count:
next_index = 1
# Get next LoRA for UI display (what will be used next generation)
next_lora = lora_list[next_index - 1]
next_display_name = next_lora["file_name"]
is_next_no_lora = include_no_lora and next_index == effective_total_count
if is_next_no_lora:
next_display_name = "No LoRA"
next_lora_filename = "No LoRA"
else:
next_lora = lora_list[next_index - 1]
next_display_name = next_lora["file_name"]
next_lora_filename = next_lora["file_name"]
return {
"result": (lora_stack,),
"ui": {
"current_index": [clamped_index],
"next_index": [next_index],
"total_count": [total_count],
"current_lora_name": [current_lora["file_name"]],
"current_lora_filename": [current_lora["file_name"]],
"total_count": [
total_count
], # Return actual LoRA count, not effective_total_count
"current_lora_name": [current_lora_name],
"current_lora_filename": [current_lora_filename],
"next_lora_name": [next_display_name],
"next_lora_filename": [next_lora["file_name"]],
"next_lora_filename": [next_lora_filename],
},
}
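Index arithmetic sketch for the cycle above, with illustrative numbers: three LoRAs plus the "No LoRA" slot give an effective count of four, and the index wraps back to 1 after the last slot.

total_count, include_no_lora = 3, True
effective_total_count = total_count + 1 if include_no_lora else total_count
for current_index in (3, 4):
    clamped = max(1, min(current_index, effective_total_count))
    next_index = clamped + 1 if clamped < effective_total_count else 1
    print(clamped, next_index)  # prints "3 4" then "4 1" (index 4 is the No LoRA slot)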

View File

@@ -1,12 +1,129 @@
import importlib
import logging
import re
import comfy.utils # type: ignore
import comfy.sd # type: ignore
import comfy.sd # type: ignore
import comfy.utils # type: ignore
from ..utils.utils import get_lora_info_absolute
from .utils import FlexibleOptionalInputType, any_type, extract_lora_name, get_loras_list, nunchaku_load_lora
from .utils import (
FlexibleOptionalInputType,
any_type,
detect_nunchaku_model_kind,
extract_lora_name,
get_loras_list,
nunchaku_load_lora,
)
logger = logging.getLogger(__name__)
def _get_nunchaku_load_qwen_loras():
try:
module = importlib.import_module(".nunchaku_qwen", __package__)
except ImportError as exc:
raise RuntimeError(
"Qwen-Image LoRA loading requires the ComfyUI runtime with its torch dependency available."
) from exc
return module.nunchaku_load_qwen_loras
def _collect_stack_entries(lora_stack):
entries = []
if not lora_stack:
return entries
for lora_path, model_strength, clip_strength in lora_stack:
lora_name = extract_lora_name(lora_path)
absolute_lora_path, trigger_words = get_lora_info_absolute(lora_name)
entries.append({
"name": lora_name,
"absolute_path": absolute_lora_path,
"input_path": lora_path,
"model_strength": float(model_strength),
"clip_strength": float(clip_strength),
"trigger_words": trigger_words,
})
return entries
def _collect_widget_entries(kwargs):
entries = []
for lora in get_loras_list(kwargs):
if not lora.get("active", False):
continue
lora_name = lora["name"]
model_strength = float(lora["strength"])
clip_strength = float(lora.get("clipStrength", model_strength))
lora_path, trigger_words = get_lora_info_absolute(lora_name)
entries.append({
"name": lora_name,
"absolute_path": lora_path,
"input_path": lora_path,
"model_strength": model_strength,
"clip_strength": clip_strength,
"trigger_words": trigger_words,
})
return entries
def _format_loaded_loras(loaded_loras):
formatted_loras = []
for item in loaded_loras:
if item["include_clip_strength"]:
formatted_loras.append(
f"<lora:{item['name']}:{item['model_strength']}:{item['clip_strength']}>"
)
else:
formatted_loras.append(f"<lora:{item['name']}:{item['model_strength']}>")
return " ".join(formatted_loras)
def _apply_entries(model, clip, lora_entries, nunchaku_model_kind):
loaded_loras = []
all_trigger_words = []
if nunchaku_model_kind == "qwen_image":
nunchaku_load_qwen_loras = _get_nunchaku_load_qwen_loras()
qwen_lora_configs = []
for entry in lora_entries:
qwen_lora_configs.append((entry["absolute_path"], entry["model_strength"]))
loaded_loras.append({
"name": entry["name"],
"model_strength": entry["model_strength"],
"clip_strength": entry["model_strength"],
"include_clip_strength": False,
})
all_trigger_words.extend(entry["trigger_words"])
if qwen_lora_configs:
model = nunchaku_load_qwen_loras(model, qwen_lora_configs)
return model, clip, loaded_loras, all_trigger_words
for entry in lora_entries:
if nunchaku_model_kind == "flux":
model = nunchaku_load_lora(model, entry["input_path"], entry["model_strength"])
else:
lora = comfy.utils.load_torch_file(entry["absolute_path"], safe_load=True)
model, clip = comfy.sd.load_lora_for_models(
model,
clip,
lora,
entry["model_strength"],
entry["clip_strength"],
)
include_clip_strength = nunchaku_model_kind is None and abs(entry["model_strength"] - entry["clip_strength"]) > 0.001
loaded_loras.append({
"name": entry["name"],
"model_strength": entry["model_strength"],
"clip_strength": entry["clip_strength"],
"include_clip_strength": include_clip_strength,
})
all_trigger_words.extend(entry["trigger_words"])
return model, clip, loaded_loras, all_trigger_words
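Example of the syntax _format_loaded_loras produces, fed with hypothetical entries:

print(_format_loaded_loras([
    {"name": "ink-style", "model_strength": 0.8, "clip_strength": 0.8, "include_clip_strength": False},
    {"name": "detailer", "model_strength": 1.0, "clip_strength": 0.6, "include_clip_strength": True},
]))
# -> <lora:ink-style:0.8> <lora:detailer:1.0:0.6>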
class LoraLoaderLM:
NAME = "Lora Loader (LoraManager)"
CATEGORY = "Lora Manager/loaders"
@@ -16,7 +133,6 @@ class LoraLoaderLM:
return {
"required": {
"model": ("MODEL",),
# "clip": ("CLIP",),
"text": ("AUTOCOMPLETE_TEXT_LORAS", {
"placeholder": "Search LoRAs to add...",
"tooltip": "Format: <lora:lora_name:strength> separated by spaces or punctuation",
@@ -31,107 +147,23 @@ class LoraLoaderLM:
def load_loras(self, model, text, **kwargs):
"""Loads multiple LoRAs based on the kwargs input and lora_stack."""
loaded_loras = []
all_trigger_words = []
del text
clip = kwargs.get("clip", None)
lora_entries = _collect_stack_entries(kwargs.get("lora_stack", None))
lora_entries.extend(_collect_widget_entries(kwargs))
clip = kwargs.get('clip', None)
lora_stack = kwargs.get('lora_stack', None)
nunchaku_model_kind = detect_nunchaku_model_kind(model)
if nunchaku_model_kind == "flux":
logger.info("Detected Nunchaku Flux model")
elif nunchaku_model_kind == "qwen_image":
logger.info("Detected Nunchaku Qwen-Image model")
# Check if model is a Nunchaku Flux model - simplified approach
is_nunchaku_model = False
try:
model_wrapper = model.model.diffusion_model
# Check if model is a Nunchaku Flux model using only class name
if model_wrapper.__class__.__name__ == "ComfyFluxWrapper":
is_nunchaku_model = True
logger.info("Detected Nunchaku Flux model")
except (AttributeError, TypeError):
# Not a model with the expected structure
pass
# First process lora_stack if available
if lora_stack:
for lora_path, model_strength, clip_strength in lora_stack:
# Extract lora name and convert to absolute path
# lora_stack stores relative paths, but load_torch_file needs absolute paths
lora_name = extract_lora_name(lora_path)
absolute_lora_path, trigger_words = get_lora_info_absolute(lora_name)
# Apply the LoRA using the appropriate loader
if is_nunchaku_model:
# Use our custom function for Flux models
model = nunchaku_load_lora(model, lora_path, model_strength)
# clip remains unchanged for Nunchaku models
else:
# Use lower-level API to load LoRA directly without folder_paths validation
lora = comfy.utils.load_torch_file(absolute_lora_path, safe_load=True)
model, clip = comfy.sd.load_lora_for_models(model, clip, lora, model_strength, clip_strength)
all_trigger_words.extend(trigger_words)
# Add clip strength to output if different from model strength (except for Nunchaku models)
if not is_nunchaku_model and abs(model_strength - clip_strength) > 0.001:
loaded_loras.append(f"{lora_name}: {model_strength},{clip_strength}")
else:
loaded_loras.append(f"{lora_name}: {model_strength}")
# Then process loras from kwargs with support for both old and new formats
loras_list = get_loras_list(kwargs)
for lora in loras_list:
if not lora.get('active', False):
continue
lora_name = lora['name']
model_strength = float(lora['strength'])
# Get clip strength - use model strength as default if not specified
clip_strength = float(lora.get('clipStrength', model_strength))
# Get lora path and trigger words
lora_path, trigger_words = get_lora_info_absolute(lora_name)
# Apply the LoRA using the appropriate loader
if is_nunchaku_model:
# For Nunchaku models, use our custom function
model = nunchaku_load_lora(model, lora_path, model_strength)
# clip remains unchanged
else:
# Use lower-level API to load LoRA directly without folder_paths validation
lora = comfy.utils.load_torch_file(lora_path, safe_load=True)
model, clip = comfy.sd.load_lora_for_models(model, clip, lora, model_strength, clip_strength)
# Include clip strength in output if different from model strength and not a Nunchaku model
if not is_nunchaku_model and abs(model_strength - clip_strength) > 0.001:
loaded_loras.append(f"{lora_name}: {model_strength},{clip_strength}")
else:
loaded_loras.append(f"{lora_name}: {model_strength}")
# Add trigger words to collection
all_trigger_words.extend(trigger_words)
# use ',, ' to separate trigger words for group mode
model, clip, loaded_loras, all_trigger_words = _apply_entries(model, clip, lora_entries, nunchaku_model_kind)
trigger_words_text = ",, ".join(all_trigger_words) if all_trigger_words else ""
# Format loaded_loras with support for both formats
formatted_loras = []
for item in loaded_loras:
parts = item.split(":")
lora_name = parts[0]
strength_parts = parts[1].strip().split(",")
if len(strength_parts) > 1:
# Different model and clip strengths
model_str = strength_parts[0].strip()
clip_str = strength_parts[1].strip()
formatted_loras.append(f"<lora:{lora_name}:{model_str}:{clip_str}>")
else:
# Same strength for both
model_str = strength_parts[0].strip()
formatted_loras.append(f"<lora:{lora_name}:{model_str}>")
formatted_loras_text = " ".join(formatted_loras)
formatted_loras_text = _format_loaded_loras(loaded_loras)
return (model, clip, trigger_words_text, formatted_loras_text)
class LoraTextLoaderLM:
NAME = "LoRA Text Loader (LoraManager)"
CATEGORY = "Lora Manager/loaders"
@@ -143,13 +175,13 @@ class LoraTextLoaderLM:
"model": ("MODEL",),
"lora_syntax": ("STRING", {
"forceInput": True,
"tooltip": "Format: <lora:lora_name:strength> separated by spaces or punctuation"
"tooltip": "Format: <lora:lora_name:strength> separated by spaces or punctuation",
}),
},
"optional": {
"clip": ("CLIP",),
"lora_stack": ("LORA_STACK",),
}
},
}
RETURN_TYPES = ("MODEL", "CLIP", "STRING", "STRING")
@@ -158,116 +190,40 @@ class LoraTextLoaderLM:
def parse_lora_syntax(self, text):
"""Parse LoRA syntax from text input."""
# Pattern to match <lora:name:strength> or <lora:name:model_strength:clip_strength>
pattern = r'<lora:([^:>]+):([^:>]+)(?::([^:>]+))?>'
pattern = r"<lora:([^:>]+):([^:>]+)(?::([^:>]+))?>"
matches = re.findall(pattern, text, re.IGNORECASE)
loras = []
for match in matches:
lora_name = match[0]
model_strength = float(match[1])
clip_strength = float(match[2]) if match[2] else model_strength
loras.append({
'name': lora_name,
'model_strength': model_strength,
'clip_strength': clip_strength
"name": match[0],
"model_strength": model_strength,
"clip_strength": float(match[2]) if match[2] else model_strength,
})
return loras
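# Parsing sketch for the pattern above, with a hypothetical prompt fragment:
#   "<lora:lineart:0.7> <lora:color_boost:1.0:0.4>"
# yields
#   [{"name": "lineart", "model_strength": 0.7, "clip_strength": 0.7},
#    {"name": "color_boost", "model_strength": 1.0, "clip_strength": 0.4}]
# i.e. when the optional third field is absent, the clip strength falls back to
# the model strength.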
def load_loras_from_text(self, model, lora_syntax, clip=None, lora_stack=None):
"""Load LoRAs based on text syntax input."""
loaded_loras = []
all_trigger_words = []
lora_entries = _collect_stack_entries(lora_stack)
for lora in self.parse_lora_syntax(lora_syntax):
lora_path, trigger_words = get_lora_info_absolute(lora["name"])
lora_entries.append({
"name": lora["name"],
"absolute_path": lora_path,
"input_path": lora_path,
"model_strength": lora["model_strength"],
"clip_strength": lora["clip_strength"],
"trigger_words": trigger_words,
})
# Check if model is a Nunchaku Flux model - simplified approach
is_nunchaku_model = False
nunchaku_model_kind = detect_nunchaku_model_kind(model)
if nunchaku_model_kind == "flux":
logger.info("Detected Nunchaku Flux model")
elif nunchaku_model_kind == "qwen_image":
logger.info("Detected Nunchaku Qwen-Image model")
try:
model_wrapper = model.model.diffusion_model
# Check if model is a Nunchaku Flux model using only class name
if model_wrapper.__class__.__name__ == "ComfyFluxWrapper":
is_nunchaku_model = True
logger.info("Detected Nunchaku Flux model")
except (AttributeError, TypeError):
# Not a model with the expected structure
pass
# First process lora_stack if available
if lora_stack:
for lora_path, model_strength, clip_strength in lora_stack:
# Extract lora name and convert to absolute path
# lora_stack stores relative paths, but load_torch_file needs absolute paths
lora_name = extract_lora_name(lora_path)
absolute_lora_path, trigger_words = get_lora_info_absolute(lora_name)
# Apply the LoRA using the appropriate loader
if is_nunchaku_model:
# Use our custom function for Flux models
model = nunchaku_load_lora(model, lora_path, model_strength)
# clip remains unchanged for Nunchaku models
else:
# Use lower-level API to load LoRA directly without folder_paths validation
lora = comfy.utils.load_torch_file(absolute_lora_path, safe_load=True)
model, clip = comfy.sd.load_lora_for_models(model, clip, lora, model_strength, clip_strength)
all_trigger_words.extend(trigger_words)
# Add clip strength to output if different from model strength (except for Nunchaku models)
if not is_nunchaku_model and abs(model_strength - clip_strength) > 0.001:
loaded_loras.append(f"{lora_name}: {model_strength},{clip_strength}")
else:
loaded_loras.append(f"{lora_name}: {model_strength}")
# Parse and process LoRAs from text syntax
parsed_loras = self.parse_lora_syntax(lora_syntax)
for lora in parsed_loras:
lora_name = lora['name']
model_strength = lora['model_strength']
clip_strength = lora['clip_strength']
# Get lora path and trigger words
lora_path, trigger_words = get_lora_info_absolute(lora_name)
# Apply the LoRA using the appropriate loader
if is_nunchaku_model:
# For Nunchaku models, use our custom function
model = nunchaku_load_lora(model, lora_path, model_strength)
# clip remains unchanged
else:
# Use lower-level API to load LoRA directly without folder_paths validation
lora = comfy.utils.load_torch_file(lora_path, safe_load=True)
model, clip = comfy.sd.load_lora_for_models(model, clip, lora, model_strength, clip_strength)
# Include clip strength in output if different from model strength and not a Nunchaku model
if not is_nunchaku_model and abs(model_strength - clip_strength) > 0.001:
loaded_loras.append(f"{lora_name}: {model_strength},{clip_strength}")
else:
loaded_loras.append(f"{lora_name}: {model_strength}")
# Add trigger words to collection
all_trigger_words.extend(trigger_words)
# use ',, ' to separate trigger words for group mode
model, clip, loaded_loras, all_trigger_words = _apply_entries(model, clip, lora_entries, nunchaku_model_kind)
trigger_words_text = ",, ".join(all_trigger_words) if all_trigger_words else ""
# Format loaded_loras with support for both formats
formatted_loras = []
for item in loaded_loras:
parts = item.split(":")
lora_name = parts[0].strip()
strength_parts = parts[1].strip().split(",")
if len(strength_parts) > 1:
# Different model and clip strengths
model_str = strength_parts[0].strip()
clip_str = strength_parts[1].strip()
formatted_loras.append(f"<lora:{lora_name}:{model_str}:{clip_str}>")
else:
# Same strength for both
model_str = strength_parts[0].strip()
formatted_loras.append(f"<lora:{lora_name}:{model_str}>")
formatted_loras_text = " ".join(formatted_loras)
formatted_loras_text = _format_loaded_loras(loaded_loras)
return (model, clip, trigger_words_text, formatted_loras_text)

View File

@@ -82,6 +82,7 @@ class LoraPoolLM:
"folders": {"include": [], "exclude": []},
"favoritesOnly": False,
"license": {"noCreditRequired": False, "allowSelling": False},
"namePatterns": {"include": [], "exclude": [], "useRegex": False},
},
"preview": {"matchCount": 0, "lastUpdated": 0},
}

View File

@@ -7,10 +7,8 @@ and tracks the last used combination for reuse.
"""
import logging
import random
import os
from ..utils.utils import get_lora_info
from .utils import extract_lora_name
logger = logging.getLogger(__name__)

View File

@@ -0,0 +1,26 @@
class LoraStackCombinerLM:
NAME = "Lora Stack Combiner (LoraManager)"
CATEGORY = "Lora Manager/stackers"
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"lora_stack_a": ("LORA_STACK",),
"lora_stack_b": ("LORA_STACK",),
},
}
RETURN_TYPES = ("LORA_STACK",)
RETURN_NAMES = ("LORA_STACK",)
FUNCTION = "combine_stacks"
def combine_stacks(self, lora_stack_a, lora_stack_b):
combined_stack = []
if lora_stack_a:
combined_stack.extend(lora_stack_a)
if lora_stack_b:
combined_stack.extend(lora_stack_b)
return (combined_stack,)
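# Usage sketch with hypothetical LORA_STACK tuples of (path, model_strength,
# clip_strength); combining preserves order, stack A entries first:
#   combiner = LoraStackCombinerLM()
#   combiner.combine_stacks(
#       [("styles/ink.safetensors", 0.8, 0.8)],
#       [("detail.safetensors", 1.0, 0.5)],
#   )
#   -> ([("styles/ink.safetensors", 0.8, 0.8), ("detail.safetensors", 1.0, 0.5)],)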

py/nodes/nunchaku_qwen.py (new file, 570 lines)
View File

@@ -0,0 +1,570 @@
from __future__ import annotations
"""Qwen-Image LoRA support for Nunchaku models.
Portions of the LoRA mapping/application logic in this file are adapted from
ComfyUI-QwenImageLoraLoader by GitHub user ussoewwin:
https://github.com/ussoewwin/ComfyUI-QwenImageLoraLoader
The upstream project is licensed under Apache License 2.0.
"""
import copy
import logging
import os
import re
from collections import defaultdict
from pathlib import Path
from typing import Dict, List, Optional, Tuple, Union
import comfy.utils # type: ignore
import folder_paths # type: ignore
import torch
import torch.nn as nn
from safetensors import safe_open
from nunchaku.lora.flux.nunchaku_converter import (
pack_lowrank_weight,
unpack_lowrank_weight,
)
logger = logging.getLogger(__name__)
KEY_MAPPING = [
(re.compile(r"^(layers)[._](\d+)[._]attention[._]to[._]([qkv])$"), r"\1.\2.attention.to_qkv", "qkv", lambda m: m.group(3).upper()),
(re.compile(r"^(layers)[._](\d+)[._]feed_forward[._](w1|w3)$"), r"\1.\2.feed_forward.net.0.proj", "glu", lambda m: m.group(3)),
(re.compile(r"^(layers)[._](\d+)[._]feed_forward[._]w2$"), r"\1.\2.feed_forward.net.2", "regular", None),
(re.compile(r"^(layers)[._](\d+)[._](.*)$"), r"\1.\2.\3", "regular", None),
(re.compile(r"^(transformer_blocks)[._](\d+)[._]attn[._]to[._]([qkv])$"), r"\1.\2.attn.to_qkv", "qkv", lambda m: m.group(3).upper()),
(re.compile(r"^(transformer_blocks)[._](\d+)[._]attn[._](q|k|v)[._]proj$"), r"\1.\2.attn.to_qkv", "qkv", lambda m: m.group(3).upper()),
(re.compile(r"^(transformer_blocks)[._](\d+)[._]attn[._]add[._](q|k|v)[._]proj$"), r"\1.\2.attn.add_qkv_proj", "add_qkv", lambda m: m.group(3).upper()),
(re.compile(r"^(transformer_blocks)[._](\d+)[._]out[._]proj[._]context$"), r"\1.\2.attn.to_add_out", "regular", None),
(re.compile(r"^(transformer_blocks)[._](\d+)[._]out[._]proj$"), r"\1.\2.attn.to_out.0", "regular", None),
(re.compile(r"^(transformer_blocks)[._](\d+)[._]attn[._]to[._]out$"), r"\1.\2.attn.to_out.0", "regular", None),
(re.compile(r"^(single_transformer_blocks)[._](\d+)[._]attn[._]to[._]([qkv])$"), r"\1.\2.attn.to_qkv", "qkv", lambda m: m.group(3).upper()),
(re.compile(r"^(single_transformer_blocks)[._](\d+)[._]attn[._]to[._]out$"), r"\1.\2.attn.to_out", "regular", None),
(re.compile(r"^(transformer_blocks)[._](\d+)[._]ff[._]net[._]0(?:[._]proj)?$"), r"\1.\2.mlp_fc1", "regular", None),
(re.compile(r"^(transformer_blocks)[._](\d+)[._]ff[._]net[._]2$"), r"\1.\2.mlp_fc2", "regular", None),
(re.compile(r"^(transformer_blocks)[._](\d+)[._]ff_context[._]net[._]0(?:[._]proj)?$"), r"\1.\2.mlp_context_fc1", "regular", None),
(re.compile(r"^(transformer_blocks)[._](\d+)[._]ff_context[._]net[._]2$"), r"\1.\2.mlp_context_fc2", "regular", None),
(re.compile(r"^(transformer_blocks)[._](\d+)[._](img_mlp)[._](net)[._](0)[._](proj)$"), r"\1.\2.\3.\4.\5.\6", "regular", None),
(re.compile(r"^(transformer_blocks)[._](\d+)[._](img_mlp)[._](net)[._](2)$"), r"\1.\2.\3.\4.\5", "regular", None),
(re.compile(r"^(transformer_blocks)[._](\d+)[._](txt_mlp)[._](net)[._](0)[._](proj)$"), r"\1.\2.\3.\4.\5.\6", "regular", None),
(re.compile(r"^(transformer_blocks)[._](\d+)[._](txt_mlp)[._](net)[._](2)$"), r"\1.\2.\3.\4.\5", "regular", None),
(re.compile(r"^(transformer_blocks)[._](\d+)[._](img_mod)[._](1)$"), r"\1.\2.\3.\4", "regular", None),
(re.compile(r"^(transformer_blocks)[._](\d+)[._](txt_mod)[._](1)$"), r"\1.\2.\3.\4", "regular", None),
(re.compile(r"^(single_transformer_blocks)[._](\d+)[._]proj[._]out$"), r"\1.\2.proj_out", "single_proj_out", None),
(re.compile(r"^(single_transformer_blocks)[._](\d+)[._]proj[._]mlp$"), r"\1.\2.mlp_fc1", "regular", None),
(re.compile(r"^(single_transformer_blocks)[._](\d+)[._]norm[._]linear$"), r"\1.\2.norm.linear", "regular", None),
(re.compile(r"^(transformer_blocks)[._](\d+)[._]norm1[._]linear$"), r"\1.\2.norm1.linear", "regular", None),
(re.compile(r"^(transformer_blocks)[._](\d+)[._]norm1_context[._]linear$"), r"\1.\2.norm1_context.linear", "regular", None),
(re.compile(r"^(img_in)$"), r"\1", "regular", None),
(re.compile(r"^(txt_in)$"), r"\1", "regular", None),
(re.compile(r"^(proj_out)$"), r"\1", "regular", None),
(re.compile(r"^(norm_out)[._](linear)$"), r"\1.\2", "regular", None),
(re.compile(r"^(time_text_embed)[._](timestep_embedder)[._](linear_1)$"), r"\1.\2.\3", "regular", None),
(re.compile(r"^(time_text_embed)[._](timestep_embedder)[._](linear_2)$"), r"\1.\2.\3", "regular", None),
]
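# Mapping sketch, assuming a diffusers-style LoRA key: after any leading
# "transformer." / "diffusion_model." prefix is stripped, a key such as
#   "transformer_blocks.3.attn.to_q.lora_A.weight"
# loses its ".lora_A.weight" suffix and the remaining base name is matched
# against the table above, producing the fused target
# "transformer_blocks.3.attn.to_qkv" with component "Q" and side "A", so the
# separate Q/K/V low-rank factors can later be packed into one qkv projection.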
_RE_LORA_SUFFIX = re.compile(r"\.(?P<tag>lora(?:[._](?:A|B|down|up)))(?:\.[^.]+)*\.weight$")
_RE_ALPHA_SUFFIX = re.compile(r"\.(?:alpha|lora_alpha)(?:\.[^.]+)*$")
def _rename_layer_underscore_layer_name(old_name: str) -> str:
rules = [
(r"_(\d+)_attn_to_out_(\d+)", r".\1.attn.to_out.\2"),
(r"_(\d+)_img_mlp_net_(\d+)_proj", r".\1.img_mlp.net.\2.proj"),
(r"_(\d+)_txt_mlp_net_(\d+)_proj", r".\1.txt_mlp.net.\2.proj"),
(r"_(\d+)_img_mlp_net_(\d+)", r".\1.img_mlp.net.\2"),
(r"_(\d+)_txt_mlp_net_(\d+)", r".\1.txt_mlp.net.\2"),
(r"_(\d+)_img_mod_(\d+)", r".\1.img_mod.\2"),
(r"_(\d+)_txt_mod_(\d+)", r".\1.txt_mod.\2"),
(r"_(\d+)_attn_", r".\1.attn."),
]
new_name = old_name
for pattern, replacement in rules:
new_name = re.sub(pattern, replacement, new_name)
return new_name
def _is_indexable_module(module):
return isinstance(module, (nn.ModuleList, nn.Sequential, list, tuple))
def _get_module_by_name(model: nn.Module, name: str) -> Optional[nn.Module]:
if not name:
return model
module = model
for part in name.split("."):
if not part:
continue
if hasattr(module, part):
module = getattr(module, part)
elif part.isdigit() and _is_indexable_module(module):
try:
module = module[int(part)]
except (IndexError, TypeError):
return None
else:
return None
return module
def _resolve_module_name(model: nn.Module, name: str) -> Tuple[str, Optional[nn.Module]]:
module = _get_module_by_name(model, name)
if module is not None:
return name, module
replacements = [
(".attn.to_out.0", ".attn.to_out"),
(".attention.to_qkv", ".attention.qkv"),
(".attention.to_out.0", ".attention.out"),
(".feed_forward.net.0.proj", ".feed_forward.w13"),
(".feed_forward.net.2", ".feed_forward.w2"),
(".ff.net.0.proj", ".mlp_fc1"),
(".ff.net.2", ".mlp_fc2"),
(".ff_context.net.0.proj", ".mlp_context_fc1"),
(".ff_context.net.2", ".mlp_context_fc2"),
]
for src, dst in replacements:
if src in name:
alt = name.replace(src, dst)
module = _get_module_by_name(model, alt)
if module is not None:
return alt, module
return name, None
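# Resolution sketch: if a mapped name like
# "transformer_blocks.3.attn.to_out.0" does not exist on the Nunchaku module
# tree (hypothetical layout), the fallback table above retries the shortened
# "transformer_blocks.3.attn.to_out" and returns the first name that resolves,
# together with the module object, or (name, None) when nothing matches.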
def _classify_and_map_key(key: str) -> Optional[Tuple[str, str, Optional[str], str]]:
normalized = key
if normalized.startswith("transformer."):
normalized = normalized[len("transformer."):]
if normalized.startswith("diffusion_model."):
normalized = normalized[len("diffusion_model."):]
if normalized.startswith("lora_unet_"):
normalized = _rename_layer_underscore_layer_name(normalized[len("lora_unet_"):])
match = _RE_LORA_SUFFIX.search(normalized)
if match:
tag = match.group("tag")
base = normalized[:match.start()]
ab = "A" if ("lora_A" in tag or tag.endswith(".A") or "down" in tag) else "B"
else:
match = _RE_ALPHA_SUFFIX.search(normalized)
if not match:
return None
base = normalized[:match.start()]
ab = "alpha"
for pattern, template, group, comp_fn in KEY_MAPPING:
key_match = pattern.match(base)
if key_match:
return group, key_match.expand(template), comp_fn(key_match) if comp_fn else None, ab
return None
def _detect_lora_format(lora_state_dict: Dict[str, torch.Tensor]) -> bool:
standard_patterns = (
".lora_up.",
".lora_down.",
".lora_A.",
".lora_B.",
".lora.up.",
".lora.down.",
".lora.A.",
".lora.B.",
)
return any(pattern in key for key in lora_state_dict for pattern in standard_patterns)
def _load_lora_state_dict(path_or_dict: Union[str, Path, Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
if isinstance(path_or_dict, dict):
return path_or_dict
path = Path(path_or_dict)
if path.suffix == ".safetensors":
state_dict: Dict[str, torch.Tensor] = {}
with safe_open(path, framework="pt", device="cpu") as handle:
for key in handle.keys():
state_dict[key] = handle.get_tensor(key)
return state_dict
return comfy.utils.load_torch_file(str(path), safe_load=True)
def _fuse_glu_lora(glu_weights: Dict[str, torch.Tensor]) -> Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]]:
if "w1_A" not in glu_weights or "w3_A" not in glu_weights:
return None, None, None
a_w1, b_w1 = glu_weights["w1_A"], glu_weights["w1_B"]
a_w3, b_w3 = glu_weights["w3_A"], glu_weights["w3_B"]
if a_w1.shape[1] != a_w3.shape[1]:
return None, None, None
a_fused = torch.cat([a_w1, a_w3], dim=0)
out1, out3 = b_w1.shape[0], b_w3.shape[0]
rank1, rank3 = b_w1.shape[1], b_w3.shape[1]
b_fused = torch.zeros(out1 + out3, rank1 + rank3, dtype=b_w1.dtype, device=b_w1.device)
b_fused[:out1, :rank1] = b_w1
b_fused[out1:, rank1:] = b_w3
return a_fused, b_fused, glu_weights.get("w1_alpha")
def _fuse_qkv_lora(qkv_weights: Dict[str, torch.Tensor], model: Optional[nn.Module] = None, base_key: Optional[str] = None) -> Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]]:
required_keys = ["Q_A", "Q_B", "K_A", "K_B", "V_A", "V_B"]
if not all(key in qkv_weights for key in required_keys):
return None, None, None
a_q, a_k, a_v = qkv_weights["Q_A"], qkv_weights["K_A"], qkv_weights["V_A"]
b_q, b_k, b_v = qkv_weights["Q_B"], qkv_weights["K_B"], qkv_weights["V_B"]
if not (a_q.shape == a_k.shape == a_v.shape):
return None, None, None
if not (b_q.shape[1] == b_k.shape[1] == b_v.shape[1]):
return None, None, None
out_features = None
if model is not None and base_key is not None:
_, module = _resolve_module_name(model, base_key)
out_features = getattr(module, "out_features", None) if module is not None else None
alpha_fused = None
alpha_q = qkv_weights.get("Q_alpha")
alpha_k = qkv_weights.get("K_alpha")
alpha_v = qkv_weights.get("V_alpha")
if alpha_q is not None and alpha_k is not None and alpha_v is not None and alpha_q.item() == alpha_k.item() == alpha_v.item():
alpha_fused = alpha_q
a_fused = torch.cat([a_q, a_k, a_v], dim=0)
rank = b_q.shape[1]
out_q, out_k, out_v = b_q.shape[0], b_k.shape[0], b_v.shape[0]
total_out = out_features if out_features is not None else out_q + out_k + out_v
b_fused = torch.zeros(total_out, 3 * rank, dtype=b_q.dtype, device=b_q.device)
b_fused[:out_q, :rank] = b_q
b_fused[out_q:out_q + out_k, rank:2 * rank] = b_k
b_fused[out_q + out_k:out_q + out_k + out_v, 2 * rank:] = b_v
return a_fused, b_fused, alpha_fused
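# Shape sketch for the fusion above, using a hypothetical rank-8 LoRA on a
# 3072-wide attention block: A_q/A_k/A_v of shape (8, 3072) are stacked into a
# (24, 3072) fused A, while B_q/B_k/B_v of shape (3072, 8) are placed on the
# block diagonal of a (9216, 24) fused B (or (out_features, 24) when the fused
# qkv projection reports its width), so B @ A reproduces the three
# per-projection updates without introducing cross-terms.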
def _handle_proj_out_split(lora_dict: Dict[str, Dict[str, torch.Tensor]], base_key: str, model: nn.Module) -> Tuple[Dict[str, Tuple[torch.Tensor, torch.Tensor, Optional[torch.Tensor]]], List[str]]:
result: Dict[str, Tuple[torch.Tensor, torch.Tensor, Optional[torch.Tensor]]] = {}
consumed: List[str] = []
match = re.search(r"single_transformer_blocks\.(\d+)", base_key)
if not match or base_key not in lora_dict:
return result, consumed
block_idx = match.group(1)
block = _get_module_by_name(model, f"single_transformer_blocks.{block_idx}")
if block is None:
return result, consumed
a_full = lora_dict[base_key].get("A")
b_full = lora_dict[base_key].get("B")
alpha = lora_dict[base_key].get("alpha")
attn_to_out = getattr(getattr(block, "attn", None), "to_out", None)
mlp_fc2 = getattr(block, "mlp_fc2", None)
if a_full is None or b_full is None or attn_to_out is None or mlp_fc2 is None:
return result, consumed
attn_in = getattr(attn_to_out, "in_features", None)
mlp_in = getattr(mlp_fc2, "in_features", None)
if attn_in is None or mlp_in is None or a_full.shape[1] != attn_in + mlp_in:
return result, consumed
result[f"single_transformer_blocks.{block_idx}.attn.to_out"] = (a_full[:, :attn_in], b_full.clone(), alpha)
result[f"single_transformer_blocks.{block_idx}.mlp_fc2"] = (a_full[:, attn_in:], b_full.clone(), alpha)
consumed.append(base_key)
return result, consumed
def _apply_lora_to_module(module: nn.Module, a_tensor: torch.Tensor, b_tensor: torch.Tensor, module_name: str, model: nn.Module) -> None:
if not hasattr(module, "in_features") or not hasattr(module, "out_features"):
raise ValueError(f"{module_name}: unsupported module without in/out features")
if a_tensor.shape[1] != module.in_features or b_tensor.shape[0] != module.out_features:
raise ValueError(f"{module_name}: LoRA shape mismatch")
if module.__class__.__name__ == "AWQW4A16Linear" and hasattr(module, "qweight"):
if not hasattr(module, "_lora_original_forward"):
module._lora_original_forward = module.forward
if not hasattr(module, "_nunchaku_lora_bundle"):
module._nunchaku_lora_bundle = []
module._nunchaku_lora_bundle.append((a_tensor, b_tensor))
def _awq_lora_forward(x, *args, **kwargs):
out = module._lora_original_forward(x, *args, **kwargs)
x_flat = x.reshape(-1, module.in_features)
for local_a, local_b in module._nunchaku_lora_bundle:
local_a = local_a.to(device=out.device, dtype=out.dtype)
local_b = local_b.to(device=out.device, dtype=out.dtype)
lora_term = (x_flat @ local_a.transpose(0, 1)) @ local_b.transpose(0, 1)
try:
out = out + lora_term.reshape(out.shape)
except Exception:
pass
return out
module.forward = _awq_lora_forward
if not hasattr(model, "_lora_slots"):
model._lora_slots = {}
model._lora_slots[module_name] = {"type": "awq_w4a16"}
return
if hasattr(module, "proj_down") and hasattr(module, "proj_up"):
proj_down = unpack_lowrank_weight(module.proj_down.data, down=True)
proj_up = unpack_lowrank_weight(module.proj_up.data, down=False)
base_rank = proj_down.shape[0] if proj_down.shape[1] == module.in_features else proj_down.shape[1]
if proj_down.shape[1] == module.in_features:
updated_down = torch.cat([proj_down, a_tensor], dim=0)
axis_down = 0
else:
updated_down = torch.cat([proj_down, a_tensor.T], dim=1)
axis_down = 1
updated_up = torch.cat([proj_up, b_tensor], dim=1)
module.proj_down.data = pack_lowrank_weight(updated_down, down=True)
module.proj_up.data = pack_lowrank_weight(updated_up, down=False)
module.rank = base_rank + a_tensor.shape[0]
if not hasattr(model, "_lora_slots"):
model._lora_slots = {}
model._lora_slots[module_name] = {
"type": "nunchaku",
"base_rank": base_rank,
"axis_down": axis_down,
}
return
if isinstance(module, nn.Linear):
if not hasattr(model, "_lora_slots"):
model._lora_slots = {}
if module_name not in model._lora_slots:
model._lora_slots[module_name] = {
"type": "linear",
"original_weight": module.weight.detach().cpu().clone(),
}
module.weight.data.add_((b_tensor @ a_tensor).to(dtype=module.weight.dtype, device=module.weight.device))
return
raise ValueError(f"{module_name}: unsupported module type {type(module)}")
def reset_lora_v2(model: nn.Module) -> None:
slots = getattr(model, "_lora_slots", None)
if not slots:
return
for name, info in list(slots.items()):
module = _get_module_by_name(model, name)
if module is None:
continue
module_type = info.get("type", "nunchaku")
if module_type == "nunchaku":
base_rank = info["base_rank"]
proj_down = unpack_lowrank_weight(module.proj_down.data, down=True)
proj_up = unpack_lowrank_weight(module.proj_up.data, down=False)
if info.get("axis_down", 0) == 0:
proj_down = proj_down[:base_rank, :].clone()
else:
proj_down = proj_down[:, :base_rank].clone()
proj_up = proj_up[:, :base_rank].clone()
module.proj_down.data = pack_lowrank_weight(proj_down, down=True)
module.proj_up.data = pack_lowrank_weight(proj_up, down=False)
module.rank = base_rank
elif module_type == "linear" and "original_weight" in info:
module.weight.data.copy_(info["original_weight"].to(device=module.weight.device, dtype=module.weight.dtype))
elif module_type == "awq_w4a16":
if hasattr(module, "_lora_original_forward"):
module.forward = module._lora_original_forward
for attr in ("_lora_original_forward", "_nunchaku_lora_bundle"):
if hasattr(module, attr):
delattr(module, attr)
model._lora_slots = {}
def compose_loras_v2(model: nn.Module, lora_configs: List[Tuple[Union[str, Path, Dict[str, torch.Tensor]], float]], apply_awq_mod: bool = True) -> bool:
del apply_awq_mod # retained for interface compatibility
reset_lora_v2(model)
aggregated_weights: Dict[str, List[Dict[str, object]]] = defaultdict(list)
saw_supported_format = False
unresolved_targets = 0
for index, (path_or_dict, strength) in enumerate(lora_configs):
if abs(strength) < 1e-5:
continue
lora_name = str(path_or_dict) if not isinstance(path_or_dict, dict) else f"lora_{index}"
lora_state_dict = _load_lora_state_dict(path_or_dict)
if not lora_state_dict or not _detect_lora_format(lora_state_dict):
logger.warning("Skipping unsupported Qwen LoRA: %s", lora_name)
continue
saw_supported_format = True
grouped_weights: Dict[str, Dict[str, torch.Tensor]] = defaultdict(dict)
for key, value in lora_state_dict.items():
parsed = _classify_and_map_key(key)
if parsed is None:
continue
group, base_key, component, ab = parsed
if component and ab:
grouped_weights[base_key][f"{component}_{ab}"] = value
else:
grouped_weights[base_key][ab] = value
processed_groups: Dict[str, Tuple[torch.Tensor, torch.Tensor, Optional[torch.Tensor]]] = {}
handled: set[str] = set()
for base_key, weights in grouped_weights.items():
if base_key in handled:
continue
a_tensor = b_tensor = alpha = None
if "qkv" in base_key or "add_qkv_proj" in base_key:
a_tensor, b_tensor, alpha = _fuse_qkv_lora(weights, model=model, base_key=base_key)
elif "w1_A" in weights or "w3_A" in weights:
a_tensor, b_tensor, alpha = _fuse_glu_lora(weights)
elif ".proj_out" in base_key and "single_transformer_blocks" in base_key:
split_map, consumed = _handle_proj_out_split(grouped_weights, base_key, model)
processed_groups.update(split_map)
handled.update(consumed)
continue
else:
a_tensor, b_tensor, alpha = weights.get("A"), weights.get("B"), weights.get("alpha")
if a_tensor is not None and b_tensor is not None:
processed_groups[base_key] = (a_tensor, b_tensor, alpha)
for module_name, (a_tensor, b_tensor, alpha) in processed_groups.items():
aggregated_weights[module_name].append({
"A": a_tensor,
"B": b_tensor,
"alpha": alpha,
"strength": strength,
})
for module_name, weight_list in aggregated_weights.items():
resolved_name, module = _resolve_module_name(model, module_name)
if module is None:
logger.warning("Skipping unresolved Qwen LoRA target: %s", module_name)
unresolved_targets += 1
continue
all_a = []
all_b_scaled = []
for item in weight_list:
a_tensor = item["A"]
b_tensor = item["B"]
alpha = item["alpha"]
strength = float(item["strength"])
rank = a_tensor.shape[0]
scale = strength * ((alpha / rank) if alpha is not None else 1.0)
if module.__class__.__name__ == "AWQW4A16Linear" and hasattr(module, "qweight"):
target_dtype = torch.float16
target_device = module.qweight.device
elif hasattr(module, "proj_down"):
target_dtype = module.proj_down.dtype
target_device = module.proj_down.device
elif hasattr(module, "weight"):
target_dtype = module.weight.dtype
target_device = module.weight.device
else:
target_dtype = torch.float16
target_device = "cuda" if torch.cuda.is_available() else "cpu"
all_a.append(a_tensor.to(dtype=target_dtype, device=target_device))
all_b_scaled.append((b_tensor * scale).to(dtype=target_dtype, device=target_device))
if not all_a:
continue
_apply_lora_to_module(module, torch.cat(all_a, dim=0), torch.cat(all_b_scaled, dim=1), resolved_name, model)
slot_count = len(getattr(model, "_lora_slots", {}) or {})
logger.info(
"Qwen LoRA composition finished: requested=%d supported=%s applied_targets=%d unresolved=%d",
len(lora_configs),
saw_supported_format,
slot_count,
unresolved_targets,
)
return saw_supported_format
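# Scaling sketch for the aggregation above: a hypothetical LoRA with rank 16,
# alpha 8 and user strength 0.75 gets its B factor multiplied by
# 0.75 * (8 / 16) = 0.375 before concatenation (the usual alpha-over-rank
# convention); a LoRA that ships no alpha tensor is applied with the bare
# strength.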
class ComfyQwenImageWrapperLM(nn.Module):
def __init__(self, model: nn.Module, config=None, apply_awq_mod: bool = True):
super().__init__()
self.model = model
self.config = {} if config is None else config
self.dtype = next(model.parameters()).dtype
self.loras: List[Tuple[Union[str, Path, Dict[str, torch.Tensor]], float]] = []
self._applied_loras: Optional[List[Tuple[Union[str, Path, Dict[str, torch.Tensor]], float]]] = None
self.apply_awq_mod = apply_awq_mod
def __getattr__(self, name):
try:
inner = object.__getattribute__(self, "_modules").get("model")
except (AttributeError, KeyError):
inner = None
if inner is None:
raise AttributeError(f"{type(self).__name__!s} has no attribute {name}")
if name == "model":
return inner
return getattr(inner, name)
def process_img(self, *args, **kwargs):
return self.model.process_img(*args, **kwargs)
def _ensure_composed(self):
if self._applied_loras != self.loras or (not self.loras and getattr(self.model, "_lora_slots", None)):
is_supported_format = compose_loras_v2(self.model, self.loras, apply_awq_mod=self.apply_awq_mod)
self._applied_loras = self.loras.copy()
has_slots = bool(getattr(self.model, "_lora_slots", None))
if self.loras and is_supported_format and not has_slots:
logger.warning("Qwen LoRA compose produced 0 target modules. Resetting and retrying once.")
reset_lora_v2(self.model)
compose_loras_v2(self.model, self.loras, apply_awq_mod=self.apply_awq_mod)
has_slots = bool(getattr(self.model, "_lora_slots", None))
logger.info("Qwen LoRA retry result: applied_targets=%d", len(getattr(self.model, "_lora_slots", {}) or {}))
offload_manager = getattr(self.model, "offload_manager", None)
if offload_manager is not None:
offload_settings = {
"num_blocks_on_gpu": getattr(offload_manager, "num_blocks_on_gpu", 1),
"use_pin_memory": getattr(offload_manager, "use_pin_memory", False),
}
logger.info(
"Rebuilding Qwen offload manager after LoRA compose: num_blocks_on_gpu=%s use_pin_memory=%s",
offload_settings["num_blocks_on_gpu"],
offload_settings["use_pin_memory"],
)
self.model.set_offload(False)
self.model.set_offload(True, **offload_settings)
def forward(self, *args, **kwargs):
self._ensure_composed()
return self.model(*args, **kwargs)
def _get_qwen_wrapper_and_transformer(model):
model_wrapper = model.model.diffusion_model
if hasattr(model_wrapper, "model") and hasattr(model_wrapper, "loras"):
transformer = model_wrapper.model
if transformer.__class__.__name__.endswith("NunchakuQwenImageTransformer2DModel"):
return model_wrapper, transformer
if model_wrapper.__class__.__name__.endswith("NunchakuQwenImageTransformer2DModel"):
wrapped_model = ComfyQwenImageWrapperLM(model_wrapper, getattr(model_wrapper, "config", {}))
model.model.diffusion_model = wrapped_model
return wrapped_model, wrapped_model.model
raise TypeError(f"This LoRA loader only works with Nunchaku Qwen Image models, but got {type(model_wrapper).__name__}.")
def nunchaku_load_qwen_loras(model, lora_configs: List[Tuple[str, float]], apply_awq_mod: bool = True):
model_wrapper, transformer = _get_qwen_wrapper_and_transformer(model)
model_wrapper.apply_awq_mod = apply_awq_mod
saved_config = None
if hasattr(model, "model") and hasattr(model.model, "model_config"):
saved_config = model.model.model_config
model.model.model_config = None
model_wrapper.model = None
try:
ret_model = copy.deepcopy(model)
finally:
if saved_config is not None:
model.model.model_config = saved_config
model_wrapper.model = transformer
ret_model_wrapper = ret_model.model.diffusion_model
if saved_config is not None:
ret_model.model.model_config = saved_config
ret_model_wrapper.model = transformer
ret_model_wrapper.apply_awq_mod = apply_awq_mod
ret_model_wrapper.loras = list(getattr(model_wrapper, "loras", []))
for lora_name, lora_strength in lora_configs:
lora_path = lora_name if os.path.isfile(lora_name) else folder_paths.get_full_path("loras", lora_name)
if not lora_path or not os.path.isfile(lora_path):
logger.warning("Skipping Qwen LoRA '%s' because it could not be found", lora_name)
continue
ret_model_wrapper.loras.append((lora_path, lora_strength))
return ret_model
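# Copy-cost sketch: the deepcopy above clones only the lightweight ComfyUI model
# wrapper. The Nunchaku transformer and model_config are detached first and
# reattached to both the original and the copy afterwards, so the returned model
# shares the transformer weights rather than duplicating them, and the appended
# LoRA list is only composed lazily on the next forward pass via _ensure_composed.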

View File

@@ -1,15 +1,38 @@
from __future__ import annotations
from typing import Any
import inspect
from ..services.wildcard_service import (
contains_dynamic_syntax,
get_wildcard_service,
is_trigger_words_input,
)
class _AllContainer:
"""Container that accepts any key for dynamic input validation."""
def __contains__(self, item):
return True
class _PromptOptionalInputs:
"""Lookup that preserves explicit optional inputs and dynamic trigger slots."""
def __getitem__(self, key):
return ("STRING", {"forceInput": True})
def __init__(self, explicit_inputs: dict[str, tuple[str, dict[str, Any]]]) -> None:
self._explicit_inputs = explicit_inputs
def __contains__(self, item: object) -> bool:
if not isinstance(item, str):
return False
return item in self._explicit_inputs or is_trigger_words_input(item)
def __getitem__(self, key: str) -> tuple[str, dict[str, Any]]:
if key in self._explicit_inputs:
return self._explicit_inputs[key]
if is_trigger_words_input(key):
return (
"STRING",
{
"forceInput": True,
"tooltip": "Trigger words to prepend. Connect to add more inputs.",
},
)
raise KeyError(key)
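# Behavior sketch (hypothetical keys), assuming is_trigger_words_input accepts
# names like "trigger_words2": a lookup built from {"seed": ("INT", {...})}
# reports both "seed" and "trigger_words2" as present; "seed" returns its
# explicit spec, "trigger_words2" returns the forceInput STRING spec, and any
# other key raises KeyError, which lets extra trigger-word sockets pass
# ComfyUI's optional-input validation without being declared up front.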
class PromptLM:
@@ -20,12 +43,19 @@ class PromptLM:
DESCRIPTION = (
"Encodes a text prompt using a CLIP model into an embedding that can be used "
"to guide the diffusion model towards generating specific images. "
"Supports dynamic trigger words inputs."
"Supports dynamic trigger words inputs and runtime wildcard expansion."
)
@classmethod
def INPUT_TYPES(cls):
dyn_inputs = {
optional_inputs: dict[str, tuple[str, dict[str, Any]]] = {
"seed": (
"INT",
{
"forceInput": True,
"tooltip": "Optional seed for wildcard generation. Leave unconnected for non-deterministic wildcard expansion.",
},
),
"trigger_words1": (
"STRING",
{
@@ -35,10 +65,9 @@ class PromptLM:
),
}
# Bypass validation for dynamic inputs during graph execution
stack = inspect.stack()
if len(stack) > 2 and stack[2].function == "get_input_info":
dyn_inputs = _AllContainer()
optional_inputs = _PromptOptionalInputs(optional_inputs) # type: ignore[assignment]
return {
"required": {
@@ -46,8 +75,8 @@ class PromptLM:
"AUTOCOMPLETE_TEXT_PROMPT,STRING",
{
"widgetType": "AUTOCOMPLETE_TEXT_PROMPT",
"placeholder": "Enter prompt... /char, /artist for quick tag search",
"tooltip": "The text to be encoded.",
"placeholder": "Enter prompt... /character, /artist, /wildcard for quick search",
"tooltip": "The text to be encoded. Wildcard references inserted with /wildcard are expanded at runtime.",
},
),
"clip": (
@@ -55,7 +84,7 @@ class PromptLM:
{"tooltip": "The CLIP model used for encoding the text."},
),
},
"optional": dyn_inputs,
"optional": optional_inputs,
}
RETURN_TYPES = ("CONDITIONING", "STRING")
@@ -65,18 +94,37 @@ class PromptLM:
)
FUNCTION = "encode"
def encode(self, text: str, clip: Any, **kwargs):
# Collect all trigger words from dynamic inputs
@classmethod
def IS_CHANGED(
cls,
text: str,
clip: Any | None = None,
seed: int | None = None,
**kwargs: Any,
):
del clip, kwargs
if contains_dynamic_syntax(text) and seed is None:
return float("NaN")
return False
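# Caching note: float("NaN") compares unequal to itself, so returning it from
# IS_CHANGED is a common ComfyUI idiom to force re-execution; prompts containing
# wildcard/dynamic syntax with no seed connected are therefore re-expanded on
# every run, while seeded or static prompts keep normal caching.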
def encode(
self,
text: str,
clip: Any,
seed: int | None = None,
**kwargs: Any,
):
expanded_text = get_wildcard_service().expand_text(text, seed=seed)
trigger_words = []
for key, value in kwargs.items():
if key.startswith("trigger_words") and value:
if is_trigger_words_input(key) and value:
trigger_words.append(value)
# Build final prompt
if trigger_words:
prompt = ", ".join(trigger_words + [text])
prompt = ", ".join(trigger_words + [expanded_text])
else:
prompt = text
prompt = expanded_text
from nodes import CLIPTextEncode # type: ignore

View File

@@ -1,17 +1,24 @@
import json
import os
import re
import time
import uuid
from typing import Any, Dict, Optional
import numpy as np
import folder_paths # type: ignore
from ..services.service_registry import ServiceRegistry
from ..metadata_collector.metadata_processor import MetadataProcessor
from ..metadata_collector import get_metadata
from ..utils.constants import CARD_PREVIEW_WIDTH
from ..utils.exif_utils import ExifUtils
from ..utils.utils import calculate_recipe_fingerprint
from PIL import Image, PngImagePlugin
import piexif
import logging
logger = logging.getLogger(__name__)
class SaveImageLM:
NAME = "Save Image (LoraManager)"
CATEGORY = "Lora Manager/utils"
@@ -32,33 +39,65 @@ class SaveImageLM:
return {
"required": {
"images": ("IMAGE",),
"filename_prefix": ("STRING", {
"default": "ComfyUI",
"tooltip": "Base filename for saved images. Supports format patterns like %seed%, %width%, %height%, %model%, etc."
}),
"file_format": (["png", "jpeg", "webp"], {
"tooltip": "Image format to save as. PNG preserves quality, JPEG is smaller, WebP balances size and quality."
}),
"filename_prefix": (
"STRING",
{
"default": "ComfyUI",
"tooltip": "Base filename for saved images. Supports format patterns like %seed%, %width%, %height%, %model%, etc.",
},
),
"file_format": (
["png", "jpeg", "webp"],
{
"tooltip": "Image format to save as. PNG preserves quality, JPEG is smaller, WebP balances size and quality."
},
),
},
"optional": {
"lossless_webp": ("BOOLEAN", {
"default": False,
"tooltip": "When enabled, saves WebP images with lossless compression. Results in larger files but no quality loss."
}),
"quality": ("INT", {
"default": 100,
"min": 1,
"max": 100,
"tooltip": "Compression quality for JPEG and lossy WebP formats (1-100). Higher values mean better quality but larger files."
}),
"embed_workflow": ("BOOLEAN", {
"default": False,
"tooltip": "Embeds the complete workflow data into the image metadata. Only works with PNG and WebP formats."
}),
"add_counter_to_filename": ("BOOLEAN", {
"default": True,
"tooltip": "Adds an incremental counter to filenames to prevent overwriting previous images."
}),
"lossless_webp": (
"BOOLEAN",
{
"default": False,
"tooltip": "When enabled, saves WebP images with lossless compression. Results in larger files but no quality loss.",
},
),
"quality": (
"INT",
{
"default": 100,
"min": 1,
"max": 100,
"tooltip": "Compression quality for JPEG and lossy WebP formats (1-100). Higher values mean better quality but larger files.",
},
),
"embed_workflow": (
"BOOLEAN",
{
"default": False,
"tooltip": "Embeds the complete workflow data into the image metadata. Only works with PNG and WebP formats.",
},
),
"save_with_metadata": (
"BOOLEAN",
{
"default": True,
"tooltip": "When enabled, embeds generation parameters into the saved image metadata. Disable to skip writing generation metadata.",
},
),
"add_counter_to_filename": (
"BOOLEAN",
{
"default": True,
"tooltip": "Adds an incremental counter to filenames to prevent overwriting previous images.",
},
),
"save_as_recipe": (
"BOOLEAN",
{
"default": False,
"tooltip": "Also saves each generated image as a LoRA Manager recipe.",
},
),
},
"hidden": {
"id": "UNIQUE_ID",
@@ -77,9 +116,10 @@ class SaveImageLM:
scanner = ServiceRegistry.get_service_sync("lora_scanner")
# Use the new direct filename lookup method
hash_value = scanner.get_hash_by_filename(lora_name)
if hash_value:
return hash_value
if scanner is not None:
hash_value = scanner.get_hash_by_filename(lora_name)
if hash_value:
return hash_value
return None
@@ -95,9 +135,10 @@ class SaveImageLM:
checkpoint_name = os.path.splitext(checkpoint_name)[0]
# Try direct filename lookup first
hash_value = scanner.get_hash_by_filename(checkpoint_name)
if hash_value:
return hash_value
if scanner is not None:
hash_value = scanner.get_hash_by_filename(checkpoint_name)
if hash_value:
return hash_value
return None
@@ -112,11 +153,11 @@ class SaveImageLM:
param_list.append(f"{label}: {value}")
# Extract the prompt and negative prompt
prompt = metadata_dict.get('prompt', '')
negative_prompt = metadata_dict.get('negative_prompt', '')
prompt = metadata_dict.get("prompt", "")
negative_prompt = metadata_dict.get("negative_prompt", "")
# Extract loras from the prompt if present
loras_text = metadata_dict.get('loras', '')
loras_text = metadata_dict.get("loras", "")
lora_hashes = {}
# If loras are found, add them on a new line after the prompt
@@ -124,7 +165,7 @@ class SaveImageLM:
prompt_with_loras = f"{prompt}\n{loras_text}"
# Extract lora names from the format <lora:name:strength>
lora_matches = re.findall(r'<lora:([^:]+):([^>]+)>', loras_text)
lora_matches = re.findall(r"<lora:([^:]+):([^>]+)>", loras_text)
# Get hash for each lora
for lora_name, strength in lora_matches:
@@ -145,43 +186,43 @@ class SaveImageLM:
params = []
# Add standard parameters in the correct order
if 'steps' in metadata_dict:
add_param_if_not_none(params, "Steps", metadata_dict.get('steps'))
if "steps" in metadata_dict:
add_param_if_not_none(params, "Steps", metadata_dict.get("steps"))
# Combine sampler and scheduler information
sampler_name = None
scheduler_name = None
if 'sampler' in metadata_dict:
sampler = metadata_dict.get('sampler')
if "sampler" in metadata_dict:
sampler = metadata_dict.get("sampler")
# Convert ComfyUI sampler names to user-friendly names
sampler_mapping = {
'euler': 'Euler',
'euler_ancestral': 'Euler a',
'dpm_2': 'DPM2',
'dpm_2_ancestral': 'DPM2 a',
'heun': 'Heun',
'dpm_fast': 'DPM fast',
'dpm_adaptive': 'DPM adaptive',
'lms': 'LMS',
'dpmpp_2s_ancestral': 'DPM++ 2S a',
'dpmpp_sde': 'DPM++ SDE',
'dpmpp_sde_gpu': 'DPM++ SDE',
'dpmpp_2m': 'DPM++ 2M',
'dpmpp_2m_sde': 'DPM++ 2M SDE',
'dpmpp_2m_sde_gpu': 'DPM++ 2M SDE',
'ddim': 'DDIM'
"euler": "Euler",
"euler_ancestral": "Euler a",
"dpm_2": "DPM2",
"dpm_2_ancestral": "DPM2 a",
"heun": "Heun",
"dpm_fast": "DPM fast",
"dpm_adaptive": "DPM adaptive",
"lms": "LMS",
"dpmpp_2s_ancestral": "DPM++ 2S a",
"dpmpp_sde": "DPM++ SDE",
"dpmpp_sde_gpu": "DPM++ SDE",
"dpmpp_2m": "DPM++ 2M",
"dpmpp_2m_sde": "DPM++ 2M SDE",
"dpmpp_2m_sde_gpu": "DPM++ 2M SDE",
"ddim": "DDIM",
}
sampler_name = sampler_mapping.get(sampler, sampler)
if 'scheduler' in metadata_dict:
scheduler = metadata_dict.get('scheduler')
if "scheduler" in metadata_dict:
scheduler = metadata_dict.get("scheduler")
scheduler_mapping = {
'normal': 'Simple',
'karras': 'Karras',
'exponential': 'Exponential',
'sgm_uniform': 'SGM Uniform',
'sgm_quadratic': 'SGM Quadratic'
"normal": "Simple",
"karras": "Karras",
"exponential": "Exponential",
"sgm_uniform": "SGM Uniform",
"sgm_quadratic": "SGM Quadratic",
}
scheduler_name = scheduler_mapping.get(scheduler, scheduler)
@@ -193,25 +234,25 @@ class SaveImageLM:
params.append(f"Sampler: {sampler_name}")
# CFG scale (Use guidance if available, otherwise fall back to cfg_scale or cfg)
if 'guidance' in metadata_dict:
add_param_if_not_none(params, "CFG scale", metadata_dict.get('guidance'))
elif 'cfg_scale' in metadata_dict:
add_param_if_not_none(params, "CFG scale", metadata_dict.get('cfg_scale'))
elif 'cfg' in metadata_dict:
add_param_if_not_none(params, "CFG scale", metadata_dict.get('cfg'))
if "guidance" in metadata_dict:
add_param_if_not_none(params, "CFG scale", metadata_dict.get("guidance"))
elif "cfg_scale" in metadata_dict:
add_param_if_not_none(params, "CFG scale", metadata_dict.get("cfg_scale"))
elif "cfg" in metadata_dict:
add_param_if_not_none(params, "CFG scale", metadata_dict.get("cfg"))
# Seed
if 'seed' in metadata_dict:
add_param_if_not_none(params, "Seed", metadata_dict.get('seed'))
if "seed" in metadata_dict:
add_param_if_not_none(params, "Seed", metadata_dict.get("seed"))
# Size
if 'size' in metadata_dict:
add_param_if_not_none(params, "Size", metadata_dict.get('size'))
if "size" in metadata_dict:
add_param_if_not_none(params, "Size", metadata_dict.get("size"))
# Model info
if 'checkpoint' in metadata_dict:
if "checkpoint" in metadata_dict:
# Ensure checkpoint is a string before processing
checkpoint = metadata_dict.get('checkpoint')
checkpoint = metadata_dict.get("checkpoint")
if checkpoint is not None:
# Get model hash
model_hash = self.get_checkpoint_hash(checkpoint)
@@ -223,7 +264,9 @@ class SaveImageLM:
# Add model hash if available
if model_hash:
params.append(f"Model hash: {model_hash[:10]}, Model: {checkpoint_name}")
params.append(
f"Model hash: {model_hash[:10]}, Model: {checkpoint_name}"
)
else:
params.append(f"Model: {checkpoint_name}")
@@ -234,7 +277,7 @@ class SaveImageLM:
lora_hash_parts.append(f"{lora_name}: {hash_value[:10]}")
if lora_hash_parts:
params.append(f"Lora hashes: \"{', '.join(lora_hash_parts)}\"")
params.append(f'Lora hashes: "{", ".join(lora_hash_parts)}"')
# Combine all parameters with commas
metadata_parts.append(", ".join(params))
@@ -254,30 +297,30 @@ class SaveImageLM:
parts = segment.replace("%", "").split(":")
key = parts[0]
if key == "seed" and 'seed' in metadata_dict:
filename = filename.replace(segment, str(metadata_dict.get('seed', '')))
elif key == "width" and 'size' in metadata_dict:
size = metadata_dict.get('size', 'x')
w = size.split('x')[0] if isinstance(size, str) else size[0]
if key == "seed" and "seed" in metadata_dict:
filename = filename.replace(segment, str(metadata_dict.get("seed", "")))
elif key == "width" and "size" in metadata_dict:
size = metadata_dict.get("size", "x")
w = size.split("x")[0] if isinstance(size, str) else size[0]
filename = filename.replace(segment, str(w))
elif key == "height" and 'size' in metadata_dict:
size = metadata_dict.get('size', 'x')
h = size.split('x')[1] if isinstance(size, str) else size[1]
elif key == "height" and "size" in metadata_dict:
size = metadata_dict.get("size", "x")
h = size.split("x")[1] if isinstance(size, str) else size[1]
filename = filename.replace(segment, str(h))
elif key == "pprompt" and 'prompt' in metadata_dict:
prompt = metadata_dict.get('prompt', '').replace("\n", " ")
elif key == "pprompt" and "prompt" in metadata_dict:
prompt = metadata_dict.get("prompt", "").replace("\n", " ")
if len(parts) >= 2:
length = int(parts[1])
prompt = prompt[:length]
filename = filename.replace(segment, prompt.strip())
elif key == "nprompt" and 'negative_prompt' in metadata_dict:
prompt = metadata_dict.get('negative_prompt', '').replace("\n", " ")
elif key == "nprompt" and "negative_prompt" in metadata_dict:
prompt = metadata_dict.get("negative_prompt", "").replace("\n", " ")
if len(parts) >= 2:
length = int(parts[1])
prompt = prompt[:length]
filename = filename.replace(segment, prompt.strip())
elif key == "model":
model_value = metadata_dict.get('checkpoint')
model_value = metadata_dict.get("checkpoint")
if isinstance(model_value, (bytes, os.PathLike)):
model_value = str(model_value)
@@ -291,6 +334,7 @@ class SaveImageLM:
filename = filename.replace(segment, model)
elif key == "date":
from datetime import datetime
now = datetime.now()
date_table = {
"yyyy": f"{now.year:04d}",
@@ -314,8 +358,218 @@ class SaveImageLM:
return filename
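# Pattern sketch, assuming the enclosing loop extracts each %...% token from the
# prefix, and hypothetical metadata {"seed": 1234, "size": "832x1216",
# "prompt": "a quiet harbour at dawn"}: a prefix such as
#   "ComfyUI_%seed%_%width%x%height%_%pprompt:12%"
# expands to
#   "ComfyUI_1234_832x1216_a quiet harb"
# i.e. %pprompt:n% truncates the prompt to n characters and %width%/%height%
# are split out of the "size" string.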
def save_images(self, images, filename_prefix, file_format, id, prompt=None, extra_pnginfo=None,
lossless_webp=True, quality=100, embed_workflow=False, add_counter_to_filename=True):
@staticmethod
def _get_cached_model_by_name(scanner, name):
cache = getattr(scanner, "_cache", None)
if cache is None or not name:
return None
candidates = [
name,
os.path.basename(name),
os.path.splitext(os.path.basename(name))[0],
]
for model in getattr(cache, "raw_data", []):
file_name = model.get("file_name")
if file_name in candidates:
return model
return None
def _build_recipe_loras(self, recipe_scanner, lora_stack):
lora_matches = re.findall(r"<lora:([^:]+):([^>]+)>", lora_stack or "")
lora_scanner = getattr(recipe_scanner, "_lora_scanner", None)
loras_data = []
base_model_counts = {}
for name, strength in lora_matches:
lora_info = self._get_cached_model_by_name(lora_scanner, name)
civitai = (lora_info or {}).get("civitai") or {}
civitai_model = civitai.get("model") or {}
try:
parsed_strength = float(strength)
except (TypeError, ValueError):
parsed_strength = 1.0
loras_data.append(
{
"file_name": name,
"strength": parsed_strength,
"hash": ((lora_info or {}).get("sha256") or "").lower(),
"modelVersionId": civitai.get("id", 0),
"modelName": civitai_model.get("name", name) if lora_info else "",
"modelVersionName": civitai.get("name", "") if lora_info else "",
"isDeleted": False,
"exclude": False,
}
)
base_model = (lora_info or {}).get("base_model")
if base_model:
base_model_counts[base_model] = base_model_counts.get(base_model, 0) + 1
return lora_matches, loras_data, base_model_counts
def _build_recipe_checkpoint(self, recipe_scanner, checkpoint_raw):
if not isinstance(checkpoint_raw, str) or not checkpoint_raw.strip():
return None
checkpoint_name = checkpoint_raw.strip()
file_name = os.path.splitext(os.path.basename(checkpoint_name))[0]
checkpoint_scanner = getattr(recipe_scanner, "_checkpoint_scanner", None)
checkpoint_info = self._get_cached_model_by_name(
checkpoint_scanner, checkpoint_name
)
if not checkpoint_info:
return {
"type": "checkpoint",
"name": checkpoint_name,
"file_name": file_name,
"hash": self.get_checkpoint_hash(checkpoint_name) or "",
}
civitai = checkpoint_info.get("civitai") or {}
civitai_model = civitai.get("model") or {}
file_path = checkpoint_info.get("file_path") or checkpoint_info.get("path") or ""
cached_file_name = (
checkpoint_info.get("file_name")
or (os.path.splitext(os.path.basename(file_path))[0] if file_path else "")
or file_name
)
return {
"type": "checkpoint",
"modelId": civitai_model.get("id", 0),
"modelVersionId": civitai.get("id", 0),
"name": civitai_model.get("name")
or checkpoint_info.get("model_name")
or checkpoint_name,
"version": civitai.get("name", ""),
"hash": (
checkpoint_info.get("sha256") or checkpoint_info.get("hash") or ""
).lower(),
"file_name": cached_file_name,
"modelName": civitai_model.get("name", ""),
"modelVersionName": civitai.get("name", ""),
"baseModel": checkpoint_info.get("base_model")
or civitai.get("baseModel", ""),
}
@staticmethod
def _derive_recipe_name(lora_matches):
recipe_name_parts = [
f"{name.strip()}-{float(strength):.2f}" for name, strength in lora_matches[:3]
]
return "_".join(recipe_name_parts) or "recipe"
@staticmethod
def _sync_recipe_cache(recipe_scanner, recipe_data, json_path):
cache = getattr(recipe_scanner, "_cache", None)
if cache is not None:
cache.raw_data.append(recipe_data)
cache.sorted_by_name = sorted(
cache.raw_data, key=lambda item: item.get("title", "").lower()
)
cache.sorted_by_date = sorted(
cache.raw_data,
key=lambda item: (
item.get("modified", item.get("created_date", 0)),
item.get("file_path", ""),
),
reverse=True,
)
recipe_scanner._update_folder_metadata(cache)
recipe_scanner._update_fts_index_for_recipe(recipe_data, "add")
recipe_id = str(recipe_data.get("id", ""))
if recipe_id:
recipe_scanner._json_path_map[recipe_id] = json_path
persistent_cache = getattr(recipe_scanner, "_persistent_cache", None)
if persistent_cache:
persistent_cache.update_recipe(recipe_data, json_path)
def _save_image_as_recipe(self, file_path, metadata_dict):
if not metadata_dict:
raise ValueError("No generation metadata found")
recipe_scanner = ServiceRegistry.get_service_sync("recipe_scanner")
if recipe_scanner is None:
raise RuntimeError("Recipe scanner unavailable")
recipes_dir = recipe_scanner.recipes_dir
if not recipes_dir:
raise RuntimeError("Recipes directory unavailable")
os.makedirs(recipes_dir, exist_ok=True)
recipe_id = str(uuid.uuid4())
optimized_image, extension = ExifUtils.optimize_image(
image_data=file_path,
target_width=CARD_PREVIEW_WIDTH,
format="webp",
quality=85,
preserve_metadata=True,
)
image_path = os.path.normpath(os.path.join(recipes_dir, f"{recipe_id}{extension}"))
with open(image_path, "wb") as file_obj:
file_obj.write(optimized_image)
lora_stack = metadata_dict.get("loras", "")
lora_matches, loras_data, base_model_counts = self._build_recipe_loras(
recipe_scanner, lora_stack
)
checkpoint_entry = self._build_recipe_checkpoint(
recipe_scanner, metadata_dict.get("checkpoint")
)
most_common_base_model = (
max(base_model_counts.items(), key=lambda item: item[1])[0]
if base_model_counts
else ""
)
current_time = time.time()
recipe_data = {
"id": recipe_id,
"file_path": image_path,
"title": self._derive_recipe_name(lora_matches),
"modified": current_time,
"created_date": current_time,
"base_model": most_common_base_model
or (checkpoint_entry or {}).get("baseModel", ""),
"loras": loras_data,
"gen_params": {
key: value
for key, value in metadata_dict.items()
if key not in ["checkpoint", "loras"]
},
"loras_stack": lora_stack,
"fingerprint": calculate_recipe_fingerprint(loras_data),
}
if checkpoint_entry:
recipe_data["checkpoint"] = checkpoint_entry
json_path = os.path.normpath(
os.path.join(recipes_dir, f"{recipe_id}.recipe.json")
)
with open(json_path, "w", encoding="utf-8") as file_obj:
json.dump(recipe_data, file_obj, indent=4, ensure_ascii=False)
ExifUtils.append_recipe_metadata(image_path, recipe_data)
self._sync_recipe_cache(recipe_scanner, recipe_data, json_path)
def save_images(
self,
images,
filename_prefix,
file_format,
id,
prompt=None,
extra_pnginfo=None,
lossless_webp=True,
quality=100,
embed_workflow=False,
save_with_metadata=True,
add_counter_to_filename=True,
save_as_recipe=False,
):
"""Save images with metadata"""
results = []
@@ -329,8 +583,10 @@ class SaveImageLM:
filename_prefix = self.format_filename(filename_prefix, metadata_dict)
# Get initial save path info once for the batch
full_output_folder, filename, counter, subfolder, processed_prefix = folder_paths.get_save_image_path(
filename_prefix, self.output_dir, images[0].shape[1], images[0].shape[0]
full_output_folder, filename, counter, subfolder, processed_prefix = (
folder_paths.get_save_image_path(
filename_prefix, self.output_dir, images[0].shape[1], images[0].shape[0]
)
)
# Create directory if it doesn't exist
@@ -340,7 +596,7 @@ class SaveImageLM:
# Process each image with incrementing counter
for i, image in enumerate(images):
# Convert the tensor image to numpy array
img = 255. * image.cpu().numpy()
img = 255.0 * image.cpu().numpy()
img = Image.fromarray(np.clip(img, 0, 255).astype(np.uint8))
# Generate filename with counter if needed
@@ -351,6 +607,9 @@ class SaveImageLM:
base_filename += f"_{current_counter:05}_"
# Set file extension and prepare saving parameters
file: str
save_kwargs: Dict[str, Any]
pnginfo: Optional[PngImagePlugin.PngInfo] = None
if file_format == "png":
file = base_filename + ".png"
file_extension = ".png"
@@ -365,7 +624,13 @@ class SaveImageLM:
file = base_filename + ".webp"
file_extension = ".webp"
# Add optimization param to control performance
save_kwargs = {"quality": quality, "lossless": lossless_webp, "method": 0}
save_kwargs = {
"quality": quality,
"lossless": lossless_webp,
"method": 0,
}
else:
raise ValueError(f"Unsupported file format: {file_format}")
# Full save path
file_path = os.path.join(full_output_folder, file)
@@ -373,7 +638,8 @@ class SaveImageLM:
# Save the image with metadata
try:
if file_format == "png":
if metadata:
assert pnginfo is not None
if save_with_metadata and metadata:
pnginfo.add_text("parameters", metadata)
if embed_workflow and extra_pnginfo is not None:
workflow_json = json.dumps(extra_pnginfo["workflow"])
@@ -382,9 +648,14 @@ class SaveImageLM:
img.save(file_path, format="PNG", **save_kwargs)
elif file_format == "jpeg":
# For JPEG, use piexif
if metadata:
if save_with_metadata and metadata:
try:
exif_dict = {'Exif': {piexif.ExifIFD.UserComment: b'UNICODE\0' + metadata.encode('utf-16be')}}
exif_dict = {
"Exif": {
piexif.ExifIFD.UserComment: b"UNICODE\0"
+ metadata.encode("utf-16be")
}
}
exif_bytes = piexif.dump(exif_dict)
save_kwargs["exif"] = exif_bytes
except Exception as e:
@@ -395,13 +666,19 @@ class SaveImageLM:
# For WebP, use piexif for metadata
exif_dict = {}
if metadata:
exif_dict['Exif'] = {piexif.ExifIFD.UserComment: b'UNICODE\0' + metadata.encode('utf-16be')}
if save_with_metadata and metadata:
exif_dict["Exif"] = {
piexif.ExifIFD.UserComment: b"UNICODE\0"
+ metadata.encode("utf-16be")
}
# Add workflow if needed
if embed_workflow and extra_pnginfo is not None:
workflow_json = json.dumps(extra_pnginfo["workflow"])
exif_dict['0th'] = {piexif.ImageIFD.ImageDescription: "Workflow:" + workflow_json}
exif_dict["0th"] = {
piexif.ImageIFD.ImageDescription: "Workflow:"
+ workflow_json
}
exif_bytes = piexif.dump(exif_dict)
save_kwargs["exif"] = exif_bytes
@@ -410,19 +687,38 @@ class SaveImageLM:
img.save(file_path, format="WEBP", **save_kwargs)
results.append({
"filename": file,
"subfolder": subfolder,
"type": self.type
})
if save_as_recipe:
try:
self._save_image_as_recipe(file_path, metadata_dict)
except Exception as e:
logger.warning(
"Failed to save image as recipe: %s", e, exc_info=True
)
results.append(
{"filename": file, "subfolder": subfolder, "type": self.type}
)
except Exception as e:
logger.error(f"Error saving image: {e}")
return results
def process_image(self, images, id, filename_prefix="ComfyUI", file_format="png", prompt=None, extra_pnginfo=None,
lossless_webp=True, quality=100, embed_workflow=False, add_counter_to_filename=True):
def process_image(
self,
images,
id,
filename_prefix="ComfyUI",
file_format="png",
prompt=None,
extra_pnginfo=None,
lossless_webp=True,
quality=100,
embed_workflow=False,
save_with_metadata=True,
add_counter_to_filename=True,
save_as_recipe=False,
):
"""Process and save image with metadata"""
# Make sure the output directory exists
os.makedirs(self.output_dir, exist_ok=True)
@@ -448,7 +744,12 @@ class SaveImageLM:
lossless_webp,
quality,
embed_workflow,
add_counter_to_filename
save_with_metadata,
add_counter_to_filename,
save_as_recipe,
)
return (images,)
return {
"result": (images,),
"ui": {"images": results},
}
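As a side note on the save path above: the node follows the common A1111-style convention of embedding the generation-parameter string as a PNG "parameters" text chunk and, for JPEG/WebP, as an EXIF UserComment. A minimal standalone sketch of that pattern (the helper name and the "workflow" key are illustrative, not the node's actual code):

import json
import piexif
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_parameters(img: Image.Image, path: str, parameters: str, workflow: dict | None = None):
    """Embed an A1111-style 'parameters' string (and optionally a workflow) into the saved file."""
    if path.lower().endswith(".png"):
        info = PngInfo()
        info.add_text("parameters", parameters)
        if workflow is not None:
            info.add_text("workflow", json.dumps(workflow))
        img.save(path, format="PNG", pnginfo=info)
    else:
        # JPEG/WebP: UserComment needs an encoding prefix; UNICODE + UTF-16 BE matches what the node writes
        exif_dict = {"Exif": {piexif.ExifIFD.UserComment: b"UNICODE\0" + parameters.encode("utf-16be")}}
        img.save(path, exif=piexif.dump(exif_dict), quality=95)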

View File

@@ -1,10 +1,15 @@
from __future__ import annotations
from ..services.wildcard_service import contains_dynamic_syntax, get_wildcard_service
class TextLM:
"""A simple text node with autocomplete support."""
NAME = "Text (LoraManager)"
CATEGORY = "Lora Manager/utils"
DESCRIPTION = (
"A simple text input node with autocomplete support for tags and styles."
"A simple text input node with autocomplete support for tags, styles, and wildcard expansion."
)
@classmethod
@@ -15,8 +20,17 @@ class TextLM:
"AUTOCOMPLETE_TEXT_PROMPT,STRING",
{
"widgetType": "AUTOCOMPLETE_TEXT_PROMPT",
"placeholder": "Enter text... /char, /artist for quick tag search",
"tooltip": "The text output.",
"placeholder": "Enter text... /character, /artist, /wildcard for quick search",
"tooltip": "The text output. Wildcard references inserted with /wildcard are expanded at runtime.",
},
),
},
"optional": {
"seed": (
"INT",
{
"forceInput": True,
"tooltip": "Optional seed for wildcard generation. Leave unconnected for non-deterministic wildcard expansion.",
},
),
},
@@ -24,10 +38,14 @@ class TextLM:
RETURN_TYPES = ("STRING",)
RETURN_NAMES = ("STRING",)
OUTPUT_TOOLTIPS = (
"The text output.",
)
OUTPUT_TOOLTIPS = ("The text output.",)
FUNCTION = "process"
def process(self, text: str):
return (text,)
@classmethod
def IS_CHANGED(cls, text: str, seed: int | None = None):
if contains_dynamic_syntax(text) and seed is None:
return float("NaN")
return False
def process(self, text: str, seed: int | None = None):
return (get_wildcard_service().expand_text(text, seed=seed),)
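Note on IS_CHANGED above: returning float("NaN") when the text contains dynamic syntax and no seed is connected forces ComfyUI to treat the node as changed on every run (NaN never compares equal to itself), so unseeded wildcards re-roll each execution while seeded ones stay cached. The wildcard service itself is not part of this compare; a rough sketch of what a seeded expansion could look like, purely as an assumed shape:

import random
import re

def expand_text(text: str, seed: int | None = None, wildcards: dict[str, list[str]] | None = None) -> str:
    """Resolve {a|b|c} alternations and __name__ wildcard references, deterministically when a seed is given."""
    rng = random.Random(seed)  # seed=None falls back to system entropy
    wildcards = wildcards or {}

    def pick_alternation(match: re.Match) -> str:
        return rng.choice(match.group(1).split("|"))

    def pick_wildcard(match: re.Match) -> str:
        options = wildcards.get(match.group(1))
        return rng.choice(options) if options else match.group(0)

    text = re.sub(r"\{([^{}]+)\}", pick_alternation, text)
    return re.sub(r"__([\w\-/]+)__", pick_wildcard, text)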

View File

@@ -76,6 +76,9 @@ class TriggerWordToggleLM:
# Filter out empty strings and return as set
return set(word for word in words if word)
def _group_has_child_items(self, item):
return isinstance(item, dict) and isinstance(item.get("items"), list)
def process_trigger_words(
self,
id,
@@ -112,7 +115,11 @@ class TriggerWordToggleLM:
if isinstance(trigger_data, list):
if group_mode:
if allow_strength_adjustment:
if any(self._group_has_child_items(item) for item in trigger_data):
filtered_groups = self._process_group_items(
trigger_data, allow_strength_adjustment
)
elif allow_strength_adjustment:
parsed_items = [
self._parse_trigger_item(
item, allow_strength_adjustment
@@ -174,6 +181,41 @@ class TriggerWordToggleLM:
return (filtered_triggers,)
def _process_group_items(self, trigger_data, allow_strength_adjustment):
filtered_groups = []
for item in trigger_data:
group = self._parse_trigger_item(item, allow_strength_adjustment)
if not group["text"] or not group["active"]:
continue
raw_items = item.get("items") if isinstance(item, dict) else None
if isinstance(raw_items, list):
active_items = []
for raw_item in raw_items:
child = self._parse_trigger_item(
raw_item, allow_strength_adjustment=False
)
if child["text"] and child["active"]:
active_items.append(child["text"])
if not active_items:
continue
group_text = ", ".join(active_items)
else:
group_text = group["text"]
filtered_groups.append(
self._format_word_output(
group_text,
group["strength"],
allow_strength_adjustment,
)
)
return filtered_groups
def _parse_trigger_item(self, item, allow_strength_adjustment):
text = (item.get("text") or "").strip()
active = bool(item.get("active", False))

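On the group handling above: _process_group_items only falls back to the group's own text when a group carries no child items; otherwise it joins the texts of the active children. A hypothetical payload to illustrate (strength formatting via _format_word_output is outside this hunk, so only the texts are shown):

trigger_data = [
    {"text": "style group", "active": True, "items": [
        {"text": "oil painting", "active": True},
        {"text": "sketch", "active": False},
    ]},
    {"text": "fallback words", "active": True},                 # no "items" key -> group text is used as-is
    {"text": "disabled group", "active": False, "items": []},   # inactive -> dropped entirely
]
# filtered_groups (before strength formatting): ["oil painting", "fallback words"]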
py/nodes/unet_loader.py Normal file
View File

@@ -0,0 +1,205 @@
import logging
import os
from typing import List, Tuple
import comfy.sd # type: ignore
from ..utils.utils import get_checkpoint_info_absolute, _format_model_name_for_comfyui
logger = logging.getLogger(__name__)
class UNETLoaderLM:
"""UNET Loader with support for extra folder paths
Loads diffusion models/UNets from both standard ComfyUI folders and LoRA Manager's
extra folder paths, providing a unified interface for UNET loading.
Supports both regular diffusion models and GGUF format models.
"""
NAME = "Unet Loader (LoraManager)"
CATEGORY = "Lora Manager/loaders"
@classmethod
def INPUT_TYPES(s):
# Get list of unet names from scanner (includes extra folder paths)
unet_names = s._get_unet_names()
return {
"required": {
"unet_name": (
unet_names,
{"tooltip": "The name of the diffusion model to load."},
),
"weight_dtype": (
["default", "fp8_e4m3fn", "fp8_e4m3fn_fast", "fp8_e5m2"],
{"tooltip": "The dtype to use for the model weights."},
),
}
}
RETURN_TYPES = ("MODEL",)
RETURN_NAMES = ("MODEL",)
OUTPUT_TOOLTIPS = ("The model used for denoising latents.",)
FUNCTION = "load_unet"
@classmethod
def _get_unet_names(cls) -> List[str]:
"""Get list of diffusion model names from scanner cache in ComfyUI format (relative path with extension)"""
try:
from ..services.service_registry import ServiceRegistry
import asyncio
async def _get_names():
scanner = await ServiceRegistry.get_checkpoint_scanner()
cache = await scanner.get_cached_data()
# Get all model roots for calculating relative paths
model_roots = scanner.get_model_roots()
# Filter only diffusion_model type and format names
names = []
for item in cache.raw_data:
if item.get("sub_type") == "diffusion_model":
file_path = item.get("file_path", "")
if file_path:
# Format using relative path with OS-native separator
formatted_name = _format_model_name_for_comfyui(
file_path, model_roots
)
if formatted_name:
names.append(formatted_name)
return sorted(names)
try:
loop = asyncio.get_running_loop()
import concurrent.futures
def run_in_thread():
new_loop = asyncio.new_event_loop()
asyncio.set_event_loop(new_loop)
try:
return new_loop.run_until_complete(_get_names())
finally:
new_loop.close()
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(run_in_thread)
return future.result()
except RuntimeError:
return asyncio.run(_get_names())
except Exception as e:
logger.error(f"Error getting unet names: {e}")
return []
def load_unet(self, unet_name: str, weight_dtype: str) -> Tuple:
"""Load a diffusion model by name, supporting extra folder paths
Args:
unet_name: The name of the diffusion model to load (relative path with extension)
weight_dtype: The dtype to use for model weights
Returns:
Tuple of (MODEL,)
"""
import torch
# Get absolute path from cache using ComfyUI-style name
unet_path, metadata = get_checkpoint_info_absolute(unet_name)
if metadata is None:
raise FileNotFoundError(
f"Diffusion model '{unet_name}' not found in LoRA Manager cache. "
"Make sure the model is indexed and try again."
)
# Check if it's a GGUF model
if unet_path.endswith(".gguf"):
return self._load_gguf_unet(unet_path, unet_name, weight_dtype)
# Load regular diffusion model using ComfyUI's API
logger.info(f"Loading diffusion model from: {unet_path}")
# Build model options based on weight_dtype
model_options = {}
if weight_dtype == "fp8_e4m3fn":
model_options["dtype"] = torch.float8_e4m3fn
elif weight_dtype == "fp8_e4m3fn_fast":
model_options["dtype"] = torch.float8_e4m3fn
model_options["fp8_optimizations"] = True
elif weight_dtype == "fp8_e5m2":
model_options["dtype"] = torch.float8_e5m2
model = comfy.sd.load_diffusion_model(unet_path, model_options=model_options)
return (model,)
def _load_gguf_unet(
self, unet_path: str, unet_name: str, weight_dtype: str
) -> Tuple:
"""Load a GGUF format diffusion model
Args:
unet_path: Absolute path to the GGUF file
unet_name: Name of the model for error messages
weight_dtype: The dtype to use for model weights
Returns:
Tuple of (MODEL,)
"""
import torch
from .gguf_import_helper import get_gguf_modules
# Get ComfyUI-GGUF modules using helper (handles various import scenarios)
try:
loader_module, ops_module, nodes_module = get_gguf_modules()
gguf_sd_loader = getattr(loader_module, "gguf_sd_loader")
GGMLOps = getattr(ops_module, "GGMLOps")
GGUFModelPatcher = getattr(nodes_module, "GGUFModelPatcher")
except RuntimeError as e:
raise RuntimeError(f"Cannot load GGUF model '{unet_name}'. {str(e)}")
logger.info(f"Loading GGUF diffusion model from: {unet_path}")
try:
# Load GGUF state dict
sd, extra = gguf_sd_loader(unet_path)
# Prepare kwargs for metadata if supported
kwargs = {}
import inspect
valid_params = inspect.signature(
comfy.sd.load_diffusion_model_state_dict
).parameters
if "metadata" in valid_params:
kwargs["metadata"] = extra.get("metadata", {})
# Setup custom operations with GGUF support
ops = GGMLOps()
# Handle weight_dtype for GGUF models
if weight_dtype in ("default", None):
ops.Linear.dequant_dtype = None
elif weight_dtype in ["target"]:
ops.Linear.dequant_dtype = weight_dtype
else:
ops.Linear.dequant_dtype = getattr(torch, weight_dtype, None)
# Load the model
model = comfy.sd.load_diffusion_model_state_dict(
sd, model_options={"custom_operations": ops}, **kwargs
)
if model is None:
raise RuntimeError(
f"Could not detect model type for GGUF diffusion model: {unet_path}"
)
# Wrap with GGUFModelPatcher
model = GGUFModelPatcher.clone(model)
return (model,)
except Exception as e:
logger.error(f"Error loading GGUF diffusion model '{unet_name}': {e}")
raise RuntimeError(
f"Failed to load GGUF diffusion model '{unet_name}': {str(e)}"
)
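The _get_unet_names classmethod above has to call async scanner code from ComfyUI's synchronous INPUT_TYPES. The pattern it uses (spin up a throwaway event loop on a worker thread when a loop is already running, otherwise fall back to asyncio.run) can be expressed as a small reusable helper; a sketch for reference, not part of the diff:

import asyncio
import concurrent.futures
from typing import Awaitable, Callable, TypeVar

T = TypeVar("T")

def run_coroutine_blocking(factory: Callable[[], Awaitable[T]]) -> T:
    """Run an async callable to completion from synchronous code, with or without a running loop."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop running in this thread: asyncio.run is safe
        return asyncio.run(factory())

    def _run_in_fresh_loop() -> T:
        loop = asyncio.new_event_loop()
        try:
            return loop.run_until_complete(factory())
        finally:
            loop.close()

    # A loop is already running (e.g. inside the server), so block on a worker thread instead
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(_run_in_fresh_loop).result()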

View File

@@ -1,33 +1,35 @@
class AnyType(str):
"""A special class that is always equal in not equal comparisons. Credit to pythongosssss"""
"""A special class that is always equal in not equal comparisons. Credit to pythongosssss"""
def __ne__(self, __value: object) -> bool:
return False
def __ne__(self, __value: object) -> bool:
return False
# Credit to Regis Gaughan, III (rgthree)
class FlexibleOptionalInputType(dict):
"""A special class to make flexible nodes that pass data to our python handlers.
"""A special class to make flexible nodes that pass data to our python handlers.
Enables both flexible/dynamic input types (like for Any Switch) or a dynamic number of inputs
(like for Any Switch, Context Switch, Context Merge, Power Lora Loader, etc).
Enables both flexible/dynamic input types (like for Any Switch) or a dynamic number of inputs
(like for Any Switch, Context Switch, Context Merge, Power Lora Loader, etc).
Note, for ComfyUI, all that's needed is the `__contains__` override below, which tells ComfyUI
that our node will handle the input, regardless of what it is.
Note, for ComfyUI, all that's needed is the `__contains__` override below, which tells ComfyUI
that our node will handle the input, regardless of what it is.
However, with https://github.com/comfyanonymous/ComfyUI/pull/2666 a large change would occur
requiring more details on the input itself. There, we need to return a list/tuple where the first
item is the type. This can be a real type, or use the AnyType for additional flexibility.
However, with https://github.com/comfyanonymous/ComfyUI/pull/2666 a large change would occur
requiring more details on the input itself. There, we need to return a list/tuple where the first
item is the type. This can be a real type, or use the AnyType for additional flexibility.
This should be forwards compatible unless more changes occur in the PR.
"""
def __init__(self, type):
self.type = type
This should be forwards compatible unless more changes occur in the PR.
"""
def __getitem__(self, key):
return (self.type, )
def __init__(self, type):
self.type = type
def __contains__(self, key):
return True
def __getitem__(self, key):
return (self.type,)
def __contains__(self, key):
return True
any_type = AnyType("*")
@@ -37,25 +39,27 @@ import os
import logging
import copy
import sys
import folder_paths
import folder_paths # type: ignore
logger = logging.getLogger(__name__)
def extract_lora_name(lora_path):
"""Extract the lora name from a lora path (e.g., 'IL\\aorunIllstrious.safetensors' -> 'aorunIllstrious')"""
# Get the basename without extension
basename = os.path.basename(lora_path)
return os.path.splitext(basename)[0]
def get_loras_list(kwargs):
"""Helper to extract loras list from either old or new kwargs format"""
if 'loras' not in kwargs:
if "loras" not in kwargs:
return []
loras_data = kwargs['loras']
loras_data = kwargs["loras"]
# Handle new format: {'loras': {'__value__': [...]}}
if isinstance(loras_data, dict) and '__value__' in loras_data:
return loras_data['__value__']
if isinstance(loras_data, dict) and "__value__" in loras_data:
return loras_data["__value__"]
# Handle old format: {'loras': [...]}
elif isinstance(loras_data, list):
return loras_data
@@ -64,23 +68,25 @@ def get_loras_list(kwargs):
logger.warning(f"Unexpected loras format: {type(loras_data)}")
return []
def load_state_dict_in_safetensors(path, device="cpu", filter_prefix=""):
"""Simplified version of load_state_dict_in_safetensors that just loads from a local path"""
import safetensors.torch
state_dict = {}
with safetensors.torch.safe_open(path, framework="pt", device=device) as f:
with safetensors.torch.safe_open(path, framework="pt", device=device) as f: # type: ignore[attr-defined]
for k in f.keys():
if filter_prefix and not k.startswith(filter_prefix):
continue
state_dict[k.removeprefix(filter_prefix)] = f.get_tensor(k)
return state_dict
def to_diffusers(input_lora):
"""Simplified version of to_diffusers for Flux LoRA conversion"""
import torch
from diffusers.utils.state_dict_utils import convert_unet_state_dict_to_peft
from diffusers.loaders import FluxLoraLoaderMixin
from diffusers.loaders import FluxLoraLoaderMixin # type: ignore[attr-defined]
if isinstance(input_lora, str):
tensors = load_state_dict_in_safetensors(input_lora, device="cpu")
@@ -97,10 +103,15 @@ def to_diffusers(input_lora):
return new_tensors
def nunchaku_load_lora(model, lora_name, lora_strength):
"""Load a Flux LoRA for Nunchaku model"""
# Get full path to the LoRA file. Allow both direct paths and registered LoRA names.
lora_path = lora_name if os.path.isfile(lora_name) else folder_paths.get_full_path("loras", lora_name)
lora_path = (
lora_name
if os.path.isfile(lora_name)
else folder_paths.get_full_path("loras", lora_name)
)
if not lora_path or not os.path.isfile(lora_path):
logger.warning("Skipping LoRA '%s' because it could not be found", lora_name)
return model
@@ -118,7 +129,9 @@ def nunchaku_load_lora(model, lora_name, lora_strength):
ret_model_wrapper.loras = [*model_wrapper.loras, (lora_path, lora_strength)]
else:
# Fallback to legacy logic
logger.warning("Please upgrade ComfyUI-nunchaku to 1.1.0 or above for better LoRA support. Falling back to legacy loading logic.")
logger.warning(
"Please upgrade ComfyUI-nunchaku to 1.1.0 or above for better LoRA support. Falling back to legacy loading logic."
)
transformer = model_wrapper.model
# Save the transformer temporarily
@@ -145,3 +158,24 @@ def nunchaku_load_lora(model, lora_name, lora_strength):
ret_model.model.model_config.unet_config["in_channels"] = new_in_channels
return ret_model
def detect_nunchaku_model_kind(model):
"""Return the supported Nunchaku model kind for a Comfy model, if any."""
try:
model_wrapper = model.model.diffusion_model
except (AttributeError, TypeError):
return None
wrapper_name = model_wrapper.__class__.__name__
if wrapper_name == "ComfyFluxWrapper":
return "flux"
inner_model = getattr(model_wrapper, "model", None)
inner_name = inner_model.__class__.__name__ if inner_model is not None else ""
if wrapper_name.endswith("NunchakuQwenImageTransformer2DModel"):
return "qwen_image"
if inner_name.endswith("NunchakuQwenImageTransformer2DModel"):
return "qwen_image"
return None
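As the FlexibleOptionalInputType docstring above explains, the only thing ComfyUI strictly needs is the __contains__ override returning True, so any dynamically named optional input (lora_1, lora_2, ...) is accepted and lands in **kwargs, while __getitem__ supplies a type tuple for newer frontends. A hypothetical node wired up with the helpers from this module would look roughly like this:

class PowerStackerExample:
    """Illustrative only: accepts any number of dynamically named optional inputs."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {},
            # Every key the frontend invents is "contained" in this mapping,
            # so ComfyUI passes the value through to **kwargs.
            "optional": FlexibleOptionalInputType(any_type),
        }

    RETURN_TYPES = ("STRING",)
    FUNCTION = "collect"

    def collect(self, **kwargs):
        # get_loras_list handles both the old list format and the new {'__value__': [...]} wrapper
        loras = get_loras_list(kwargs)
        return (", ".join(lora.get("name", "") for lora in loras if isinstance(lora, dict)),)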

View File

@@ -1,10 +1,22 @@
import folder_paths # type: ignore
from ..utils.utils import get_lora_info
import os
from ..utils.utils import get_lora_info_absolute
from ..config import config
from .utils import FlexibleOptionalInputType, any_type, get_loras_list
import logging
logger = logging.getLogger(__name__)
def _relpath_within_loras(abs_path):
"""Return abs_path relative to the first matching lora root, or basename as fallback."""
all_roots = list(config.loras_roots or []) + list(config.extra_loras_roots or [])
for root in all_roots:
try:
return os.path.relpath(abs_path, root)
except ValueError:
continue
return os.path.basename(abs_path)
class WanVideoLoraSelectLM:
NAME = "WanVideo Lora Select (LoraManager)"
CATEGORY = "Lora Manager/stackers"
@@ -56,13 +68,13 @@ class WanVideoLoraSelectLM:
clip_strength = float(lora.get('clipStrength', model_strength))
# Get lora path and trigger words
lora_path, trigger_words = get_lora_info(lora_name)
lora_path, trigger_words = get_lora_info_absolute(lora_name)
# Create lora item for WanVideo format
lora_item = {
"path": folder_paths.get_full_path("loras", lora_path),
"path": lora_path,
"strength": model_strength,
"name": lora_path.split(".")[0],
"name": os.path.splitext(_relpath_within_loras(lora_path))[0],
"blocks": selected_blocks,
"layer_filter": layer_filter,
"low_mem_load": low_mem_load,

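A note on _relpath_within_loras above: os.path.relpath raises ValueError on Windows when the path and a candidate root live on different drives, which is why each root is tried in turn before falling back to the basename (on POSIX, relpath never raises for a path outside the root, so the first root always produces a result there). With a hypothetical root, the "name" field ends up as:

# Assuming config.loras_roots == ["D:/models/loras"] (hypothetical)
_relpath_within_loras("D:/models/loras/IL/aorun.safetensors")
# -> "IL\\aorun.safetensors" on Windows ("IL/aorun.safetensors" on POSIX)

_relpath_within_loras("E:/elsewhere/aorun.safetensors")
# -> ValueError against the D: root, so the basename fallback: "aorun.safetensors"

# The node then strips only the extension, so the subfolder survives in "name":
os.path.splitext("IL\\aorun.safetensors")[0]   # -> "IL\\aorun"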
View File

@@ -1,11 +1,23 @@
import folder_paths # type: ignore
from ..utils.utils import get_lora_info
import os
from ..utils.utils import get_lora_info_absolute
from ..config import config
from .utils import any_type
import logging
# Initialize the logger
logger = logging.getLogger(__name__)
def _relpath_within_loras(abs_path):
"""Return abs_path relative to the first matching lora root, or basename as fallback."""
all_roots = list(config.loras_roots or []) + list(config.extra_loras_roots or [])
for root in all_roots:
try:
return os.path.relpath(abs_path, root)
except ValueError:
continue
return os.path.basename(abs_path)
# Define the class for the new node
class WanVideoLoraTextSelectLM:
# The name shown for this node in the UI
@@ -87,12 +99,12 @@ class WanVideoLoraTextSelectLM:
else:
continue
lora_path, trigger_words = get_lora_info(lora_name_raw)
lora_path, trigger_words = get_lora_info_absolute(lora_name_raw)
lora_item = {
"path": folder_paths.get_full_path("loras", lora_path),
"path": lora_path,
"strength": model_strength,
"name": lora_path.split(".")[0],
"name": os.path.splitext(_relpath_within_loras(lora_path))[0],
"blocks": selected_blocks,
"layer_filter": layer_filter,
"low_mem_load": low_mem_load,

View File

@@ -13,4 +13,5 @@ GEN_PARAM_KEYS = [
'seed',
'size',
'clip_skip',
'denoising_strength',
]

View File

@@ -1,11 +1,11 @@
import logging
import json
import re
import os
from typing import Any, Dict, Optional
from .merger import GenParamsMerger
from .base import RecipeMetadataParser
from ..services.metadata_service import get_default_metadata_provider
from ..utils.civitai_utils import extract_civitai_image_id
logger = logging.getLogger(__name__)
@@ -39,11 +39,12 @@ class RecipeEnricher:
source_url = recipe.get("source_url") or recipe.get("source_path", "")
# Check if it's a Civitai image URL
image_id_match = re.search(r'civitai\.com/images/(\d+)', str(source_url))
if image_id_match:
image_id = image_id_match.group(1)
image_id = extract_civitai_image_id(str(source_url))
if image_id:
try:
image_info = await civitai_client.get_image_info(image_id)
image_info = await civitai_client.get_image_info(
image_id, source_url=str(source_url)
)
if image_info:
# Handle nested meta often found in Civitai API responses
raw_meta = image_info.get("meta")

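The inline regex above was replaced by extract_civitai_image_id from civitai_utils; that helper is not shown in this compare, but based on the code it replaces it presumably wraps the same lookup, along these lines (an assumed sketch, not the actual implementation):

import re
from typing import Optional

def extract_civitai_image_id_sketch(source_url: str) -> Optional[str]:
    """Pull the numeric image id out of a civitai.com image URL, if present."""
    match = re.search(r"civitai\.com/images/(\d+)", str(source_url))
    return match.group(1) if match else None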
View File

@@ -6,17 +6,19 @@ from .parsers import (
ComfyMetadataParser,
MetaFormatParser,
AutomaticMetadataParser,
CivitaiApiMetadataParser
CivitaiApiMetadataParser,
SuiImageParamsParser,
)
from .base import RecipeMetadataParser
logger = logging.getLogger(__name__)
class RecipeParserFactory:
"""Factory for creating recipe metadata parsers"""
@staticmethod
def create_parser(metadata) -> RecipeMetadataParser:
def create_parser(metadata) -> RecipeMetadataParser | None:
"""
Create appropriate parser based on the metadata content
@@ -38,6 +40,7 @@ class RecipeParserFactory:
# Convert dict to string for other parsers that expect string input
try:
import json
metadata_str = json.dumps(metadata)
except Exception as e:
logger.debug(f"Failed to convert dict to JSON string: {e}")
@@ -53,6 +56,13 @@ class RecipeParserFactory:
# If JSON parsing fails, move on to other parsers
pass
# Try SuiImageParamsParser for SuiImage metadata format
try:
if SuiImageParamsParser().is_metadata_matching(metadata_str):
return SuiImageParamsParser()
except Exception:
pass
# Check other parsers that expect string input
if RecipeFormatParser().is_metadata_matching(metadata_str):
return RecipeFormatParser()

View File

@@ -1,27 +1,33 @@
from typing import Any, Dict, Optional
import logging
from .constants import GEN_PARAM_KEYS
logger = logging.getLogger(__name__)
class GenParamsMerger:
"""Utility to merge generation parameters from multiple sources with priority."""
ALLOWED_KEYS = set(GEN_PARAM_KEYS)
BLACKLISTED_KEYS = {
"id", "url", "userId", "username", "createdAt", "updatedAt", "hash", "meta",
"draft", "extra", "width", "height", "process", "quantity", "workflow",
"baseModel", "resources", "disablePoi", "aspectRatio", "Created Date",
"experimental", "civitaiResources", "civitai_resources", "Civitai resources",
"modelVersionId", "modelId", "hashes", "Model", "Model hash", "checkpoint_hash",
"checkpoint", "checksum", "model_checksum"
"checkpoint", "checksum", "model_checksum", "raw_metadata",
}
NORMALIZATION_MAPPING = {
# Civitai specific
"cfg": "cfg_scale",
"cfgScale": "cfg_scale",
"clipSkip": "clip_skip",
"negativePrompt": "negative_prompt",
# Case variations
"Sampler": "sampler",
"sampler_name": "sampler",
"scheduler": "sampler",
"Steps": "steps",
"Seed": "seed",
"Size": "size",
@@ -36,63 +42,40 @@ class GenParamsMerger:
def merge(
request_params: Optional[Dict[str, Any]] = None,
civitai_meta: Optional[Dict[str, Any]] = None,
embedded_metadata: Optional[Dict[str, Any]] = None
embedded_metadata: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
"""
Merge generation parameters from three sources.
Priority: request_params > civitai_meta > embedded_metadata
Args:
request_params: Params provided directly in the import request
civitai_meta: Params from Civitai Image API 'meta' field
embedded_metadata: Params extracted from image EXIF/embedded metadata
Returns:
Merged parameters dictionary
"""
result = {}
result: Dict[str, Any] = {}
# 1. Start with embedded metadata (lowest priority)
if embedded_metadata:
# If it's a full recipe metadata, we use its gen_params
if "gen_params" in embedded_metadata and isinstance(embedded_metadata["gen_params"], dict):
if "gen_params" in embedded_metadata and isinstance(
embedded_metadata["gen_params"], dict
):
GenParamsMerger._update_normalized(result, embedded_metadata["gen_params"])
else:
# Otherwise assume the dict itself contains gen_params
GenParamsMerger._update_normalized(result, embedded_metadata)
# 2. Layer Civitai meta (medium priority)
if civitai_meta:
GenParamsMerger._update_normalized(result, civitai_meta)
# 3. Layer request params (highest priority)
if request_params:
GenParamsMerger._update_normalized(result, request_params)
# Filter out blacklisted keys and also the original camelCase keys if they were normalized
final_result = {}
for k, v in result.items():
if k in GenParamsMerger.BLACKLISTED_KEYS:
continue
if k in GenParamsMerger.NORMALIZATION_MAPPING:
continue
final_result[k] = v
return final_result
return result
@staticmethod
def _update_normalized(target: Dict[str, Any], source: Dict[str, Any]) -> None:
"""Update target dict with normalized keys from source."""
for k, v in source.items():
normalized_key = GenParamsMerger.NORMALIZATION_MAPPING.get(k, k)
target[normalized_key] = v
# Also keep the original key for now if it's not the same,
# so we can filter at the end or avoid losing it if it wasn't supposed to be renamed?
# Actually, if we rename it, we should probably NOT keep both in 'target'
# because we want to filter them out at the end anyway.
if normalized_key != k:
# If we are overwriting an existing snake_case key with a camelCase one's value,
# that's fine because of the priority order of calls to _update_normalized.
pass
target[k] = v
"""Update target dict with normalized, persistence-safe keys from source."""
for key, value in source.items():
if key in GenParamsMerger.BLACKLISTED_KEYS:
continue
normalized_key = GenParamsMerger.NORMALIZATION_MAPPING.get(key, key)
if normalized_key not in GenParamsMerger.ALLOWED_KEYS:
continue
target[normalized_key] = value
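With the filtering folded into _update_normalized, merge() can now return the accumulated dict directly instead of running a second blacklist pass. A quick illustration of the priority and normalization (this assumes steps and cfg_scale are listed in GEN_PARAM_KEYS, which the normalization mapping implies but this compare does not show in full):

merged = GenParamsMerger.merge(
    request_params={"steps": 30},                                        # highest priority
    civitai_meta={"cfgScale": 7, "Steps": 25, "modelVersionId": 123},    # medium priority
    embedded_metadata={"gen_params": {"seed": 42, "clip_skip": 2}},      # lowest priority
)
# -> {"seed": 42, "clip_skip": 2, "cfg_scale": 7, "steps": 30}
# "cfgScale"/"Steps" are normalized, "modelVersionId" is blacklisted,
# and request_params wins where sources overlap.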

View File

@@ -5,6 +5,7 @@ from .comfy import ComfyMetadataParser
from .meta_format import MetaFormatParser
from .automatic import AutomaticMetadataParser
from .civitai_image import CivitaiApiMetadataParser
from .sui_image_params import SuiImageParamsParser
__all__ = [
'RecipeFormatParser',
@@ -12,4 +13,5 @@ __all__ = [
'MetaFormatParser',
'AutomaticMetadataParser',
'CivitaiApiMetadataParser',
'SuiImageParamsParser',
]

View File

@@ -9,6 +9,7 @@ from ...services.metadata_service import get_default_metadata_provider
logger = logging.getLogger(__name__)
class CivitaiApiMetadataParser(RecipeMetadataParser):
"""Parser for Civitai image metadata format"""
@@ -40,7 +41,8 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
"width",
"height",
"Model",
"Model hash"
"Model hash",
"modelVersionIds",
)
return any(key in payload for key in civitai_image_fields)
@@ -50,7 +52,9 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
# Check for LoRA hash patterns
hashes = metadata.get("hashes")
if isinstance(hashes, dict) and any(str(key).lower().startswith("lora:") for key in hashes):
if isinstance(hashes, dict) and any(
str(key).lower().startswith("lora:") for key in hashes
):
return True
# Check nested meta object (common in CivitAI image responses)
@@ -61,22 +65,28 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
# Also check for LoRA hash patterns in nested meta
hashes = nested_meta.get("hashes")
if isinstance(hashes, dict) and any(str(key).lower().startswith("lora:") for key in hashes):
if isinstance(hashes, dict) and any(
str(key).lower().startswith("lora:") for key in hashes
):
return True
return False
async def parse_metadata(self, metadata, recipe_scanner=None, civitai_client=None) -> Dict[str, Any]:
async def parse_metadata( # type: ignore[override]
self, user_comment, recipe_scanner=None, civitai_client=None
) -> Dict[str, Any]:
"""Parse metadata from Civitai image format
Args:
metadata: The metadata from the image (dict)
user_comment: The metadata from the image (dict)
recipe_scanner: Optional recipe scanner service
civitai_client: Optional Civitai API client (deprecated, use metadata_provider instead)
Returns:
Dict containing parsed recipe data
"""
metadata: Dict[str, Any] = user_comment # type: ignore[assignment]
metadata = user_comment
try:
# Get metadata provider instead of using civitai_client directly
metadata_provider = await get_default_metadata_provider()
@@ -103,11 +113,11 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
# Initialize result structure
result = {
'base_model': None,
'loras': [],
'model': None,
'gen_params': {},
'from_civitai_image': True
"base_model": None,
"loras": [],
"model": None,
"gen_params": {},
"from_civitai_image": True,
}
# Track already added LoRAs to prevent duplicates
@@ -148,16 +158,25 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
result["base_model"] = metadata["baseModel"]
elif "Model hash" in metadata and metadata_provider:
model_hash = metadata["Model hash"]
model_info, error = await metadata_provider.get_model_by_hash(model_hash)
model_info, error = await metadata_provider.get_model_by_hash(
model_hash
)
if model_info:
result["base_model"] = model_info.get("baseModel", "")
elif "Model" in metadata and isinstance(metadata.get("resources"), list):
# Try to find base model in resources
for resource in metadata.get("resources", []):
if resource.get("type") == "model" and resource.get("name") == metadata.get("Model"):
if resource.get("type") == "model" and resource.get(
"name"
) == metadata.get("Model"):
# This is likely the checkpoint model
if metadata_provider and resource.get("hash"):
model_info, error = await metadata_provider.get_model_by_hash(resource.get("hash"))
(
model_info,
error,
) = await metadata_provider.get_model_by_hash(
resource.get("hash")
)
if model_info:
result["base_model"] = model_info.get("baseModel", "")
@@ -176,7 +195,9 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
# Skip LoRAs without proper identification (hash or modelVersionId)
if not lora_hash and not resource.get("modelVersionId"):
logger.debug(f"Skipping LoRA resource '{resource.get('name', 'Unknown')}' - no hash or modelVersionId")
logger.debug(
f"Skipping LoRA resource '{resource.get('name', 'Unknown')}' - no hash or modelVersionId"
)
continue
# Skip if we've already added this LoRA by hash
@@ -184,31 +205,33 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
continue
lora_entry = {
'name': resource.get("name", "Unknown LoRA"),
'type': "lora",
'weight': float(resource.get("weight", 1.0)),
'hash': lora_hash,
'existsLocally': False,
'localPath': None,
'file_name': resource.get("name", "Unknown"),
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
"name": resource.get("name", "Unknown LoRA"),
"type": "lora",
"weight": float(resource.get("weight", 1.0)),
"hash": lora_hash,
"existsLocally": False,
"localPath": None,
"file_name": resource.get("name", "Unknown"),
"thumbnailUrl": "/loras_static/images/no-preview.png",
"baseModel": "",
"size": 0,
"downloadUrl": "",
"isDeleted": False,
}
# Try to get info from Civitai if hash is available
if lora_entry['hash'] and metadata_provider:
if lora_entry["hash"] and metadata_provider:
try:
civitai_info = await metadata_provider.get_model_by_hash(lora_hash)
civitai_info = (
await metadata_provider.get_model_by_hash(lora_hash)
)
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
base_model_counts,
lora_hash
lora_hash,
)
if populated_entry is None:
@@ -217,10 +240,14 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
lora_entry = populated_entry
# If we have a version ID from Civitai, track it for deduplication
if 'id' in lora_entry and lora_entry['id']:
added_loras[str(lora_entry['id'])] = len(result["loras"])
if "id" in lora_entry and lora_entry["id"]:
added_loras[str(lora_entry["id"])] = len(
result["loras"]
)
except Exception as e:
logger.error(f"Error fetching Civitai info for LoRA hash {lora_entry['hash']}: {e}")
logger.error(
f"Error fetching Civitai info for LoRA hash {lora_entry['hash']}: {e}"
)
# Track by hash if we have it
if lora_hash:
@@ -229,7 +256,9 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
result["loras"].append(lora_entry)
# Process civitaiResources array
if "civitaiResources" in metadata and isinstance(metadata["civitaiResources"], list):
if "civitaiResources" in metadata and isinstance(
metadata["civitaiResources"], list
):
for resource in metadata["civitaiResources"]:
# Get resource type and identifier
resource_type = str(resource.get("type") or "").lower()
@@ -237,32 +266,39 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
if resource_type == "checkpoint":
checkpoint_entry = {
'id': resource.get("modelVersionId", 0),
'modelId': resource.get("modelId", 0),
'name': resource.get("modelName", "Unknown Checkpoint"),
'version': resource.get("modelVersionName", ""),
'type': resource.get("type", "checkpoint"),
'existsLocally': False,
'localPath': None,
'file_name': resource.get("modelName", ""),
'hash': resource.get("hash", "") or "",
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
"id": resource.get("modelVersionId", 0),
"modelId": resource.get("modelId", 0),
"name": resource.get("modelName", "Unknown Checkpoint"),
"version": resource.get("modelVersionName", ""),
"type": resource.get("type", "checkpoint"),
"existsLocally": False,
"localPath": None,
"file_name": resource.get("modelName", ""),
"hash": resource.get("hash", "") or "",
"thumbnailUrl": "/loras_static/images/no-preview.png",
"baseModel": "",
"size": 0,
"downloadUrl": "",
"isDeleted": False,
}
if version_id and metadata_provider:
try:
civitai_info = await metadata_provider.get_model_version_info(version_id)
civitai_info = (
await metadata_provider.get_model_version_info(
version_id
)
)
checkpoint_entry = await self.populate_checkpoint_from_civitai(
checkpoint_entry,
civitai_info
checkpoint_entry = (
await self.populate_checkpoint_from_civitai(
checkpoint_entry, civitai_info
)
)
except Exception as e:
logger.error(f"Error fetching Civitai info for checkpoint version {version_id}: {e}")
logger.error(
f"Error fetching Civitai info for checkpoint version {version_id}: {e}"
)
if result["model"] is None:
result["model"] = checkpoint_entry
@@ -275,31 +311,35 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
# Initialize lora entry
lora_entry = {
'id': resource.get("modelVersionId", 0),
'modelId': resource.get("modelId", 0),
'name': resource.get("modelName", "Unknown LoRA"),
'version': resource.get("modelVersionName", ""),
'type': resource.get("type", "lora"),
'weight': round(float(resource.get("weight", 1.0)), 2),
'existsLocally': False,
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
"id": resource.get("modelVersionId", 0),
"modelId": resource.get("modelId", 0),
"name": resource.get("modelName", "Unknown LoRA"),
"version": resource.get("modelVersionName", ""),
"type": resource.get("type", "lora"),
"weight": round(float(resource.get("weight", 1.0)), 2),
"existsLocally": False,
"thumbnailUrl": "/loras_static/images/no-preview.png",
"baseModel": "",
"size": 0,
"downloadUrl": "",
"isDeleted": False,
}
# Try to get info from Civitai if modelVersionId is available
if version_id and metadata_provider:
try:
# Use get_model_version_info instead of get_model_version
civitai_info = await metadata_provider.get_model_version_info(version_id)
civitai_info = (
await metadata_provider.get_model_version_info(
version_id
)
)
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
base_model_counts
base_model_counts,
)
if populated_entry is None:
@@ -307,7 +347,9 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
lora_entry = populated_entry
except Exception as e:
logger.error(f"Error fetching Civitai info for model version {version_id}: {e}")
logger.error(
f"Error fetching Civitai info for model version {version_id}: {e}"
)
# Track this LoRA in our deduplication dict
if version_id:
@@ -316,10 +358,15 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
result["loras"].append(lora_entry)
# Process additionalResources array
if "additionalResources" in metadata and isinstance(metadata["additionalResources"], list):
if "additionalResources" in metadata and isinstance(
metadata["additionalResources"], list
):
for resource in metadata["additionalResources"]:
# Skip resources that aren't LoRAs or LyCORIS
if resource.get("type") not in ["lora", "lycoris"] and "type" not in resource:
if (
resource.get("type") not in ["lora", "lycoris"]
and "type" not in resource
):
continue
lora_type = resource.get("type", "lora")
@@ -337,31 +384,35 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
continue
lora_entry = {
'name': name,
'type': lora_type,
'weight': float(resource.get("strength", 1.0)),
'hash': "",
'existsLocally': False,
'localPath': None,
'file_name': name,
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
"name": name,
"type": lora_type,
"weight": float(resource.get("strength", 1.0)),
"hash": "",
"existsLocally": False,
"localPath": None,
"file_name": name,
"thumbnailUrl": "/loras_static/images/no-preview.png",
"baseModel": "",
"size": 0,
"downloadUrl": "",
"isDeleted": False,
}
# If we have a version ID and metadata provider, try to get more info
if version_id and metadata_provider:
try:
# Use get_model_version_info with the version ID
civitai_info = await metadata_provider.get_model_version_info(version_id)
civitai_info = (
await metadata_provider.get_model_version_info(
version_id
)
)
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
base_model_counts
base_model_counts,
)
if populated_entry is None:
@@ -373,10 +424,71 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
if version_id:
added_loras[version_id] = len(result["loras"])
except Exception as e:
logger.error(f"Error fetching Civitai info for model ID {version_id}: {e}")
logger.error(
f"Error fetching Civitai info for model ID {version_id}: {e}"
)
result["loras"].append(lora_entry)
# Process modelVersionIds from Civitai image API
# These are model version IDs returned at root level when meta doesn't contain resources
if "modelVersionIds" in metadata and isinstance(
metadata["modelVersionIds"], list
):
for version_id in metadata["modelVersionIds"]:
version_id_str = str(version_id)
# Skip if we've already added this LoRA by version ID
if version_id_str in added_loras:
continue
# Initialize lora entry with version ID
lora_entry = {
"id": version_id,
"modelId": 0,
"name": "Unknown LoRA",
"version": "",
"type": "lora",
"weight": 1.0,
"existsLocally": False,
"thumbnailUrl": "/loras_static/images/no-preview.png",
"baseModel": "",
"size": 0,
"downloadUrl": "",
"isDeleted": False,
}
# Fetch model info from Civitai
if metadata_provider and version_id_str:
try:
civitai_info = (
await metadata_provider.get_model_version_info(
version_id_str
)
)
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
base_model_counts,
)
if populated_entry is None:
continue # Skip invalid LoRA types
lora_entry = populated_entry
except Exception as e:
logger.error(
f"Error fetching Civitai info for model version {version_id}: {e}"
)
# Track this LoRA for deduplication
if version_id_str:
added_loras[version_id_str] = len(result["loras"])
result["loras"].append(lora_entry)
# If we found LoRA hashes in the metadata but haven't already
# populated entries for them, fall back to creating LoRAs from
# the hashes section. Some Civitai image responses only include
@@ -390,30 +502,32 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
continue
lora_entry = {
'name': lora_name,
'type': "lora",
'weight': 1.0,
'hash': lora_hash,
'existsLocally': False,
'localPath': None,
'file_name': lora_name,
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
"name": lora_name,
"type": "lora",
"weight": 1.0,
"hash": lora_hash,
"existsLocally": False,
"localPath": None,
"file_name": lora_name,
"thumbnailUrl": "/loras_static/images/no-preview.png",
"baseModel": "",
"size": 0,
"downloadUrl": "",
"isDeleted": False,
}
if metadata_provider:
try:
civitai_info = await metadata_provider.get_model_by_hash(lora_hash)
civitai_info = await metadata_provider.get_model_by_hash(
lora_hash
)
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
base_model_counts,
lora_hash
lora_hash,
)
if populated_entry is None:
@@ -421,20 +535,27 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
lora_entry = populated_entry
if 'id' in lora_entry and lora_entry['id']:
added_loras[str(lora_entry['id'])] = len(result["loras"])
if "id" in lora_entry and lora_entry["id"]:
added_loras[str(lora_entry["id"])] = len(result["loras"])
except Exception as e:
logger.error(f"Error fetching Civitai info for LoRA hash {lora_hash}: {e}")
logger.error(
f"Error fetching Civitai info for LoRA hash {lora_hash}: {e}"
)
added_loras[lora_hash] = len(result["loras"])
result["loras"].append(lora_entry)
# Check for LoRA info in the format "Lora_0 Model hash", "Lora_0 Model name", etc.
lora_index = 0
while f"Lora_{lora_index} Model hash" in metadata and f"Lora_{lora_index} Model name" in metadata:
while (
f"Lora_{lora_index} Model hash" in metadata
and f"Lora_{lora_index} Model name" in metadata
):
lora_hash = metadata[f"Lora_{lora_index} Model hash"]
lora_name = metadata[f"Lora_{lora_index} Model name"]
lora_strength_model = float(metadata.get(f"Lora_{lora_index} Strength model", 1.0))
lora_strength_model = float(
metadata.get(f"Lora_{lora_index} Strength model", 1.0)
)
# Skip if we've already added this LoRA by hash
if lora_hash and lora_hash in added_loras:
@@ -442,31 +563,33 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
continue
lora_entry = {
'name': lora_name,
'type': "lora",
'weight': lora_strength_model,
'hash': lora_hash,
'existsLocally': False,
'localPath': None,
'file_name': lora_name,
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
"name": lora_name,
"type": "lora",
"weight": lora_strength_model,
"hash": lora_hash,
"existsLocally": False,
"localPath": None,
"file_name": lora_name,
"thumbnailUrl": "/loras_static/images/no-preview.png",
"baseModel": "",
"size": 0,
"downloadUrl": "",
"isDeleted": False,
}
# Try to get info from Civitai if hash is available
if lora_entry['hash'] and metadata_provider:
if lora_entry["hash"] and metadata_provider:
try:
civitai_info = await metadata_provider.get_model_by_hash(lora_hash)
civitai_info = await metadata_provider.get_model_by_hash(
lora_hash
)
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
base_model_counts,
lora_hash
lora_hash,
)
if populated_entry is None:
@@ -476,10 +599,12 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
lora_entry = populated_entry
# If we have a version ID from Civitai, track it for deduplication
if 'id' in lora_entry and lora_entry['id']:
added_loras[str(lora_entry['id'])] = len(result["loras"])
if "id" in lora_entry and lora_entry["id"]:
added_loras[str(lora_entry["id"])] = len(result["loras"])
except Exception as e:
logger.error(f"Error fetching Civitai info for LoRA hash {lora_entry['hash']}: {e}")
logger.error(
f"Error fetching Civitai info for LoRA hash {lora_entry['hash']}: {e}"
)
# Track by hash if we have it
if lora_hash:
@@ -491,7 +616,9 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
# If base model wasn't found earlier, use the most common one from LoRAs
if not result["base_model"] and base_model_counts:
result["base_model"] = max(base_model_counts.items(), key=lambda x: x[1])[0]
result["base_model"] = max(
base_model_counts.items(), key=lambda x: x[1]
)[0]
return result

View File

@@ -0,0 +1,188 @@
"""Parser for SuiImage (Stable Diffusion WebUI) metadata format."""
import json
import logging
from typing import Dict, Any, Optional, List
from ..base import RecipeMetadataParser
from ...services.metadata_service import get_default_metadata_provider
logger = logging.getLogger(__name__)
class SuiImageParamsParser(RecipeMetadataParser):
"""Parser for SuiImage metadata JSON format.
This format is used by some Stable Diffusion WebUI variants.
Structure:
{
"sui_image_params": {
"prompt": "...",
"negativeprompt": "...",
"model": "...",
"seed": ...,
"steps": ...,
...
},
"sui_models": [
{"name": "...", "param": "model", "hash": "..."},
...
],
"sui_extra_data": {...}
}
"""
def is_metadata_matching(self, user_comment: str) -> bool:
"""Check if the user comment matches the SuiImage metadata format"""
try:
data = json.loads(user_comment)
return isinstance(data, dict) and 'sui_image_params' in data
except (json.JSONDecodeError, TypeError):
return False
async def parse_metadata(self, user_comment: str, recipe_scanner=None, civitai_client=None) -> Dict[str, Any]:
"""Parse metadata from SuiImage metadata format"""
try:
metadata_provider = await get_default_metadata_provider()
data = json.loads(user_comment)
params = data.get('sui_image_params', {})
models = data.get('sui_models', [])
# Extract prompt and negative prompt
prompt = params.get('prompt', '')
negative_prompt = params.get('negativeprompt', '') or params.get('negative_prompt', '')
# Extract generation parameters
gen_params = {}
if prompt:
gen_params['prompt'] = prompt
if negative_prompt:
gen_params['negative_prompt'] = negative_prompt
# Map standard parameters
param_mapping = {
'steps': 'steps',
'seed': 'seed',
'cfgscale': 'cfg_scale',
'cfg_scale': 'cfg_scale',
'width': 'width',
'height': 'height',
'sampler': 'sampler',
'scheduler': 'scheduler',
'model': 'model',
'vae': 'vae',
}
for src_key, dest_key in param_mapping.items():
if src_key in params and params[src_key] is not None:
gen_params[dest_key] = params[src_key]
# Add size info if available
if 'width' in gen_params and 'height' in gen_params:
gen_params['size'] = f"{gen_params['width']}x{gen_params['height']}"
# Process models - extract checkpoint and loras
loras: List[Dict[str, Any]] = []
checkpoint: Optional[Dict[str, Any]] = None
for model in models:
model_name = model.get('name', '')
param_type = model.get('param', '')
model_hash = model.get('hash', '')
# Remove .safetensors extension for cleaner name
clean_name = model_name.replace('.safetensors', '') if model_name else ''
# Check if this is a LoRA by looking at the name or param type
is_lora = 'lora' in model_name.lower() or param_type.lower().startswith('lora')
if is_lora:
lora_entry = {
'id': 0,
'modelId': 0,
'name': clean_name,
'version': '',
'type': 'lora',
'weight': 1.0,
'existsLocally': False,
'localPath': None,
'file_name': model_name,
'hash': model_hash.replace('0x', '') if model_hash.startswith('0x') else model_hash,
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
}
# Try to get additional info from metadata provider
if metadata_provider and model_hash:
try:
civitai_info = await metadata_provider.get_model_by_hash(
model_hash.replace('0x', '') if model_hash.startswith('0x') else model_hash
)
if civitai_info:
lora_entry = await self.populate_lora_from_civitai(
lora_entry, civitai_info, recipe_scanner
)
except Exception as e:
logger.debug(f"Error fetching info for LoRA {clean_name}: {e}")
if lora_entry:
loras.append(lora_entry)
elif param_type == 'model' or 'lora' not in model_name.lower():
# This is likely a checkpoint
checkpoint_entry = {
'id': 0,
'modelId': 0,
'name': clean_name,
'version': '',
'type': 'checkpoint',
'hash': model_hash.replace('0x', '') if model_hash.startswith('0x') else model_hash,
'existsLocally': False,
'localPath': None,
'file_name': model_name,
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
}
# Try to get additional info from metadata provider
if metadata_provider and model_hash:
try:
civitai_info = await metadata_provider.get_model_by_hash(
model_hash.replace('0x', '') if model_hash.startswith('0x') else model_hash
)
if civitai_info:
checkpoint_entry = await self.populate_checkpoint_from_civitai(
checkpoint_entry, civitai_info
)
except Exception as e:
logger.debug(f"Error fetching info for checkpoint {clean_name}: {e}")
checkpoint = checkpoint_entry
# Determine base model from loras or checkpoint
base_model = None
if loras:
base_models = [lora.get('baseModel') for lora in loras if lora.get('baseModel')]
if base_models:
from collections import Counter
base_model_counts = Counter(base_models)
base_model = base_model_counts.most_common(1)[0][0]
elif checkpoint and checkpoint.get('baseModel'):
base_model = checkpoint['baseModel']
return {
'base_model': base_model,
'loras': loras,
'checkpoint': checkpoint,
'gen_params': gen_params,
'from_sui_image_params': True
}
except Exception as e:
logger.error(f"Error parsing SuiImage metadata: {e}", exc_info=True)
return {"error": str(e), "loras": []}
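For a rough sense of the output, feeding the docstring's structure through parse_metadata (with network lookups unavailable, so nothing gets enriched) would go roughly like this:

user_comment = json.dumps({
    "sui_image_params": {"prompt": "a cat", "steps": 20, "cfgscale": 7, "seed": 1,
                         "width": 832, "height": 1216, "model": "someModel"},
    "sui_models": [
        {"name": "someModel.safetensors", "param": "model", "hash": "0xabc123"},
        {"name": "myLora.safetensors", "param": "loras", "hash": "0xdef456"},
    ],
})
# await SuiImageParamsParser().parse_metadata(user_comment) then yields gen_params with
# prompt, steps, cfg_scale, seed, width/height plus size "832x1216"; "someModel" becomes
# the checkpoint entry and "myLora" a lora entry (the "0x" hash prefixes are stripped),
# both with existsLocally False until the metadata provider / recipe scanner fills them in.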

View File

@@ -251,7 +251,7 @@ class BaseModelRoutes(ABC):
def _find_model_file(self, files):
"""Find the appropriate model file from the files list - can be overridden by subclasses."""
return next((file for file in files if file.get("type") == "Model" and file.get("primary") is True), None)
return next((file for file in files if file.get("type") in ("Model", "Diffusion Model") and file.get("primary") is True), None)
def get_handler(self, name: str) -> Callable[[web.Request], web.StreamResponse]:
"""Expose handlers for subclasses or tests."""

View File

@@ -1,4 +1,5 @@
"""Base infrastructure shared across recipe routes."""
from __future__ import annotations
import logging
@@ -16,12 +17,14 @@ from ..services.recipes import (
RecipePersistenceService,
RecipeSharingService,
)
from ..services.batch_import_service import BatchImportService
from ..services.server_i18n import server_i18n
from ..services.service_registry import ServiceRegistry
from ..services.settings_manager import get_settings_manager
from ..utils.constants import CARD_PREVIEW_WIDTH
from ..utils.exif_utils import ExifUtils
from .handlers.recipe_handlers import (
BatchImportHandler,
RecipeAnalysisHandler,
RecipeHandlerSet,
RecipeListingHandler,
@@ -116,7 +119,10 @@ class BaseRecipeRoutes:
recipe_scanner_getter = lambda: self.recipe_scanner
civitai_client_getter = lambda: self.civitai_client
standalone_mode = os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1" or os.environ.get("HF_HUB_DISABLE_TELEMETRY", "0") == "0"
standalone_mode = (
os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1"
or os.environ.get("HF_HUB_DISABLE_TELEMETRY", "0") == "0"
)
if not standalone_mode:
from ..metadata_collector import get_metadata # type: ignore[import-not-found]
from ..metadata_collector.metadata_processor import ( # type: ignore[import-not-found]
@@ -190,6 +196,22 @@ class BaseRecipeRoutes:
sharing_service=sharing_service,
)
from ..services.websocket_manager import ws_manager
batch_import_service = BatchImportService(
analysis_service=analysis_service,
persistence_service=persistence_service,
ws_manager=ws_manager,
logger=logger,
)
batch_import = BatchImportHandler(
ensure_dependencies_ready=self.ensure_dependencies_ready,
recipe_scanner_getter=recipe_scanner_getter,
civitai_client_getter=civitai_client_getter,
logger=logger,
batch_import_service=batch_import_service,
)
return RecipeHandlerSet(
page_view=page_view,
listing=listing,
@@ -197,4 +219,5 @@ class BaseRecipeRoutes:
management=management,
analysis=analysis,
sharing=sharing,
batch_import=batch_import,
)

View File

@@ -1,5 +1,5 @@
import logging
from typing import Dict
from typing import Dict, List, Set
from aiohttp import web
from .base_model_routes import BaseModelRoutes
@@ -82,12 +82,22 @@ class CheckpointRoutes(BaseModelRoutes):
return web.json_response({"error": str(e)}, status=500)
async def get_checkpoints_roots(self, request: web.Request) -> web.Response:
"""Return the list of checkpoint roots from config"""
"""Return the list of checkpoint roots from config (including extra paths)"""
try:
roots = config.checkpoints_roots
# Merge checkpoints_roots with extra_checkpoints_roots, preserving order and removing duplicates
roots: List[str] = []
roots.extend(config.checkpoints_roots or [])
roots.extend(config.extra_checkpoints_roots or [])
# Remove duplicates while preserving order
seen: set = set()
unique_roots: List[str] = []
for root in roots:
if root and root not in seen:
seen.add(root)
unique_roots.append(root)
return web.json_response({
"success": True,
"roots": roots
"roots": unique_roots
})
except Exception as e:
logger.error(f"Error getting checkpoint roots: {e}", exc_info=True)
@@ -97,12 +107,22 @@ class CheckpointRoutes(BaseModelRoutes):
}, status=500)
async def get_unet_roots(self, request: web.Request) -> web.Response:
"""Return the list of unet roots from config"""
"""Return the list of unet roots from config (including extra paths)"""
try:
roots = config.unet_roots
# Merge unet_roots with extra_unet_roots, preserving order and removing duplicates
roots: List[str] = []
roots.extend(config.unet_roots or [])
roots.extend(config.extra_unet_roots or [])
# Remove duplicates while preserving order
seen: set = set()
unique_roots: List[str] = []
for root in roots:
if root and root not in seen:
seen.add(root)
unique_roots.append(root)
return web.json_response({
"success": True,
"roots": roots
"roots": unique_roots
})
except Exception as e:
logger.error(f"Error getting unet roots: {e}", exc_info=True)

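The order-preserving de-duplication added to both root endpoints above works as written; the same effect is available in one line with dict.fromkeys, since dicts keep insertion order on Python 3.7+:

roots = [r for r in (config.checkpoints_roots or []) + (config.extra_checkpoints_roots or []) if r]
unique_roots = list(dict.fromkeys(roots))  # keeps the first occurrence, drops later duplicates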
View File

@@ -0,0 +1,141 @@
"""Handlers for base model related endpoints."""
from __future__ import annotations
import logging
from typing import Any, Awaitable, Callable, Dict
from aiohttp import web
from ...services.civitai_base_model_service import get_civitai_base_model_service
logger = logging.getLogger(__name__)
class BaseModelHandlerSet:
"""Collection of handlers for base model operations."""
def __init__(
self,
base_model_service_factory: Callable[[], Any] = get_civitai_base_model_service,
) -> None:
self._base_model_service_factory = base_model_service_factory
def to_route_mapping(
self,
) -> Dict[str, Callable[[web.Request], Awaitable[web.StreamResponse]]]:
"""Return mapping of route names to handler methods."""
return {
"get_base_models": self.get_base_models,
"refresh_base_models": self.refresh_base_models,
"get_base_model_categories": self.get_base_model_categories,
"get_base_model_cache_status": self.get_base_model_cache_status,
}
async def get_base_models(self, request: web.Request) -> web.Response:
"""Get merged base models (hardcoded + remote from Civitai).
Query Parameters:
refresh: If 'true', force refresh from API
Returns:
JSON response with:
- models: List of base model names
- source: 'cache', 'api', or 'fallback'
- last_updated: ISO timestamp
- counts: hardcoded_count, remote_count, merged_count
"""
try:
service = await self._base_model_service_factory()
# Check for refresh parameter
force_refresh = request.query.get("refresh", "").lower() == "true"
result = await service.get_base_models(force_refresh=force_refresh)
return web.json_response(
{
"success": True,
"data": result,
}
)
except Exception as e:
logger.error(f"Error in get_base_models: {e}")
return web.json_response(
{"success": False, "error": str(e)},
status=500,
)
async def refresh_base_models(self, request: web.Request) -> web.Response:
"""Force refresh base models from Civitai API.
Returns:
JSON response with refreshed data
"""
try:
service = await self._base_model_service_factory()
result = await service.refresh_cache()
return web.json_response(
{
"success": True,
"data": result,
"message": "Base models cache refreshed successfully",
}
)
except Exception as e:
logger.error(f"Error in refresh_base_models: {e}")
return web.json_response(
{"success": False, "error": str(e)},
status=500,
)
async def get_base_model_categories(self, request: web.Request) -> web.Response:
"""Get categorized base models.
Returns:
JSON response with categorized models
"""
try:
service = await self._base_model_service_factory()
categories = service.get_model_categories()
return web.json_response(
{
"success": True,
"data": categories,
}
)
except Exception as e:
logger.error(f"Error in get_base_model_categories: {e}")
return web.json_response(
{"success": False, "error": str(e)},
status=500,
)
async def get_base_model_cache_status(self, request: web.Request) -> web.Response:
"""Get cache status for base models.
Returns:
JSON response with cache status
"""
try:
service = await self._base_model_service_factory()
status = service.get_cache_status()
return web.json_response(
{
"success": True,
"data": status,
}
)
except Exception as e:
logger.error(f"Error in get_base_model_cache_status: {e}")
return web.json_response(
{"success": False, "error": str(e)},
status=500,
)
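All four handlers above wrap their payloads in the same {"success": ..., "data": ...} envelope. The concrete route paths are registered elsewhere and are not part of this compare, so the URL in the sketch below is only a placeholder for showing the expected response shape:

import asyncio
import aiohttp

async def fetch_base_models():
    async with aiohttp.ClientSession() as session:
        # Placeholder path; the real mapping for "get_base_models" lives outside this diff
        async with session.get("http://127.0.0.1:8188/api/lm/base-models", params={"refresh": "false"}) as resp:
            payload = await resp.json()
    if payload.get("success"):
        data = payload["data"]
        return data.get("models", []), data.get("source"), data.get("last_updated")
    raise RuntimeError(payload.get("error", "unknown error"))

# asyncio.run(fetch_base_models())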

File diff suppressed because it is too large

View File

@@ -16,9 +16,14 @@ import jinja2
from ...config import config
from ...services.download_coordinator import DownloadCoordinator
from ...services.connectivity_guard import (
OFFLINE_FRIENDLY_MESSAGE,
is_expected_offline_error,
)
from ...services.metadata_sync_service import MetadataSyncService
from ...services.model_file_service import ModelMoveService
from ...services.preview_asset_service import PreviewAssetService
from ...services.service_registry import ServiceRegistry
from ...services.settings_manager import SettingsManager, get_settings_manager
from ...services.tag_update_service import TagUpdateService
from ...services.use_cases import (
@@ -64,7 +69,23 @@ class ModelPageView:
self._settings = settings_service
self._server_i18n = server_i18n
self._logger = logger
self._app_version = self._get_app_version()
def _load_supporters(self) -> dict:
"""Load supporters data from JSON file."""
try:
current_file = os.path.abspath(__file__)
root_dir = os.path.dirname(
os.path.dirname(os.path.dirname(os.path.dirname(current_file)))
)
supporters_path = os.path.join(root_dir, "data", "supporters.json")
if os.path.exists(supporters_path):
with open(supporters_path, "r", encoding="utf-8") as f:
return json.load(f)
except Exception as e:
self._logger.debug(f"Failed to load supporters data: {e}")
return {"specialThanks": [], "allSupporters": [], "totalCount": 0}
def _get_app_version(self) -> str:
version = "1.0.0"
@@ -138,7 +159,7 @@ class ModelPageView:
"request": request,
"folders": [],
"t": self._server_i18n.get_translation,
"version": self._app_version,
"version": self._get_app_version(),
}
if not is_initializing:
@@ -207,6 +228,42 @@ class ModelListingHandler:
)
return web.json_response({"error": str(exc)}, status=500)
async def get_excluded_models(self, request: web.Request) -> web.Response:
start_time = time.perf_counter()
try:
params = self._parse_common_params(request)
result = await self._service.get_excluded_paginated_data(**params)
format_start = time.perf_counter()
formatted_result = {
"items": [
await self._service.format_response(item)
for item in result["items"]
],
"total": result["total"],
"page": result["page"],
"page_size": result["page_size"],
"total_pages": result["total_pages"],
}
format_duration = time.perf_counter() - format_start
duration = time.perf_counter() - start_time
self._logger.debug(
"Request for %s/excluded took %.3fs (formatting: %.3fs)",
self._service.model_type,
duration,
format_duration,
)
return web.json_response(formatted_result)
except Exception as exc:
self._logger.error(
"Error retrieving excluded %ss: %s",
self._service.model_type,
exc,
exc_info=True,
)
return web.json_response({"error": str(exc)}, status=500)
def _parse_common_params(self, request: web.Request) -> Dict:
page = int(request.query.get("page", "1"))
page_size = min(int(request.query.get("page_size", "20")), 100)
@@ -292,6 +349,13 @@ class ModelListingHandler:
else:
allow_selling_generated_content = None # None means no filter applied
# Name pattern filters for LoRA Pool
name_pattern_include = request.query.getall("name_pattern_include", [])
name_pattern_exclude = request.query.getall("name_pattern_exclude", [])
name_pattern_use_regex = (
request.query.get("name_pattern_use_regex", "false").lower() == "true"
)
return {
"page": page,
"page_size": page_size,
@@ -311,6 +375,9 @@ class ModelListingHandler:
"credit_required": credit_required,
"allow_selling_generated_content": allow_selling_generated_content,
"model_types": model_types,
"name_pattern_include": name_pattern_include,
"name_pattern_exclude": name_pattern_exclude,
"name_pattern_use_regex": name_pattern_use_regex,
**self._parse_specific_params(request),
}
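As an illustration of the repeated-key convention consumed by request.query.getall above, a sketch of the query a client might send for the new name-pattern filters; the prefix value and the patterns are assumptions, not part of the diff.
from urllib.parse import urlencode

# Each include/exclude pattern is sent as a repeated query key so that
# request.query.getall("name_pattern_include", []) collects all of them.
query = urlencode(
    [
        ("page", "1"),
        ("page_size", "20"),
        ("name_pattern_include", "style-"),      # illustrative pattern
        ("name_pattern_include", "character-"),  # illustrative pattern
        ("name_pattern_exclude", "backup"),      # illustrative pattern
        ("name_pattern_use_regex", "false"),
    ]
)
url = f"/api/lm/loras/list?{query}"  # '{prefix}' assumed to be 'loras' here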
@@ -365,6 +432,21 @@ class ModelManagementHandler:
self._logger.error("Error excluding model: %s", exc, exc_info=True)
return web.Response(text=str(exc), status=500)
async def unexclude_model(self, request: web.Request) -> web.Response:
try:
data = await request.json()
file_path = data.get("file_path")
if not file_path:
return web.Response(text="Model path is required", status=400)
result = await self._lifecycle_service.unexclude_model(file_path)
return web.json_response(result)
except ValueError as exc:
return web.json_response({"success": False, "error": str(exc)}, status=400)
except Exception as exc:
self._logger.error("Error restoring model: %s", exc, exc_info=True)
return web.Response(text=str(exc), status=500)
async def fetch_civitai(self, request: web.Request) -> web.Response:
try:
data = await request.json()
@@ -391,12 +473,18 @@ class ModelManagementHandler:
if not sha256 or hash_status != "completed":
# For checkpoints, calculate hash on-demand
scanner = self._service.scanner
if hasattr(scanner, 'calculate_hash_for_model'):
self._logger.info(f"Lazy hash calculation triggered for {file_path}")
if hasattr(scanner, "calculate_hash_for_model"):
self._logger.info(
f"Lazy hash calculation triggered for {file_path}"
)
sha256 = await scanner.calculate_hash_for_model(file_path)
if not sha256:
return web.json_response(
{"success": False, "error": "Failed to calculate SHA256 hash"}, status=500
{
"success": False,
"error": "Failed to calculate SHA256 hash",
},
status=500,
)
# Update model_data with new hash
model_data["sha256"] = sha256
@@ -420,6 +508,11 @@ class ModelManagementHandler:
formatted_metadata = await self._service.format_response(model_data)
return web.json_response({"success": True, "metadata": formatted_metadata})
except Exception as exc:
if is_expected_offline_error(str(exc)):
return web.json_response(
{"success": False, "error": OFFLINE_FRIENDLY_MESSAGE},
status=503,
)
self._logger.error("Error fetching from CivitAI: %s", exc, exc_info=True)
return web.json_response({"success": False, "error": str(exc)}, status=500)
@@ -466,6 +559,11 @@ class ModelManagementHandler:
}
)
except Exception as exc:
if is_expected_offline_error(str(exc)):
return web.json_response(
{"success": False, "error": OFFLINE_FRIENDLY_MESSAGE},
status=503,
)
self._logger.error("Error re-linking to CivitAI: %s", exc, exc_info=True)
return web.json_response({"success": False, "error": str(exc)}, status=500)
@@ -524,6 +622,153 @@ class ModelManagementHandler:
self._logger.error("Error replacing preview: %s", exc, exc_info=True)
return web.Response(text=str(exc), status=500)
async def set_preview_from_url(self, request: web.Request) -> web.Response:
"""Set a preview image from a remote URL (e.g., CivitAI)."""
try:
from ...utils.civitai_utils import rewrite_preview_url
from ...services.downloader import get_downloader
data = await request.json()
model_path = data.get("model_path")
image_url = data.get("image_url")
nsfw_level = data.get("nsfw_level", 0)
if not model_path:
return web.json_response(
{"success": False, "error": "Model path is required"}, status=400
)
if not image_url:
return web.json_response(
{"success": False, "error": "Image URL is required"}, status=400
)
# Rewrite URL to use optimized rendition if it's a Civitai URL
optimized_url, was_rewritten = rewrite_preview_url(
image_url, media_type="image"
)
if was_rewritten and optimized_url:
self._logger.info(
f"Rewritten preview URL to optimized version: {optimized_url}"
)
else:
optimized_url = image_url
# Download the image using the Downloader service
self._logger.info(
f"Downloading preview from {optimized_url} for {model_path}"
)
downloader = await get_downloader()
success, preview_data, headers = await downloader.download_to_memory(
optimized_url, use_auth=False, return_headers=True
)
if not success:
return web.json_response(
{
"success": False,
"error": f"Failed to download image: {preview_data}",
},
status=502,
)
# preview_data is bytes when success is True
preview_bytes = (
preview_data
if isinstance(preview_data, bytes)
else preview_data.encode("utf-8")
)
# Determine content type from response headers
content_type = (
headers.get("Content-Type", "image/jpeg") if headers else "image/jpeg"
)
# Extract original filename from URL
original_filename = None
if "?" in image_url:
url_path = image_url.split("?")[0]
else:
url_path = image_url
original_filename = url_path.split("/")[-1] if "/" in url_path else None
result = await self._preview_service.replace_preview(
model_path=model_path,
preview_data=preview_bytes,
content_type=content_type,
original_filename=original_filename,
nsfw_level=nsfw_level,
update_preview_in_cache=self._service.scanner.update_preview_in_cache,
metadata_loader=self._metadata_sync.load_local_metadata,
)
return web.json_response(
{
"success": True,
"preview_url": config.get_preview_static_url(
result["preview_path"]
),
"preview_nsfw_level": result["preview_nsfw_level"],
}
)
except Exception as exc:
self._logger.error("Error setting preview from URL: %s", exc, exc_info=True)
return web.json_response({"success": False, "error": str(exc)}, status=500)
if not image_url:
return web.json_response(
{"success": False, "error": "Image URL is required"}, status=400
)
# Download the image from the remote URL
self._logger.info(f"Downloading preview from {image_url} for {model_path}")
async with aiohttp.ClientSession() as session:
async with session.get(image_url) as response:
if response.status != 200:
return web.json_response(
{
"success": False,
"error": f"Failed to download image: HTTP {response.status}",
},
status=502,
)
content_type = response.headers.get("Content-Type", "image/jpeg")
preview_data = await response.read()
# Extract original filename from URL
original_filename = None
if "?" in image_url:
url_path = image_url.split("?")[0]
else:
url_path = image_url
original_filename = (
url_path.split("/")[-1] if "/" in url_path else None
)
result = await self._preview_service.replace_preview(
model_path=model_path,
preview_data=preview_data,
content_type=content_type,
original_filename=original_filename,
nsfw_level=nsfw_level,
update_preview_in_cache=self._service.scanner.update_preview_in_cache,
metadata_loader=self._metadata_sync.load_local_metadata,
)
return web.json_response(
{
"success": True,
"preview_url": config.get_preview_static_url(
result["preview_path"]
),
"preview_nsfw_level": result["preview_nsfw_level"],
}
)
except Exception as exc:
self._logger.error("Error setting preview from URL: %s", exc, exc_info=True)
return web.json_response({"success": False, "error": str(exc)}, status=500)
async def save_metadata(self, request: web.Request) -> web.Response:
try:
data = await request.json()
@@ -679,7 +924,7 @@ class ModelQueryHandler:
async def get_base_models(self, request: web.Request) -> web.Response:
try:
limit = int(request.query.get("limit", "20"))
if limit < 1 or limit > 100:
if limit < 0 or limit > 100:
limit = 20
base_models = await self._service.get_base_models(limit)
return web.json_response({"success": True, "base_models": base_models})
@@ -814,9 +1059,7 @@ class ModelQueryHandler:
# Format response
group = {"hash": sha256, "models": []}
for model in sorted_models:
group["models"].append(
await self._service.format_response(model)
)
group["models"].append(await self._service.format_response(model))
# Only include groups with 2+ models after filtering
if len(group["models"]) > 1:
@@ -845,7 +1088,9 @@ class ModelQueryHandler:
"favorites_only": request.query.get("favorites_only", "").lower() == "true",
}
def _apply_duplicate_filters(self, models: List[Dict[str, Any]], filters: Dict[str, Any]) -> List[Dict[str, Any]]:
def _apply_duplicate_filters(
self, models: List[Dict[str, Any]], filters: Dict[str, Any]
) -> List[Dict[str, Any]]:
"""Apply filters to a list of models within a duplicate group."""
result = models
@@ -886,7 +1131,9 @@ class ModelQueryHandler:
return result
def _sort_duplicate_group(self, models: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
def _sort_duplicate_group(
self, models: List[Dict[str, Any]]
) -> List[Dict[str, Any]]:
"""Sort models: originals first (left), copies (with -????. pattern) last (right)."""
if len(models) <= 1:
return models
@@ -1096,8 +1343,11 @@ class ModelQueryHandler:
async def get_relative_paths(self, request: web.Request) -> web.Response:
try:
search = request.query.get("search", "").strip()
limit = min(int(request.query.get("limit", "15")), 50)
matching_paths = await self._service.search_relative_paths(search, limit)
limit = min(int(request.query.get("limit", "15")), 100)
offset = max(0, int(request.query.get("offset", "0")))
matching_paths = await self._service.search_relative_paths(
search, limit, offset
)
return web.json_response(
{"success": True, "relative_paths": matching_paths}
)
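A quick sketch of a query using the new offset support in get_relative_paths; the prefix and values are assumptions.
# limit is clamped to 100 server-side; offset defaults to 0 and is floored at 0.
url = "/api/lm/loras/relative-paths?search=char&limit=50&offset=100"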
@@ -1171,10 +1421,13 @@ class ModelDownloadHandler:
data["source"] = source
if file_params_json:
import json
try:
data["file_params"] = json.loads(file_params_json)
except json.JSONDecodeError:
self._logger.warning("Invalid file_params JSON: %s", file_params_json)
self._logger.warning(
"Invalid file_params JSON: %s", file_params_json
)
loop = asyncio.get_event_loop()
future = loop.create_future()
@@ -1344,6 +1597,20 @@ class ModelCivitaiHandler:
cache = await self._service.scanner.get_cached_data()
version_index = cache.version_index
downloaded_version_ids: set[int] = set()
try:
history_service = await ServiceRegistry.get_downloaded_version_history_service()
downloaded_version_ids = set(
await history_service.get_downloaded_version_ids(
self._service.model_type,
model_id,
)
)
except Exception as exc: # pragma: no cover - defensive logging
self._logger.debug(
"Failed to load download history for CivitAI versions: %s",
exc,
)
for version in versions:
version_id = None
@@ -1360,6 +1627,9 @@ class ModelCivitaiHandler:
else None
)
version["existsLocally"] = cache_entry is not None
version["hasBeenDownloaded"] = (
version_id in downloaded_version_ids if version_id is not None else False
)
if cache_entry and isinstance(cache_entry, Mapping):
local_path = cache_entry.get("file_path")
if local_path:
@@ -1602,6 +1872,11 @@ class ModelUpdateHandler:
status=429,
)
except Exception as exc: # pragma: no cover - defensive log
if is_expected_offline_error(str(exc)):
return web.json_response(
{"success": False, "error": OFFLINE_FRIENDLY_MESSAGE},
status=503,
)
self._logger.error("Failed to fetch license info: %s", exc, exc_info=True)
return web.json_response({"success": False, "error": str(exc)}, status=500)
@@ -1690,9 +1965,12 @@ class ModelUpdateHandler:
{"success": False, "error": str(exc) or "Rate limited"}, status=429
)
except Exception as exc: # pragma: no cover - defensive logging
self._logger.error(
"Failed to refresh model updates: %s", exc, exc_info=True
)
if is_expected_offline_error(str(exc)):
return web.json_response(
{"success": False, "error": OFFLINE_FRIENDLY_MESSAGE},
status=503,
)
self._logger.error("Failed to refresh model updates: %s", exc, exc_info=True)
return web.json_response({"success": False, "error": str(exc)}, status=500)
serialized_records = []
@@ -1905,7 +2183,8 @@ class ModelUpdateHandler:
from dataclasses import replace
new_record = replace(
record, versions=list(version_map.values()),
record,
versions=list(version_map.values()),
)
# Optionally persist to database for caching
@@ -2077,7 +2356,7 @@ class ModelUpdateHandler:
self,
record,
*,
version_context: Optional[Dict[int, Dict[str, Optional[str]]]] = None,
version_context: Optional[Dict[int, Dict[str, Any]]] = None,
) -> Dict:
context = version_context or {}
# Check user setting for hiding early access versions
@@ -2106,7 +2385,7 @@ class ModelUpdateHandler:
@staticmethod
def _serialize_version(
version, context: Optional[Dict[str, Optional[str]]]
version, context: Optional[Dict[str, Any]]
) -> Dict:
context = context or {}
preview_override = context.get("preview_override")
@@ -2120,6 +2399,7 @@ class ModelUpdateHandler:
if version.early_access_ends_at:
try:
from datetime import datetime, timezone
ea_date = datetime.fromisoformat(
version.early_access_ends_at.replace("Z", "+00:00")
)
@@ -2127,7 +2407,7 @@ class ModelUpdateHandler:
except (ValueError, AttributeError):
# If date parsing fails, treat as active EA (conservative)
is_early_access = True
elif getattr(version, 'is_early_access', False):
elif getattr(version, "is_early_access", False):
# Fallback to basic EA flag from bulk API
is_early_access = True
@@ -2139,17 +2419,42 @@ class ModelUpdateHandler:
"sizeBytes": version.size_bytes,
"previewUrl": preview_url,
"isInLibrary": version.is_in_library,
"hasBeenDownloaded": bool(context.get("has_been_downloaded", False)),
"shouldIgnore": version.should_ignore,
"earlyAccessEndsAt": version.early_access_ends_at,
"isEarlyAccess": is_early_access,
"usageControl": version.usage_control,
"filePath": context.get("file_path"),
"fileName": context.get("file_name"),
}
async def _build_version_context(
self, record
) -> Dict[int, Dict[str, Optional[str]]]:
context: Dict[int, Dict[str, Optional[str]]] = {}
) -> Dict[int, Dict[str, Any]]:
context: Dict[int, Dict[str, Any]] = {}
downloaded_version_ids: set[int] = set()
try:
history_service = await ServiceRegistry.get_downloaded_version_history_service()
downloaded_version_ids = set(
await history_service.get_downloaded_version_ids(
record.model_type,
record.model_id,
)
)
except Exception as exc: # pragma: no cover - defensive logging
self._logger.debug(
"Failed to load download history while building version context: %s",
exc,
)
for version in record.versions:
context[version.version_id] = {
"file_path": None,
"file_name": None,
"preview_override": None,
"has_been_downloaded": version.version_id in downloaded_version_ids,
}
try:
cache = await self._service.scanner.get_cached_data()
except Exception as exc: # pragma: no cover - defensive logging
@@ -2168,16 +2473,21 @@ class ModelUpdateHandler:
cache_entry = version_index.get(version.version_id)
if isinstance(cache_entry, Mapping):
preview = cache_entry.get("preview_url")
context_entry: Dict[str, Optional[str]] = {
"file_path": cache_entry.get("file_path"),
"file_name": cache_entry.get("file_name"),
"preview_override": None,
}
context_entry = context.setdefault(
version.version_id,
{
"file_path": None,
"file_name": None,
"preview_override": None,
"has_been_downloaded": version.version_id in downloaded_version_ids,
},
)
context_entry["file_path"] = cache_entry.get("file_path")
context_entry["file_name"] = cache_entry.get("file_name")
if isinstance(preview, str) and preview:
context_entry["preview_override"] = config.get_preview_static_url(
preview
)
context[version.version_id] = context_entry
return context
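A rough sketch of the per-version context entry that _build_version_context assembles and _serialize_version reads; the TypedDict itself is illustrative and not part of the diff.
from typing import Optional, TypedDict

class VersionContextEntry(TypedDict):
    # Populated from the scanner cache when the version exists locally.
    file_path: Optional[str]
    file_name: Optional[str]
    # Static preview URL derived from the cached preview, if any.
    preview_override: Optional[str]
    # True when the version id appears in the download-history service.
    has_been_downloaded: bool

# context: dict[int, VersionContextEntry], keyed by version_id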
@@ -2201,12 +2511,15 @@ class ModelHandlerSet:
return {
"handle_models_page": self.page_view.handle,
"get_models": self.listing.get_models,
"get_excluded_models": self.listing.get_excluded_models,
"delete_model": self.management.delete_model,
"exclude_model": self.management.exclude_model,
"unexclude_model": self.management.unexclude_model,
"fetch_civitai": self.management.fetch_civitai,
"fetch_all_civitai": self.civitai.fetch_all_civitai,
"relink_civitai": self.management.relink_civitai,
"replace_preview": self.management.replace_preview,
"set_preview_from_url": self.management.set_preview_from_url,
"save_metadata": self.management.save_metadata,
"add_tags": self.management.add_tags,
"rename_model": self.management.rename_model,

View File

@@ -1,4 +1,5 @@
"""Dedicated handler objects for recipe-related routes."""
from __future__ import annotations
import json
@@ -8,6 +9,7 @@ import re
import asyncio
import tempfile
from dataclasses import dataclass
from pathlib import Path
from typing import Any, Awaitable, Callable, Dict, List, Mapping, Optional
from aiohttp import web
@@ -24,11 +26,12 @@ from ...services.recipes import (
RecipeValidationError,
)
from ...services.metadata_service import get_default_metadata_provider
from ...utils.civitai_utils import rewrite_preview_url
from ...utils.civitai_utils import extract_civitai_image_id, rewrite_preview_url
from ...utils.exif_utils import ExifUtils
from ...recipes.merger import GenParamsMerger
from ...recipes.enrichment import RecipeEnricher
from ...services.websocket_manager import ws_manager as default_ws_manager
from ...services.batch_import_service import BatchImportService
Logger = logging.Logger
EnsureDependenciesCallable = Callable[[], Awaitable[None]]
@@ -46,8 +49,11 @@ class RecipeHandlerSet:
management: "RecipeManagementHandler"
analysis: "RecipeAnalysisHandler"
sharing: "RecipeSharingHandler"
batch_import: "BatchImportHandler"
def to_route_mapping(self) -> Mapping[str, Callable[[web.Request], Awaitable[web.StreamResponse]]]:
def to_route_mapping(
self,
) -> Mapping[str, Callable[[web.Request], Awaitable[web.StreamResponse]]]:
"""Expose handler coroutines keyed by registrar handler names."""
return {
@@ -75,12 +81,18 @@ class RecipeHandlerSet:
"bulk_delete": self.management.bulk_delete,
"save_recipe_from_widget": self.management.save_recipe_from_widget,
"get_recipes_for_lora": self.query.get_recipes_for_lora,
"get_recipes_for_checkpoint": self.query.get_recipes_for_checkpoint,
"scan_recipes": self.query.scan_recipes,
"move_recipe": self.management.move_recipe,
"repair_recipes": self.management.repair_recipes,
"cancel_repair": self.management.cancel_repair,
"repair_recipe": self.management.repair_recipe,
"get_repair_progress": self.management.get_repair_progress,
"start_batch_import": self.batch_import.start_batch_import,
"get_batch_import_progress": self.batch_import.get_batch_import_progress,
"cancel_batch_import": self.batch_import.cancel_batch_import,
"start_directory_import": self.batch_import.start_directory_import,
"browse_directory": self.batch_import.browse_directory,
}
@@ -170,8 +182,10 @@ class RecipeListingHandler:
search_options = {
"title": request.query.get("search_title", "true").lower() == "true",
"tags": request.query.get("search_tags", "true").lower() == "true",
"lora_name": request.query.get("search_lora_name", "true").lower() == "true",
"lora_model": request.query.get("search_lora_model", "true").lower() == "true",
"lora_name": request.query.get("search_lora_name", "true").lower()
== "true",
"lora_model": request.query.get("search_lora_model", "true").lower()
== "true",
"prompt": request.query.get("search_prompt", "true").lower() == "true",
}
@@ -205,6 +219,7 @@ class RecipeListingHandler:
filters["tags"] = tag_filters
lora_hash = request.query.get("lora_hash")
checkpoint_hash = request.query.get("checkpoint_hash")
result = await recipe_scanner.get_paginated_data(
page=page,
@@ -214,6 +229,7 @@ class RecipeListingHandler:
filters=filters,
search_options=search_options,
lora_hash=lora_hash,
checkpoint_hash=checkpoint_hash,
folder=folder,
recursive=recursive,
)
@@ -246,7 +262,9 @@ class RecipeListingHandler:
return web.json_response({"error": "Recipe not found"}, status=404)
return web.json_response(recipe)
except Exception as exc:
self._logger.error("Error retrieving recipe details: %s", exc, exc_info=True)
self._logger.error(
"Error retrieving recipe details: %s", exc, exc_info=True
)
return web.json_response({"error": str(exc)}, status=500)
def format_recipe_file_url(self, file_path: str) -> str:
@@ -256,7 +274,9 @@ class RecipeListingHandler:
if static_url:
return static_url
except Exception as exc: # pragma: no cover - logging path
self._logger.error("Error formatting recipe file URL: %s", exc, exc_info=True)
self._logger.error(
"Error formatting recipe file URL: %s", exc, exc_info=True
)
return "/loras_static/images/no-preview.png"
return "/loras_static/images/no-preview.png"
@@ -293,7 +313,9 @@ class RecipeQueryHandler:
for tag in recipe.get("tags", []) or []:
tag_counts[tag] = tag_counts.get(tag, 0) + 1
sorted_tags = [{"tag": tag, "count": count} for tag, count in tag_counts.items()]
sorted_tags = [
{"tag": tag, "count": count} for tag, count in tag_counts.items()
]
sorted_tags.sort(key=lambda entry: entry["count"], reverse=True)
return web.json_response({"success": True, "tags": sorted_tags[:limit]})
except Exception as exc:
@@ -307,16 +329,24 @@ class RecipeQueryHandler:
if recipe_scanner is None:
raise RuntimeError("Recipe scanner unavailable")
limit = int(request.query.get("limit", "20"))
cache = await recipe_scanner.get_cached_data()
base_model_counts: Dict[str, int] = {}
for recipe in getattr(cache, "raw_data", []):
base_model = recipe.get("base_model")
if base_model:
base_model_counts[base_model] = base_model_counts.get(base_model, 0) + 1
base_model_counts[base_model] = (
base_model_counts.get(base_model, 0) + 1
)
sorted_models = [{"name": model, "count": count} for model, count in base_model_counts.items()]
sorted_models = [
{"name": model, "count": count}
for model, count in base_model_counts.items()
]
sorted_models.sort(key=lambda entry: entry["count"], reverse=True)
if limit > 0:
sorted_models = sorted_models[:limit]
return web.json_response({"success": True, "base_models": sorted_models})
except Exception as exc:
self._logger.error("Error retrieving base models: %s", exc, exc_info=True)
@@ -345,7 +375,9 @@ class RecipeQueryHandler:
folders = await recipe_scanner.get_folders()
return web.json_response({"success": True, "folders": folders})
except Exception as exc:
self._logger.error("Error retrieving recipe folders: %s", exc, exc_info=True)
self._logger.error(
"Error retrieving recipe folders: %s", exc, exc_info=True
)
return web.json_response({"success": False, "error": str(exc)}, status=500)
async def get_folder_tree(self, request: web.Request) -> web.Response:
@@ -358,7 +390,9 @@ class RecipeQueryHandler:
folder_tree = await recipe_scanner.get_folder_tree()
return web.json_response({"success": True, "tree": folder_tree})
except Exception as exc:
self._logger.error("Error retrieving recipe folder tree: %s", exc, exc_info=True)
self._logger.error(
"Error retrieving recipe folder tree: %s", exc, exc_info=True
)
return web.json_response({"success": False, "error": str(exc)}, status=500)
async def get_unified_folder_tree(self, request: web.Request) -> web.Response:
@@ -371,7 +405,9 @@ class RecipeQueryHandler:
folder_tree = await recipe_scanner.get_folder_tree()
return web.json_response({"success": True, "tree": folder_tree})
except Exception as exc:
self._logger.error("Error retrieving unified recipe folder tree: %s", exc, exc_info=True)
self._logger.error(
"Error retrieving unified recipe folder tree: %s", exc, exc_info=True
)
return web.json_response({"success": False, "error": str(exc)}, status=500)
async def get_recipes_for_lora(self, request: web.Request) -> web.Response:
@@ -383,7 +419,9 @@ class RecipeQueryHandler:
lora_hash = request.query.get("hash")
if not lora_hash:
return web.json_response({"success": False, "error": "Lora hash is required"}, status=400)
return web.json_response(
{"success": False, "error": "Lora hash is required"}, status=400
)
matching_recipes = await recipe_scanner.get_recipes_for_lora(lora_hash)
return web.json_response({"success": True, "recipes": matching_recipes})
@@ -391,6 +429,28 @@ class RecipeQueryHandler:
self._logger.error("Error getting recipes for Lora: %s", exc)
return web.json_response({"success": False, "error": str(exc)}, status=500)
async def get_recipes_for_checkpoint(self, request: web.Request) -> web.Response:
try:
await self._ensure_dependencies_ready()
recipe_scanner = self._recipe_scanner_getter()
if recipe_scanner is None:
raise RuntimeError("Recipe scanner unavailable")
checkpoint_hash = request.query.get("hash")
if not checkpoint_hash:
return web.json_response(
{"success": False, "error": "Checkpoint hash is required"},
status=400,
)
matching_recipes = await recipe_scanner.get_recipes_for_checkpoint(
checkpoint_hash
)
return web.json_response({"success": True, "recipes": matching_recipes})
except Exception as exc:
self._logger.error("Error getting recipes for checkpoint: %s", exc)
return web.json_response({"success": False, "error": str(exc)}, status=500)
async def scan_recipes(self, request: web.Request) -> web.Response:
try:
await self._ensure_dependencies_ready()
@@ -400,7 +460,9 @@ class RecipeQueryHandler:
self._logger.info("Manually triggering recipe cache rebuild")
await recipe_scanner.get_cached_data(force_refresh=True)
return web.json_response({"success": True, "message": "Recipe cache refreshed successfully"})
return web.json_response(
{"success": True, "message": "Recipe cache refreshed successfully"}
)
except Exception as exc:
self._logger.error("Error refreshing recipe cache: %s", exc, exc_info=True)
return web.json_response({"success": False, "error": str(exc)}, status=500)
@@ -429,7 +491,9 @@ class RecipeQueryHandler:
"id": recipe.get("id"),
"title": recipe.get("title"),
"file_url": recipe.get("file_url")
or self._format_recipe_file_url(recipe.get("file_path", "")),
or self._format_recipe_file_url(
recipe.get("file_path", "")
),
"modified": recipe.get("modified"),
"created_date": recipe.get("created_date"),
"lora_count": len(recipe.get("loras", [])),
@@ -437,7 +501,9 @@ class RecipeQueryHandler:
)
if len(recipes) >= 2:
recipes.sort(key=lambda entry: entry.get("modified", 0), reverse=True)
recipes.sort(
key=lambda entry: entry.get("modified", 0), reverse=True
)
response_data.append(
{
"type": "fingerprint",
@@ -460,7 +526,9 @@ class RecipeQueryHandler:
"id": recipe.get("id"),
"title": recipe.get("title"),
"file_url": recipe.get("file_url")
or self._format_recipe_file_url(recipe.get("file_path", "")),
or self._format_recipe_file_url(
recipe.get("file_path", "")
),
"modified": recipe.get("modified"),
"created_date": recipe.get("created_date"),
"lora_count": len(recipe.get("loras", [])),
@@ -468,7 +536,9 @@ class RecipeQueryHandler:
)
if len(recipes) >= 2:
recipes.sort(key=lambda entry: entry.get("modified", 0), reverse=True)
recipes.sort(
key=lambda entry: entry.get("modified", 0), reverse=True
)
response_data.append(
{
"type": "source_url",
@@ -479,9 +549,13 @@ class RecipeQueryHandler:
)
response_data.sort(key=lambda entry: entry["count"], reverse=True)
return web.json_response({"success": True, "duplicate_groups": response_data})
return web.json_response(
{"success": True, "duplicate_groups": response_data}
)
except Exception as exc:
self._logger.error("Error finding duplicate recipes: %s", exc, exc_info=True)
self._logger.error(
"Error finding duplicate recipes: %s", exc, exc_info=True
)
return web.json_response({"success": False, "error": str(exc)}, status=500)
async def get_recipe_syntax(self, request: web.Request) -> web.Response:
@@ -498,9 +572,13 @@ class RecipeQueryHandler:
return web.json_response({"error": "Recipe not found"}, status=404)
if not syntax_parts:
return web.json_response({"error": "No LoRAs found in this recipe"}, status=400)
return web.json_response(
{"error": "No LoRAs found in this recipe"}, status=400
)
return web.json_response({"success": True, "syntax": " ".join(syntax_parts)})
return web.json_response(
{"success": True, "syntax": " ".join(syntax_parts)}
)
except Exception as exc:
self._logger.error("Error generating recipe syntax: %s", exc, exc_info=True)
return web.json_response({"error": str(exc)}, status=500)
@@ -561,11 +639,17 @@ class RecipeManagementHandler:
await self._ensure_dependencies_ready()
recipe_scanner = self._recipe_scanner_getter()
if recipe_scanner is None:
return web.json_response({"success": False, "error": "Recipe scanner unavailable"}, status=503)
return web.json_response(
{"success": False, "error": "Recipe scanner unavailable"},
status=503,
)
# Check if already running
if self._ws_manager.is_recipe_repair_running():
return web.json_response({"success": False, "error": "Recipe repair already in progress"}, status=409)
return web.json_response(
{"success": False, "error": "Recipe repair already in progress"},
status=409,
)
recipe_scanner.reset_cancellation()
@@ -579,11 +663,12 @@ class RecipeManagementHandler:
progress_callback=progress_callback
)
except Exception as e:
self._logger.error(f"Error in recipe repair task: {e}", exc_info=True)
await self._ws_manager.broadcast_recipe_repair_progress({
"status": "error",
"error": str(e)
})
self._logger.error(
f"Error in recipe repair task: {e}", exc_info=True
)
await self._ws_manager.broadcast_recipe_repair_progress(
{"status": "error", "error": str(e)}
)
finally:
# Keep the final status for a while so the UI can see it
await asyncio.sleep(5)
@@ -593,7 +678,9 @@ class RecipeManagementHandler:
asyncio.create_task(run_repair())
return web.json_response({"success": True, "message": "Recipe repair started"})
return web.json_response(
{"success": True, "message": "Recipe repair started"}
)
except Exception as exc:
self._logger.error("Error starting recipe repair: %s", exc, exc_info=True)
return web.json_response({"success": False, "error": str(exc)}, status=500)
@@ -603,10 +690,15 @@ class RecipeManagementHandler:
await self._ensure_dependencies_ready()
recipe_scanner = self._recipe_scanner_getter()
if recipe_scanner is None:
return web.json_response({"success": False, "error": "Recipe scanner unavailable"}, status=503)
return web.json_response(
{"success": False, "error": "Recipe scanner unavailable"},
status=503,
)
recipe_scanner.cancel_task()
return web.json_response({"success": True, "message": "Cancellation requested"})
return web.json_response(
{"success": True, "message": "Cancellation requested"}
)
except Exception as exc:
self._logger.error("Error cancelling recipe repair: %s", exc, exc_info=True)
return web.json_response({"success": False, "error": str(exc)}, status=500)
@@ -616,7 +708,10 @@ class RecipeManagementHandler:
await self._ensure_dependencies_ready()
recipe_scanner = self._recipe_scanner_getter()
if recipe_scanner is None:
return web.json_response({"success": False, "error": "Recipe scanner unavailable"}, status=503)
return web.json_response(
{"success": False, "error": "Recipe scanner unavailable"},
status=503,
)
recipe_id = request.match_info["recipe_id"]
result = await recipe_scanner.repair_recipe_by_id(recipe_id)
@@ -632,12 +727,13 @@ class RecipeManagementHandler:
progress = self._ws_manager.get_recipe_repair_progress()
if progress:
return web.json_response({"success": True, "progress": progress})
return web.json_response({"success": False, "message": "No repair in progress"}, status=404)
return web.json_response(
{"success": False, "message": "No repair in progress"}, status=404
)
except Exception as exc:
self._logger.error("Error getting repair progress: %s", exc, exc_info=True)
return web.json_response({"success": False, "error": str(exc)}, status=500)
async def import_remote_recipe(self, request: web.Request) -> web.Response:
try:
await self._ensure_dependencies_ready()
@@ -658,15 +754,25 @@ class RecipeManagementHandler:
if not resources_raw:
raise RecipeValidationError("Missing required field: resources")
checkpoint_entry, lora_entries = self._parse_resources_payload(resources_raw)
checkpoint_entry, lora_entries = self._parse_resources_payload(
resources_raw
)
gen_params_request = self._parse_gen_params(params.get("gen_params"))
self._logger.info(
"Remote recipe import received: url=%s, request_gen_params_keys=%s, lora_count=%d, checkpoint_keys=%s",
image_url,
sorted(gen_params_request.keys()) if gen_params_request else [],
len(lora_entries),
sorted(checkpoint_entry.keys()) if isinstance(checkpoint_entry, dict) else [],
)
# 2. Initial Metadata Construction
metadata: Dict[str, Any] = {
"base_model": params.get("base_model", "") or "",
"loras": lora_entries,
"gen_params": gen_params_request or {},
"source_url": image_url
"source_url": image_url,
}
source_path = params.get("source_path")
@@ -681,14 +787,20 @@ class RecipeManagementHandler:
# Try to resolve base model from checkpoint if not explicitly provided
if not metadata["base_model"]:
base_model_from_metadata = await self._resolve_base_model_from_checkpoint(checkpoint_entry)
base_model_from_metadata = (
await self._resolve_base_model_from_checkpoint(checkpoint_entry)
)
if base_model_from_metadata:
metadata["base_model"] = base_model_from_metadata
tags = self._parse_tags(params.get("tags"))
# 3. Download Image
image_bytes, extension, civitai_meta_from_download = await self._download_remote_media(image_url)
(
image_bytes,
extension,
civitai_meta_from_download,
) = await self._download_remote_media(image_url)
# 4. Extract Embedded Metadata
# Note: We still extract this here because Enricher currently expects 'gen_params' to already be populated
@@ -706,16 +818,24 @@ class RecipeManagementHandler:
# Let's extract embedded metadata first
embedded_gen_params = {}
try:
with tempfile.NamedTemporaryFile(suffix=extension, delete=False) as temp_img:
with tempfile.NamedTemporaryFile(
suffix=extension, delete=False
) as temp_img:
temp_img.write(image_bytes)
temp_img_path = temp_img.name
try:
raw_embedded = ExifUtils.extract_image_metadata(temp_img_path)
if raw_embedded:
parser = self._analysis_service._recipe_parser_factory.create_parser(raw_embedded)
parser = (
self._analysis_service._recipe_parser_factory.create_parser(
raw_embedded
)
)
if parser:
parsed_embedded = await parser.parse_metadata(raw_embedded, recipe_scanner=recipe_scanner)
parsed_embedded = await parser.parse_metadata(
raw_embedded, recipe_scanner=recipe_scanner
)
if parsed_embedded and "gen_params" in parsed_embedded:
embedded_gen_params = parsed_embedded["gen_params"]
else:
@@ -724,7 +844,9 @@ class RecipeManagementHandler:
if os.path.exists(temp_img_path):
os.unlink(temp_img_path)
except Exception as exc:
self._logger.warning("Failed to extract embedded metadata during import: %s", exc)
self._logger.warning(
"Failed to extract embedded metadata during import: %s", exc
)
# Pre-populate gen_params with embedded data so Enricher treats it as the "base" layer
if embedded_gen_params:
@@ -739,7 +861,7 @@ class RecipeManagementHandler:
await RecipeEnricher.enrich_recipe(
recipe=metadata,
civitai_client=civitai_client,
request_params=gen_params_request # Pass explicit request params here to override
request_params=gen_params_request, # Pass explicit request params here to override
)
# If we got civitai_meta from download but Enricher didn't fetch it (e.g. not a civitai URL or failed),
@@ -762,7 +884,9 @@ class RecipeManagementHandler:
except RecipeDownloadError as exc:
return web.json_response({"error": str(exc)}, status=400)
except Exception as exc:
self._logger.error("Error importing recipe from remote source: %s", exc, exc_info=True)
self._logger.error(
"Error importing recipe from remote source: %s", exc, exc_info=True
)
return web.json_response({"error": str(exc)}, status=500)
async def delete_recipe(self, request: web.Request) -> web.Response:
@@ -816,7 +940,11 @@ class RecipeManagementHandler:
target_path = data.get("target_path")
if not recipe_id or not target_path:
return web.json_response(
{"success": False, "error": "recipe_id and target_path are required"}, status=400
{
"success": False,
"error": "recipe_id and target_path are required",
},
status=400,
)
result = await self._persistence_service.move_recipe(
@@ -845,7 +973,11 @@ class RecipeManagementHandler:
target_path = data.get("target_path")
if not recipe_ids or not target_path:
return web.json_response(
{"success": False, "error": "recipe_ids and target_path are required"}, status=400
{
"success": False,
"error": "recipe_ids and target_path are required",
},
status=400,
)
result = await self._persistence_service.move_recipes_bulk(
@@ -934,7 +1066,9 @@ class RecipeManagementHandler:
except RecipeValidationError as exc:
return web.json_response({"error": str(exc)}, status=400)
except Exception as exc:
self._logger.error("Error saving recipe from widget: %s", exc, exc_info=True)
self._logger.error(
"Error saving recipe from widget: %s", exc, exc_info=True
)
return web.json_response({"error": str(exc)}, status=500)
async def _parse_save_payload(self, reader) -> dict[str, Any]:
@@ -1006,7 +1140,9 @@ class RecipeManagementHandler:
raise RecipeValidationError("gen_params payload must be an object")
return parsed
def _parse_resources_payload(self, payload_raw: str) -> tuple[Optional[Dict[str, Any]], List[Dict[str, Any]]]:
def _parse_resources_payload(
self, payload_raw: str
) -> tuple[Optional[Dict[str, Any]], List[Dict[str, Any]]]:
try:
payload = json.loads(payload_raw)
except json.JSONDecodeError as exc:
@@ -1063,13 +1199,19 @@ class RecipeManagementHandler:
temp_path = temp_file.name
download_url = image_url
image_info = None
civitai_match = re.match(r"https://civitai\.com/images/(\d+)", image_url)
if civitai_match:
civitai_image_id = extract_civitai_image_id(image_url)
if civitai_image_id:
if civitai_client is None:
raise RecipeDownloadError("Civitai client unavailable for image download")
image_info = await civitai_client.get_image_info(civitai_match.group(1))
raise RecipeDownloadError(
"Civitai client unavailable for image download"
)
image_info = await civitai_client.get_image_info(
civitai_image_id, source_url=image_url
)
if not image_info:
raise RecipeDownloadError("Failed to fetch image information from Civitai")
raise RecipeDownloadError(
"Failed to fetch image information from Civitai"
)
media_url = image_info.get("url")
if not media_url:
@@ -1083,18 +1225,24 @@ class RecipeManagementHandler:
else:
download_url = media_url
success, result = await downloader.download_file(download_url, temp_path, use_auth=False)
success, result = await downloader.download_file(
download_url, temp_path, use_auth=False
)
if not success:
raise RecipeDownloadError(f"Failed to download image: {result}")
# Extract extension from URL
url_path = download_url.split('?')[0].split('#')[0]
url_path = download_url.split("?")[0].split("#")[0]
extension = os.path.splitext(url_path)[1].lower()
if not extension:
extension = ".webp" # Default to webp if unknown
extension = ".webp" # Default to webp if unknown
with open(temp_path, "rb") as file_obj:
return file_obj.read(), extension, image_info.get("meta") if civitai_match and image_info else None
return (
file_obj.read(),
extension,
image_info.get("meta") if civitai_image_id and image_info else None,
)
except RecipeDownloadError:
raise
except RecipeValidationError:
@@ -1108,14 +1256,15 @@ class RecipeManagementHandler:
except FileNotFoundError:
pass
def _safe_int(self, value: Any) -> int:
try:
return int(value)
except (TypeError, ValueError):
return 0
async def _resolve_base_model_from_checkpoint(self, checkpoint_entry: Dict[str, Any]) -> str:
async def _resolve_base_model_from_checkpoint(
self, checkpoint_entry: Dict[str, Any]
) -> str:
version_id = self._safe_int(checkpoint_entry.get("modelVersionId"))
if not version_id:
@@ -1134,7 +1283,9 @@ class RecipeManagementHandler:
base_model = version_info.get("baseModel") or ""
return str(base_model) if base_model is not None else ""
except Exception as exc: # pragma: no cover - defensive logging
self._logger.warning("Failed to resolve base model from checkpoint metadata: %s", exc)
self._logger.warning(
"Failed to resolve base model from checkpoint metadata: %s", exc
)
return ""
@@ -1279,5 +1430,311 @@ class RecipeSharingHandler:
except RecipeNotFoundError as exc:
return web.json_response({"error": str(exc)}, status=404)
except Exception as exc:
self._logger.error("Error downloading shared recipe: %s", exc, exc_info=True)
self._logger.error(
"Error downloading shared recipe: %s", exc, exc_info=True
)
return web.json_response({"error": str(exc)}, status=500)
class BatchImportHandler:
"""Handle batch import operations for recipes."""
def __init__(
self,
*,
ensure_dependencies_ready: EnsureDependenciesCallable,
recipe_scanner_getter: RecipeScannerGetter,
civitai_client_getter: CivitaiClientGetter,
logger: Logger,
batch_import_service: BatchImportService,
) -> None:
self._ensure_dependencies_ready = ensure_dependencies_ready
self._recipe_scanner_getter = recipe_scanner_getter
self._civitai_client_getter = civitai_client_getter
self._logger = logger
self._batch_import_service = batch_import_service
async def start_batch_import(self, request: web.Request) -> web.Response:
try:
await self._ensure_dependencies_ready()
if self._batch_import_service.is_import_running():
return web.json_response(
{"success": False, "error": "Batch import already in progress"},
status=409,
)
data = await request.json()
items = data.get("items", [])
tags = data.get("tags", [])
skip_no_metadata = data.get("skip_no_metadata", False)
if not items:
return web.json_response(
{"success": False, "error": "No items provided"},
status=400,
)
for item in items:
if not item.get("source"):
return web.json_response(
{
"success": False,
"error": "Each item must have a 'source' field",
},
status=400,
)
operation_id = await self._batch_import_service.start_batch_import(
recipe_scanner_getter=self._recipe_scanner_getter,
civitai_client_getter=self._civitai_client_getter,
items=items,
tags=tags,
skip_no_metadata=skip_no_metadata,
)
return web.json_response(
{
"success": True,
"operation_id": operation_id,
}
)
except RecipeValidationError as exc:
return web.json_response({"success": False, "error": str(exc)}, status=400)
except Exception as exc:
self._logger.error("Error starting batch import: %s", exc, exc_info=True)
return web.json_response({"success": False, "error": str(exc)}, status=500)
async def start_directory_import(self, request: web.Request) -> web.Response:
try:
await self._ensure_dependencies_ready()
if self._batch_import_service.is_import_running():
return web.json_response(
{"success": False, "error": "Batch import already in progress"},
status=409,
)
data = await request.json()
directory = data.get("directory")
recursive = data.get("recursive", True)
tags = data.get("tags", [])
skip_no_metadata = data.get("skip_no_metadata", True)
if not directory:
return web.json_response(
{"success": False, "error": "Directory path is required"},
status=400,
)
operation_id = await self._batch_import_service.start_directory_import(
recipe_scanner_getter=self._recipe_scanner_getter,
civitai_client_getter=self._civitai_client_getter,
directory=directory,
recursive=recursive,
tags=tags,
skip_no_metadata=skip_no_metadata,
)
return web.json_response(
{
"success": True,
"operation_id": operation_id,
}
)
except RecipeValidationError as exc:
return web.json_response({"success": False, "error": str(exc)}, status=400)
except Exception as exc:
self._logger.error(
"Error starting directory import: %s", exc, exc_info=True
)
return web.json_response({"success": False, "error": str(exc)}, status=500)
async def get_batch_import_progress(self, request: web.Request) -> web.Response:
try:
operation_id = request.query.get("operation_id")
if not operation_id:
return web.json_response(
{"success": False, "error": "operation_id is required"},
status=400,
)
progress = self._batch_import_service.get_progress(operation_id)
if not progress:
return web.json_response(
{"success": False, "error": "Operation not found"},
status=404,
)
return web.json_response(
{
"success": True,
"progress": progress.to_dict(),
}
)
except Exception as exc:
self._logger.error(
"Error getting batch import progress: %s", exc, exc_info=True
)
return web.json_response({"success": False, "error": str(exc)}, status=500)
async def cancel_batch_import(self, request: web.Request) -> web.Response:
try:
data = await request.json()
operation_id = data.get("operation_id")
if not operation_id:
return web.json_response(
{"success": False, "error": "operation_id is required"},
status=400,
)
cancelled = self._batch_import_service.cancel_import(operation_id)
if not cancelled:
return web.json_response(
{
"success": False,
"error": "Operation not found or already completed",
},
status=404,
)
return web.json_response(
{"success": True, "message": "Cancellation requested"}
)
except Exception as exc:
self._logger.error("Error cancelling batch import: %s", exc, exc_info=True)
return web.json_response({"success": False, "error": str(exc)}, status=500)
async def browse_directory(self, request: web.Request) -> web.Response:
"""Browse a directory and return its contents (subdirectories and files)."""
try:
data = await request.json()
directory_path = data.get("path", "")
if not directory_path:
return web.json_response(
{"success": False, "error": "Directory path is required"},
status=400,
)
# Normalize the path
path = Path(directory_path).expanduser().resolve()
# Security check: ensure path is within allowed directories
# Allow common image/model directories
allowed_roots = [
Path.home(),
Path("/"), # Allow browsing from root for flexibility
]
# Check if path is within any allowed root
is_allowed = False
for root in allowed_roots:
try:
path.relative_to(root)
is_allowed = True
break
except ValueError:
continue
if not is_allowed:
return web.json_response(
{"success": False, "error": "Access denied to this directory"},
status=403,
)
if not path.exists():
return web.json_response(
{"success": False, "error": "Directory does not exist"},
status=404,
)
if not path.is_dir():
return web.json_response(
{"success": False, "error": "Path is not a directory"},
status=400,
)
# List directory contents
directories = []
image_files = []
image_extensions = {
".jpg",
".jpeg",
".png",
".gif",
".webp",
".bmp",
".tiff",
".tif",
}
try:
for item in path.iterdir():
try:
if item.is_dir():
# Skip hidden directories and common system folders
if not item.name.startswith(".") and item.name not in [
"__pycache__",
"node_modules",
]:
directories.append(
{
"name": item.name,
"path": str(item),
"is_parent": False,
}
)
elif item.is_file() and item.suffix.lower() in image_extensions:
image_files.append(
{
"name": item.name,
"path": str(item),
"size": item.stat().st_size,
}
)
except (PermissionError, OSError):
# Skip files/directories we can't access
continue
# Sort directories and files alphabetically
directories.sort(key=lambda x: x["name"].lower())
image_files.sort(key=lambda x: x["name"].lower())
# Add parent directory if not at root
parent_path = path.parent
show_parent = str(path) != str(path.root)
return web.json_response(
{
"success": True,
"current_path": str(path),
"parent_path": str(parent_path) if show_parent else None,
"directories": directories,
"image_files": image_files,
"image_count": len(image_files),
"directory_count": len(directories),
}
)
except PermissionError:
return web.json_response(
{"success": False, "error": "Permission denied"},
status=403,
)
except OSError as exc:
return web.json_response(
{"success": False, "error": f"Error reading directory: {str(exc)}"},
status=500,
)
except json.JSONDecodeError:
return web.json_response(
{"success": False, "error": "Invalid JSON"},
status=400,
)
except Exception as exc:
self._logger.error("Error browsing directory: %s", exc, exc_info=True)
return web.json_response({"success": False, "error": str(exc)}, status=500)

View File

@@ -22,10 +22,17 @@ class RouteDefinition:
MISC_ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
RouteDefinition("GET", "/api/lm/settings", "get_settings"),
RouteDefinition("POST", "/api/lm/settings", "update_settings"),
RouteDefinition("GET", "/api/lm/doctor/diagnostics", "get_doctor_diagnostics"),
RouteDefinition("POST", "/api/lm/doctor/repair-cache", "repair_doctor_cache"),
RouteDefinition("POST", "/api/lm/doctor/resolve-filename-conflicts", "resolve_doctor_filename_conflicts"),
RouteDefinition("POST", "/api/lm/doctor/export-bundle", "export_doctor_bundle"),
RouteDefinition("GET", "/api/lm/priority-tags", "get_priority_tags"),
RouteDefinition("GET", "/api/lm/settings/libraries", "get_settings_libraries"),
RouteDefinition("POST", "/api/lm/settings/libraries/activate", "activate_library"),
RouteDefinition("GET", "/api/lm/health-check", "health_check"),
RouteDefinition("GET", "/api/lm/supporters", "get_supporters"),
RouteDefinition("GET", "/api/lm/wildcards/search", "search_wildcards"),
RouteDefinition("POST", "/api/lm/wildcards/open-location", "open_wildcards_location"),
RouteDefinition("POST", "/api/lm/open-file-location", "open_file_location"),
RouteDefinition("POST", "/api/lm/update-usage-stats", "update_usage_stats"),
RouteDefinition("GET", "/api/lm/get-usage-stats", "get_usage_stats"),
@@ -36,13 +43,54 @@ MISC_ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
RouteDefinition("POST", "/api/lm/update-node-widget", "update_node_widget"),
RouteDefinition("GET", "/api/lm/get-registry", "get_registry"),
RouteDefinition("GET", "/api/lm/check-model-exists", "check_model_exists"),
RouteDefinition("GET", "/api/lm/check-models-exist", "check_models_exist"),
RouteDefinition(
"GET",
"/api/lm/model-version-download-status",
"get_model_version_download_status",
),
RouteDefinition(
"POST",
"/api/lm/model-version-download-status",
"set_model_version_download_status",
),
RouteDefinition(
"GET",
"/api/lm/set-model-version-download-status",
"set_model_version_download_status",
),
RouteDefinition("GET", "/api/lm/civitai/user-models", "get_civitai_user_models"),
RouteDefinition("POST", "/api/lm/download-metadata-archive", "download_metadata_archive"),
RouteDefinition("POST", "/api/lm/remove-metadata-archive", "remove_metadata_archive"),
RouteDefinition("GET", "/api/lm/metadata-archive-status", "get_metadata_archive_status"),
RouteDefinition("GET", "/api/lm/model-versions-status", "get_model_versions_status"),
RouteDefinition(
"POST", "/api/lm/download-metadata-archive", "download_metadata_archive"
),
RouteDefinition(
"POST", "/api/lm/remove-metadata-archive", "remove_metadata_archive"
),
RouteDefinition(
"GET", "/api/lm/metadata-archive-status", "get_metadata_archive_status"
),
RouteDefinition("GET", "/api/lm/backup/status", "get_backup_status"),
RouteDefinition("POST", "/api/lm/backup/export", "export_backup"),
RouteDefinition("POST", "/api/lm/backup/import", "import_backup"),
RouteDefinition("POST", "/api/lm/backup/open-location", "open_backup_location"),
RouteDefinition(
"GET", "/api/lm/model-versions-status", "get_model_versions_status"
),
RouteDefinition("POST", "/api/lm/settings/open-location", "open_settings_location"),
RouteDefinition("GET", "/api/lm/custom-words/search", "search_custom_words"),
RouteDefinition("GET", "/api/lm/example-workflows", "get_example_workflows"),
RouteDefinition(
"GET", "/api/lm/example-workflows/{filename}", "get_example_workflow"
),
# Base model management routes
RouteDefinition("GET", "/api/lm/base-models", "get_base_models"),
RouteDefinition("POST", "/api/lm/base-models/refresh", "refresh_base_models"),
RouteDefinition(
"GET", "/api/lm/base-models/categories", "get_base_model_categories"
),
RouteDefinition(
"GET", "/api/lm/base-models/cache-status", "get_base_model_cache_status"
),
)
@@ -66,7 +114,11 @@ class MiscRouteRegistrar:
definitions: Iterable[RouteDefinition] = MISC_ROUTE_DEFINITIONS,
) -> None:
for definition in definitions:
self._bind(definition.method, definition.path, handler_lookup[definition.handler_name])
self._bind(
definition.method,
definition.path,
handler_lookup[definition.handler_name],
)
def _bind(self, method: str, path: str, handler: Callable) -> None:
add_method_name = self._METHOD_MAP[method.upper()]

View File

@@ -19,9 +19,12 @@ from ..services.downloader import get_downloader
from ..utils.usage_stats import UsageStats
from .handlers.misc_handlers import (
CustomWordsHandler,
DoctorHandler,
ExampleWorkflowsHandler,
FileSystemHandler,
HealthCheckHandler,
LoraCodeHandler,
BackupHandler,
MetadataArchiveHandler,
MiscHandlerSet,
ModelExampleFilesHandler,
@@ -29,17 +32,21 @@ from .handlers.misc_handlers import (
NodeRegistry,
NodeRegistryHandler,
SettingsHandler,
SupportersHandler,
TrainedWordsHandler,
UsageStatsHandler,
WildcardsHandler,
build_service_registry_adapter,
)
from .handlers.base_model_handlers import BaseModelHandlerSet
from .misc_route_registrar import MiscRouteRegistrar
logger = logging.getLogger(__name__)
standalone_mode = os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1" or os.environ.get(
"HF_HUB_DISABLE_TELEMETRY", "0"
) == "0"
standalone_mode = (
os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1"
or os.environ.get("HF_HUB_DISABLE_TELEMETRY", "0") == "0"
)
class MiscRoutes:
@@ -74,7 +81,9 @@ class MiscRoutes:
self._node_registry = node_registry or NodeRegistry()
self._standalone_mode = standalone_mode_flag
self._handler_mapping: Mapping[str, Callable[[web.Request], Awaitable[web.StreamResponse]]] | None = None
self._handler_mapping: (
Mapping[str, Callable[[web.Request], Awaitable[web.StreamResponse]]] | None
) = None
@staticmethod
def setup_routes(app: web.Application) -> None:
@@ -86,7 +95,9 @@ class MiscRoutes:
registrar = self._registrar_factory(app)
registrar.register_routes(self._ensure_handler_mapping())
def _ensure_handler_mapping(self) -> Mapping[str, Callable[[web.Request], Awaitable[web.StreamResponse]]]:
def _ensure_handler_mapping(
self,
) -> Mapping[str, Callable[[web.Request], Awaitable[web.StreamResponse]]]:
if self._handler_mapping is None:
handler_set = self._create_handler_set()
self._handler_mapping = handler_set.to_route_mapping()
@@ -108,6 +119,7 @@ class MiscRoutes:
settings_service=self._settings,
metadata_provider_updater=self._metadata_provider_updater,
)
backup = BackupHandler()
filesystem = FileSystemHandler(settings_service=self._settings)
node_registry_handler = NodeRegistryHandler(
node_registry=self._node_registry,
@@ -119,6 +131,11 @@ class MiscRoutes:
metadata_provider_factory=self._metadata_provider_factory,
)
custom_words = CustomWordsHandler()
wildcards = WildcardsHandler()
supporters = SupportersHandler()
doctor = DoctorHandler(settings_service=self._settings)
example_workflows = ExampleWorkflowsHandler()
base_model = BaseModelHandlerSet()
return self._handler_set_factory(
health=health,
@@ -130,8 +147,14 @@ class MiscRoutes:
node_registry=node_registry_handler,
model_library=model_library,
metadata_archive=metadata_archive,
backup=backup,
filesystem=filesystem,
custom_words=custom_words,
wildcards=wildcards,
supporters=supporters,
doctor=doctor,
example_workflows=example_workflows,
base_model=base_model,
)

View File

@@ -1,4 +1,5 @@
"""Route registrar for model endpoints."""
from __future__ import annotations
from dataclasses import dataclass
@@ -21,12 +22,17 @@ class RouteDefinition:
COMMON_ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
RouteDefinition("GET", "/api/lm/{prefix}/list", "get_models"),
RouteDefinition("GET", "/api/lm/{prefix}/excluded", "get_excluded_models"),
RouteDefinition("POST", "/api/lm/{prefix}/delete", "delete_model"),
RouteDefinition("POST", "/api/lm/{prefix}/exclude", "exclude_model"),
RouteDefinition("POST", "/api/lm/{prefix}/unexclude", "unexclude_model"),
RouteDefinition("POST", "/api/lm/{prefix}/fetch-civitai", "fetch_civitai"),
RouteDefinition("POST", "/api/lm/{prefix}/fetch-all-civitai", "fetch_all_civitai"),
RouteDefinition("POST", "/api/lm/{prefix}/relink-civitai", "relink_civitai"),
RouteDefinition("POST", "/api/lm/{prefix}/replace-preview", "replace_preview"),
RouteDefinition(
"POST", "/api/lm/{prefix}/set-preview-from-url", "set_preview_from_url"
),
RouteDefinition("POST", "/api/lm/{prefix}/save-metadata", "save_metadata"),
RouteDefinition("POST", "/api/lm/{prefix}/add-tags", "add_tags"),
RouteDefinition("POST", "/api/lm/{prefix}/rename", "rename_model"),
@@ -36,7 +42,9 @@ COMMON_ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
RouteDefinition("POST", "/api/lm/{prefix}/move_models_bulk", "move_models_bulk"),
RouteDefinition("GET", "/api/lm/{prefix}/auto-organize", "auto_organize_models"),
RouteDefinition("POST", "/api/lm/{prefix}/auto-organize", "auto_organize_models"),
RouteDefinition("GET", "/api/lm/{prefix}/auto-organize-progress", "get_auto_organize_progress"),
RouteDefinition(
"GET", "/api/lm/{prefix}/auto-organize-progress", "get_auto_organize_progress"
),
RouteDefinition("GET", "/api/lm/{prefix}/top-tags", "get_top_tags"),
RouteDefinition("GET", "/api/lm/{prefix}/base-models", "get_base_models"),
RouteDefinition("GET", "/api/lm/{prefix}/model-types", "get_model_types"),
@@ -44,30 +52,60 @@ COMMON_ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
RouteDefinition("GET", "/api/lm/{prefix}/roots", "get_model_roots"),
RouteDefinition("GET", "/api/lm/{prefix}/folders", "get_folders"),
RouteDefinition("GET", "/api/lm/{prefix}/folder-tree", "get_folder_tree"),
RouteDefinition("GET", "/api/lm/{prefix}/unified-folder-tree", "get_unified_folder_tree"),
RouteDefinition(
"GET", "/api/lm/{prefix}/unified-folder-tree", "get_unified_folder_tree"
),
RouteDefinition("GET", "/api/lm/{prefix}/find-duplicates", "find_duplicate_models"),
RouteDefinition("GET", "/api/lm/{prefix}/find-filename-conflicts", "find_filename_conflicts"),
RouteDefinition(
"GET", "/api/lm/{prefix}/find-filename-conflicts", "find_filename_conflicts"
),
RouteDefinition("GET", "/api/lm/{prefix}/get-notes", "get_model_notes"),
RouteDefinition("GET", "/api/lm/{prefix}/preview-url", "get_model_preview_url"),
RouteDefinition("GET", "/api/lm/{prefix}/civitai-url", "get_model_civitai_url"),
RouteDefinition("GET", "/api/lm/{prefix}/metadata", "get_model_metadata"),
RouteDefinition("GET", "/api/lm/{prefix}/model-description", "get_model_description"),
RouteDefinition(
"GET", "/api/lm/{prefix}/model-description", "get_model_description"
),
RouteDefinition("GET", "/api/lm/{prefix}/relative-paths", "get_relative_paths"),
RouteDefinition("GET", "/api/lm/{prefix}/civitai/versions/{model_id}", "get_civitai_versions"),
RouteDefinition("GET", "/api/lm/{prefix}/civitai/model/version/{modelVersionId}", "get_civitai_model_by_version"),
RouteDefinition("GET", "/api/lm/{prefix}/civitai/model/hash/{hash}", "get_civitai_model_by_hash"),
RouteDefinition("POST", "/api/lm/{prefix}/updates/refresh", "refresh_model_updates"),
RouteDefinition("POST", "/api/lm/{prefix}/updates/fetch-missing-license", "fetch_missing_civitai_license_data"),
RouteDefinition("POST", "/api/lm/{prefix}/updates/ignore", "set_model_update_ignore"),
RouteDefinition("POST", "/api/lm/{prefix}/updates/ignore-version", "set_version_update_ignore"),
RouteDefinition("GET", "/api/lm/{prefix}/updates/status/{model_id}", "get_model_update_status"),
RouteDefinition("GET", "/api/lm/{prefix}/updates/versions/{model_id}", "get_model_versions"),
RouteDefinition(
"GET", "/api/lm/{prefix}/civitai/versions/{model_id}", "get_civitai_versions"
),
RouteDefinition(
"GET",
"/api/lm/{prefix}/civitai/model/version/{modelVersionId}",
"get_civitai_model_by_version",
),
RouteDefinition(
"GET", "/api/lm/{prefix}/civitai/model/hash/{hash}", "get_civitai_model_by_hash"
),
RouteDefinition(
"POST", "/api/lm/{prefix}/updates/refresh", "refresh_model_updates"
),
RouteDefinition(
"POST",
"/api/lm/{prefix}/updates/fetch-missing-license",
"fetch_missing_civitai_license_data",
),
RouteDefinition(
"POST", "/api/lm/{prefix}/updates/ignore", "set_model_update_ignore"
),
RouteDefinition(
"POST", "/api/lm/{prefix}/updates/ignore-version", "set_version_update_ignore"
),
RouteDefinition(
"GET", "/api/lm/{prefix}/updates/status/{model_id}", "get_model_update_status"
),
RouteDefinition(
"GET", "/api/lm/{prefix}/updates/versions/{model_id}", "get_model_versions"
),
RouteDefinition("POST", "/api/lm/download-model", "download_model"),
RouteDefinition("GET", "/api/lm/download-model-get", "download_model_get"),
RouteDefinition("GET", "/api/lm/cancel-download-get", "cancel_download_get"),
RouteDefinition("GET", "/api/lm/pause-download", "pause_download_get"),
RouteDefinition("GET", "/api/lm/resume-download", "resume_download_get"),
RouteDefinition("GET", "/api/lm/download-progress/{download_id}", "get_download_progress"),
RouteDefinition(
"GET", "/api/lm/download-progress/{download_id}", "get_download_progress"
),
RouteDefinition("POST", "/api/lm/{prefix}/cancel-task", "cancel_task"),
RouteDefinition("GET", "/{prefix}", "handle_models_page"),
)
@@ -94,12 +132,18 @@ class ModelRouteRegistrar:
definitions: Iterable[RouteDefinition] = COMMON_ROUTE_DEFINITIONS,
) -> None:
for definition in definitions:
self._bind_route(definition.method, definition.build_path(prefix), handler_lookup[definition.handler_name])
self._bind_route(
definition.method,
definition.build_path(prefix),
handler_lookup[definition.handler_name],
)
def add_route(self, method: str, path: str, handler: Callable) -> None:
self._bind_route(method, path, handler)
def add_prefixed_route(self, method: str, path_template: str, prefix: str, handler: Callable) -> None:
def add_prefixed_route(
self, method: str, path_template: str, prefix: str, handler: Callable
) -> None:
self._bind_route(method, path_template.replace("{prefix}", prefix), handler)
def _bind_route(self, method: str, path: str, handler: Callable) -> None:

View File
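For readers unfamiliar with the table-driven registrars above, a minimal sketch of the pattern follows. The build_path helper is inferred from how the registrar calls it in this diff; the frozen dataclass and the assert are illustrative assumptions, not the project's exact code.

# Sketch only: how a RouteDefinition's {prefix} placeholder is resolved.
from dataclasses import dataclass

@dataclass(frozen=True)
class RouteDefinition:
    method: str
    path: str
    handler_name: str

    def build_path(self, prefix: str) -> str:
        # Substitute the {prefix} placeholder, e.g. "loras" or "checkpoints".
        return self.path.replace("{prefix}", prefix)

definition = RouteDefinition("GET", "/api/lm/{prefix}/list", "get_models")
assert definition.build_path("loras") == "/api/lm/loras/list"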

@@ -1,4 +1,5 @@
"""Route registrar for recipe endpoints."""
from __future__ import annotations
from dataclasses import dataclass
@@ -22,7 +23,9 @@ ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
RouteDefinition("GET", "/api/lm/recipe/{recipe_id}", "get_recipe"),
RouteDefinition("GET", "/api/lm/recipes/import-remote", "import_remote_recipe"),
RouteDefinition("POST", "/api/lm/recipes/analyze-image", "analyze_uploaded_image"),
RouteDefinition("POST", "/api/lm/recipes/analyze-local-image", "analyze_local_image"),
RouteDefinition(
"POST", "/api/lm/recipes/analyze-local-image", "analyze_local_image"
),
RouteDefinition("POST", "/api/lm/recipes/save", "save_recipe"),
RouteDefinition("DELETE", "/api/lm/recipe/{recipe_id}", "delete_recipe"),
RouteDefinition("GET", "/api/lm/recipes/top-tags", "get_top_tags"),
@@ -30,9 +33,13 @@ ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
RouteDefinition("GET", "/api/lm/recipes/roots", "get_roots"),
RouteDefinition("GET", "/api/lm/recipes/folders", "get_folders"),
RouteDefinition("GET", "/api/lm/recipes/folder-tree", "get_folder_tree"),
RouteDefinition("GET", "/api/lm/recipes/unified-folder-tree", "get_unified_folder_tree"),
RouteDefinition(
"GET", "/api/lm/recipes/unified-folder-tree", "get_unified_folder_tree"
),
RouteDefinition("GET", "/api/lm/recipe/{recipe_id}/share", "share_recipe"),
RouteDefinition("GET", "/api/lm/recipe/{recipe_id}/share/download", "download_shared_recipe"),
RouteDefinition(
"GET", "/api/lm/recipe/{recipe_id}/share/download", "download_shared_recipe"
),
RouteDefinition("GET", "/api/lm/recipe/{recipe_id}/syntax", "get_recipe_syntax"),
RouteDefinition("PUT", "/api/lm/recipe/{recipe_id}/update", "update_recipe"),
RouteDefinition("POST", "/api/lm/recipe/move", "move_recipe"),
@@ -40,13 +47,29 @@ ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
RouteDefinition("POST", "/api/lm/recipe/lora/reconnect", "reconnect_lora"),
RouteDefinition("GET", "/api/lm/recipes/find-duplicates", "find_duplicates"),
RouteDefinition("POST", "/api/lm/recipes/bulk-delete", "bulk_delete"),
RouteDefinition("POST", "/api/lm/recipes/save-from-widget", "save_recipe_from_widget"),
RouteDefinition(
"POST", "/api/lm/recipes/save-from-widget", "save_recipe_from_widget"
),
RouteDefinition("GET", "/api/lm/recipes/for-lora", "get_recipes_for_lora"),
RouteDefinition(
"GET", "/api/lm/recipes/for-checkpoint", "get_recipes_for_checkpoint"
),
RouteDefinition("GET", "/api/lm/recipes/scan", "scan_recipes"),
RouteDefinition("POST", "/api/lm/recipes/repair", "repair_recipes"),
RouteDefinition("POST", "/api/lm/recipes/cancel-repair", "cancel_repair"),
RouteDefinition("POST", "/api/lm/recipe/{recipe_id}/repair", "repair_recipe"),
RouteDefinition("GET", "/api/lm/recipes/repair-progress", "get_repair_progress"),
RouteDefinition("POST", "/api/lm/recipes/batch-import/start", "start_batch_import"),
RouteDefinition(
"GET", "/api/lm/recipes/batch-import/progress", "get_batch_import_progress"
),
RouteDefinition(
"POST", "/api/lm/recipes/batch-import/cancel", "cancel_batch_import"
),
RouteDefinition(
"POST", "/api/lm/recipes/batch-import/directory", "start_directory_import"
),
RouteDefinition("POST", "/api/lm/recipes/browse-directory", "browse_directory"),
)
@@ -63,7 +86,9 @@ class RecipeRouteRegistrar:
def __init__(self, app: web.Application) -> None:
self._app = app
def register_routes(self, handler_lookup: Mapping[str, Callable[[web.Request], object]]) -> None:
def register_routes(
self, handler_lookup: Mapping[str, Callable[[web.Request], object]]
) -> None:
for definition in ROUTE_DEFINITIONS:
handler = handler_lookup[definition.handler_name]
self._bind_route(definition.method, definition.path, handler)

View File

@@ -209,6 +209,80 @@ class StatsRoutes:
'error': str(e)
}, status=500)
async def get_model_usage_list(self, request: web.Request) -> web.Response:
"""Get paginated model usage list for infinite scrolling"""
try:
await self.init_services()
model_type = request.query.get('type', 'lora')
sort_order = request.query.get('sort', 'desc')
try:
limit = int(request.query.get('limit', '50'))
offset = int(request.query.get('offset', '0'))
except ValueError:
limit = 50
offset = 0
# Get usage statistics
usage_data = await self.usage_stats.get_stats()
# Select proper cache and usage dict based on type
if model_type == 'lora':
cache = await self.lora_scanner.get_cached_data()
type_usage_data = usage_data.get('loras', {})
elif model_type == 'checkpoint':
cache = await self.checkpoint_scanner.get_cached_data()
type_usage_data = usage_data.get('checkpoints', {})
elif model_type == 'embedding':
cache = await self.embedding_scanner.get_cached_data()
type_usage_data = usage_data.get('embeddings', {})
else:
return web.json_response({'success': False, 'error': f"Invalid model type: {model_type}"}, status=400)
# Create list of all models
all_models = []
for item in cache.raw_data:
sha256 = item.get('sha256')
usage_info = type_usage_data.get(sha256, {}) if sha256 else {}
usage_count = usage_info.get('total', 0) if isinstance(usage_info, dict) else 0
all_models.append({
'name': item.get('model_name', 'Unknown'),
'usage_count': usage_count,
'base_model': item.get('base_model', 'Unknown'),
'preview_url': config.get_preview_static_url(item.get('preview_url', '')),
'folder': item.get('folder', '')
})
# Sort by usage count (respecting the requested order), with name ascending as the tie-breaker
reverse = (sort_order == 'desc')
if reverse:
all_models.sort(key=lambda x: (-x['usage_count'], x['name'].lower()))
else:
all_models.sort(key=lambda x: (x['usage_count'], x['name'].lower()))
# Slice for pagination
paginated_models = all_models[offset:offset + limit]
return web.json_response({
'success': True,
'data': {
'items': paginated_models,
'total': len(all_models),
'type': model_type
}
})
except Exception as e:
logger.error(f"Error getting model usage list: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def get_base_model_distribution(self, request: web.Request) -> web.Response:
"""Get base model distribution statistics"""
try:
@@ -530,6 +604,7 @@ class StatsRoutes:
# Register API routes
app.router.add_get('/api/lm/stats/collection-overview', self.get_collection_overview)
app.router.add_get('/api/lm/stats/usage-analytics', self.get_usage_analytics)
app.router.add_get('/api/lm/stats/model-usage-list', self.get_model_usage_list)
app.router.add_get('/api/lm/stats/base-model-distribution', self.get_base_model_distribution)
app.router.add_get('/api/lm/stats/tag-analytics', self.get_tag_analytics)
app.router.add_get('/api/lm/stats/storage-analytics', self.get_storage_analytics)

View File
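A hedged example of calling the new model-usage-list endpoint from a client. The response shape mirrors the handler above; the host, port, and parameter values are placeholders.

# Sketch only: page through /api/lm/stats/model-usage-list.
import asyncio
import aiohttp

async def fetch_usage_page(offset: int = 0, limit: int = 50) -> dict:
    params = {"type": "lora", "sort": "desc", "limit": str(limit), "offset": str(offset)}
    async with aiohttp.ClientSession() as session:
        async with session.get(
            "http://127.0.0.1:8188/api/lm/stats/model-usage-list",  # placeholder host/port
            params=params,
        ) as response:
            payload = await response.json()
    # Expected: {"success": True, "data": {"items": [...], "total": N, "type": "lora"}}
    return payload["data"]

asyncio.run(fetch_usage_page())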

@@ -0,0 +1,570 @@
from __future__ import annotations
import asyncio
import json
import logging
import os
import secrets
import shutil
import socket
from dataclasses import dataclass
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, Optional, Tuple
import aiohttp
from .downloader import DownloadProgress, get_downloader
from .aria2_transfer_state import Aria2TransferStateStore
from .settings_manager import get_settings_manager
logger = logging.getLogger(__name__)
CIVITAI_DOWNLOAD_URL_PREFIXES = (
"https://civitai.com/api/download/",
"https://civitai.red/api/download/",
)
class Aria2Error(RuntimeError):
"""Raised when aria2 integration fails."""
@dataclass
class Aria2Transfer:
"""Track an aria2 download registered by the Python coordinator."""
gid: str
save_path: str
class Aria2Downloader:
"""Manage an aria2 RPC daemon for experimental model downloads."""
_instance = None
_lock = asyncio.Lock()
@classmethod
async def get_instance(cls) -> "Aria2Downloader":
async with cls._lock:
if cls._instance is None:
cls._instance = cls()
return cls._instance
def __init__(self) -> None:
if hasattr(self, "_initialized"):
return
self._initialized = True
self._process: Optional[asyncio.subprocess.Process] = None
self._rpc_port: Optional[int] = None
self._rpc_secret = ""
self._rpc_url = ""
self._rpc_session: Optional[aiohttp.ClientSession] = None
self._rpc_session_lock = asyncio.Lock()
self._process_lock = asyncio.Lock()
self._transfers: Dict[str, Aria2Transfer] = {}
self._poll_interval = 0.5
self._state_store = Aria2TransferStateStore()
@property
def is_running(self) -> bool:
return self._process is not None and self._process.returncode is None
async def download_file(
self,
url: str,
save_path: str,
*,
download_id: str,
progress_callback=None,
headers: Optional[Dict[str, str]] = None,
) -> Tuple[bool, str]:
"""Download a file using aria2 RPC and wait for completion."""
await self._ensure_process()
save_path = os.path.abspath(save_path)
transfer = self._transfers.get(download_id)
if transfer is None or os.path.abspath(transfer.save_path) != save_path:
gid = await self._schedule_download(
url,
save_path,
download_id=download_id,
headers=headers,
)
transfer = Aria2Transfer(gid=gid, save_path=save_path)
self._transfers[download_id] = transfer
try:
while True:
status = await self.get_status(download_id)
if status is None:
return False, "aria2 download not found"
snapshot = self._build_progress_snapshot(status)
if progress_callback is not None:
await self._dispatch_progress(progress_callback, snapshot)
state = status.get("status", "")
if state == "complete":
completed_path = self._resolve_completed_path(status, save_path)
return True, completed_path
if state == "error":
return False, status.get("errorMessage") or "aria2 download failed"
if state == "removed":
return False, "Download was cancelled"
await asyncio.sleep(self._poll_interval)
finally:
self._transfers.pop(download_id, None)
async def _schedule_download(
self,
url: str,
save_path: str,
*,
download_id: str,
headers: Optional[Dict[str, str]] = None,
) -> str:
save_dir = os.path.dirname(save_path)
out_name = os.path.basename(save_path)
Path(save_dir).mkdir(parents=True, exist_ok=True)
resolved_url = url
request_headers = headers
if headers and url.startswith(CIVITAI_DOWNLOAD_URL_PREFIXES):
resolved_url = await self._resolve_authenticated_redirect_url(url, headers)
if resolved_url != url:
request_headers = None
logger.debug(
"Resolved Civitai download %s to signed URL for aria2",
download_id,
)
options: Dict[str, str] = {
"dir": save_dir,
"out": out_name,
"continue": "true",
"max-connection-per-server": "4",
"split": "4",
"min-split-size": "1M",
"allow-overwrite": "true",
"auto-file-renaming": "false",
"file-allocation": "none",
}
if request_headers:
options["header"] = [
f"{key}: {value}" for key, value in request_headers.items()
]
logger.debug(
"Submitting aria2 download %s -> %s (auth=%s, civitai_signed=%s)",
download_id,
save_path,
bool(request_headers),
resolved_url != url,
)
try:
gid = await self._rpc_call("aria2.addUri", [[resolved_url], options])
except Exception as exc:
raise Aria2Error(f"Failed to schedule aria2 download: {exc}") from exc
logger.debug("aria2 accepted download %s with gid %s", download_id, gid)
await self._state_store.upsert(
download_id,
{
"gid": gid,
"save_path": save_path,
"status": "downloading",
"url": url,
},
)
return gid
async def get_status(self, download_id: str) -> Optional[Dict[str, Any]]:
"""Return the raw aria2 status payload for a known download."""
transfer = self._transfers.get(download_id)
if transfer is None:
return None
keys = [
"gid",
"status",
"totalLength",
"completedLength",
"downloadSpeed",
"errorMessage",
"files",
]
try:
status = await self._rpc_call("aria2.tellStatus", [transfer.gid, keys])
except Exception as exc:
raise Aria2Error(f"Failed to query aria2 download status: {exc}") from exc
if isinstance(status, dict):
return status
return None
async def get_status_by_gid(self, gid: str) -> Optional[Dict[str, Any]]:
keys = [
"gid",
"status",
"totalLength",
"completedLength",
"downloadSpeed",
"errorMessage",
"files",
]
try:
status = await self._rpc_call("aria2.tellStatus", [gid, keys])
except Exception as exc:
message = str(exc)
if "cannot be found" in message.lower() or "not found" in message.lower():
return None
raise Aria2Error(f"Failed to query aria2 download status: {exc}") from exc
if isinstance(status, dict):
return status
return None
async def restore_transfer(self, download_id: str, gid: str, save_path: str) -> None:
await self._ensure_process()
self._transfers[download_id] = Aria2Transfer(
gid=gid,
save_path=os.path.abspath(save_path),
)
async def reassign_transfer(
self, from_download_id: str, to_download_id: str
) -> Optional[Aria2Transfer]:
transfer = self._transfers.get(from_download_id)
if transfer is None:
return None
self._transfers[to_download_id] = transfer
if from_download_id != to_download_id:
self._transfers.pop(from_download_id, None)
return transfer
async def has_transfer(self, download_id: str) -> bool:
return download_id in self._transfers
async def pause_download(self, download_id: str) -> Dict[str, Any]:
transfer = self._transfers.get(download_id)
if transfer is None:
return {"success": False, "error": "Download task not found"}
try:
await self._rpc_call("aria2.forcePause", [transfer.gid])
except Exception as exc:
return {"success": False, "error": str(exc)}
await self._state_store.upsert(download_id, {"status": "paused"})
return {"success": True, "message": "Download paused successfully"}
async def resume_download(self, download_id: str) -> Dict[str, Any]:
transfer = self._transfers.get(download_id)
if transfer is None:
return {"success": False, "error": "Download task not found"}
try:
await self._rpc_call("aria2.unpause", [transfer.gid])
except Exception as exc:
return {"success": False, "error": str(exc)}
await self._state_store.upsert(download_id, {"status": "downloading"})
return {"success": True, "message": "Download resumed successfully"}
async def cancel_download(self, download_id: str) -> Dict[str, Any]:
transfer = self._transfers.get(download_id)
if transfer is None:
return {"success": False, "error": "Download task not found"}
try:
await self._rpc_call("aria2.forceRemove", [transfer.gid])
except Exception as exc:
return {"success": False, "error": str(exc)}
await self._state_store.remove(download_id)
return {"success": True, "message": "Download cancelled successfully"}
async def close(self) -> None:
"""Shut down the RPC process and session."""
if self._rpc_session is not None:
await self._rpc_session.close()
self._rpc_session = None
process = self._process
self._process = None
self._transfers.clear()
if process is None:
return
if process.returncode is None:
process.terminate()
try:
await asyncio.wait_for(process.wait(), timeout=5.0)
except asyncio.TimeoutError:
process.kill()
await process.wait()
async def _dispatch_progress(self, callback, snapshot: DownloadProgress) -> None:
try:
result = callback(snapshot, snapshot)
except TypeError:
result = callback(snapshot.percent_complete)
if asyncio.iscoroutine(result):
await result
elif hasattr(result, "__await__"):
await result
def _build_progress_snapshot(self, status: Dict[str, Any]) -> DownloadProgress:
completed = self._parse_int(status.get("completedLength"))
total = self._parse_int(status.get("totalLength"))
speed = float(self._parse_int(status.get("downloadSpeed")))
percent = 0.0
if total > 0:
percent = (completed / total) * 100.0
return DownloadProgress(
percent_complete=max(0.0, min(percent, 100.0)),
bytes_downloaded=completed,
total_bytes=total or None,
bytes_per_second=speed,
timestamp=datetime.now().timestamp(),
)
def _resolve_completed_path(self, status: Dict[str, Any], default_path: str) -> str:
files = status.get("files")
if isinstance(files, list) and files:
first = files[0]
if isinstance(first, dict):
candidate = first.get("path")
if isinstance(candidate, str) and candidate:
return candidate
return default_path
@staticmethod
def _parse_int(value: Any) -> int:
try:
return int(value)
except (TypeError, ValueError):
return 0
async def _resolve_authenticated_redirect_url(
self,
url: str,
headers: Dict[str, str],
) -> str:
downloader = await get_downloader()
session = await downloader.session
request_headers = dict(downloader.default_headers)
request_headers.update(headers)
request_headers["Accept-Encoding"] = "identity"
try:
async with session.get(
url,
headers=request_headers,
allow_redirects=False,
proxy=downloader.proxy_url,
) as response:
if response.status in {301, 302, 303, 307, 308}:
location = response.headers.get("Location")
if location:
return location
raise Aria2Error(
"Authenticated Civitai redirect did not include a Location header"
)
if response.status == 200:
return url
body = await response.text()
raise Aria2Error(
f"Failed to resolve authenticated Civitai redirect: status={response.status} body={body[:300]}"
)
except aiohttp.ClientError as exc:
raise Aria2Error(
f"Failed to resolve authenticated Civitai redirect: {exc}"
) from exc
async def _ensure_process(self) -> None:
async with self._process_lock:
if self.is_running and await self._ping():
return
await self.close()
executable = self._resolve_executable()
self._rpc_port = self._find_free_port()
self._rpc_secret = secrets.token_hex(16)
self._rpc_url = f"http://127.0.0.1:{self._rpc_port}/jsonrpc"
command = [
executable,
"--enable-rpc=true",
"--rpc-listen-all=false",
f"--rpc-listen-port={self._rpc_port}",
f"--rpc-secret={self._rpc_secret}",
"--check-certificate=true",
"--allow-overwrite=true",
"--auto-file-renaming=false",
"--file-allocation=none",
"--max-concurrent-downloads=5",
"--continue=true",
"--daemon=false",
"--quiet=true",
f"--stop-with-process={os.getpid()}",
]
logger.info("Starting aria2 RPC daemon from %s", executable)
self._process = await asyncio.create_subprocess_exec(
*command,
stdout=asyncio.subprocess.DEVNULL,
stderr=asyncio.subprocess.PIPE,
)
await self._wait_until_ready()
def _resolve_executable(self) -> str:
settings = get_settings_manager()
configured_path = (settings.get("aria2c_path") or "").strip()
candidate = configured_path or "aria2c"
resolved = shutil.which(candidate)
if resolved:
return resolved
if configured_path and os.path.isfile(configured_path) and os.access(
configured_path, os.X_OK
):
return configured_path
raise Aria2Error(
"aria2c executable was not found. Install aria2 or configure aria2c_path."
)
async def _wait_until_ready(self) -> None:
assert self._process is not None
start_time = asyncio.get_running_loop().time()
last_error = ""
while asyncio.get_running_loop().time() - start_time < 10.0:
if self._process.returncode is not None:
stderr_output = ""
if self._process.stderr is not None:
try:
stderr_output = (
await asyncio.wait_for(self._process.stderr.read(), timeout=0.2)
).decode("utf-8", errors="replace")
except Exception:
stderr_output = ""
raise Aria2Error(
f"aria2 RPC process exited early with code {self._process.returncode}: {stderr_output.strip()}"
)
try:
if await self._ping():
return
except Exception as exc: # pragma: no cover - startup race
last_error = str(exc)
await asyncio.sleep(0.2)
raise Aria2Error(
f"Timed out waiting for aria2 RPC to become ready{': ' + last_error if last_error else ''}"
)
async def _ping(self) -> bool:
try:
result = await self._rpc_call("aria2.getVersion", [])
except Exception:
return False
return isinstance(result, dict)
async def _rpc_call(self, method: str, params: list[Any]) -> Any:
if not self._rpc_url:
raise Aria2Error("aria2 RPC endpoint is not initialized")
session = await self._get_rpc_session()
payload = {
"jsonrpc": "2.0",
"id": secrets.token_hex(8),
"method": method,
"params": [f"token:{self._rpc_secret}", *params],
}
async with session.post(self._rpc_url, json=payload) as response:
text = await response.text()
try:
body = json.loads(text)
except json.JSONDecodeError:
body = None
if body is None:
if response.status != 200:
raise Aria2Error(
f"aria2 RPC returned status {response.status} with non-JSON body: {text}"
)
raise Aria2Error(f"Invalid aria2 RPC response: {text}")
if "error" in body:
error = body["error"] or {}
code = error.get("code") if isinstance(error, dict) else None
message = error.get("message") if isinstance(error, dict) else str(error)
logger.error(
"aria2 RPC %s failed with HTTP %s, code=%s, message=%s",
method,
response.status,
code,
message,
)
status_message = (
f"aria2 RPC {method} failed with status {response.status}: {message}"
if response.status != 200
else message
)
raise Aria2Error(status_message or "Unknown aria2 RPC error")
if response.status != 200:
logger.error(
"aria2 RPC %s returned unexpected HTTP status %s without error payload: %s",
method,
response.status,
body,
)
raise Aria2Error(
f"aria2 RPC {method} returned unexpected status {response.status}"
)
return body.get("result")
async def _get_rpc_session(self) -> aiohttp.ClientSession:
if self._rpc_session is None or self._rpc_session.closed:
async with self._rpc_session_lock:
if self._rpc_session is None or self._rpc_session.closed:
timeout = aiohttp.ClientTimeout(total=30)
self._rpc_session = aiohttp.ClientSession(timeout=timeout)
return self._rpc_session
@staticmethod
def _find_free_port() -> int:
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
sock.bind(("127.0.0.1", 0))
sock.listen(1)
return int(sock.getsockname()[1])
async def get_aria2_downloader() -> Aria2Downloader:
"""Get the singleton aria2 downloader."""
return await Aria2Downloader.get_instance()

View File
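A minimal usage sketch for the aria2 downloader added above. The URL, save path, and download_id are placeholders; the two-argument progress callback follows how _dispatch_progress invokes it.

# Sketch only: run one aria2-backed download with progress reporting.
import asyncio

async def on_progress(snapshot, _snapshot_again):
    print(f"{snapshot.percent_complete:.1f}% at {snapshot.bytes_per_second:.0f} B/s")

async def main():
    downloader = await get_aria2_downloader()
    ok, result = await downloader.download_file(
        "https://example.com/model.safetensors",  # placeholder URL
        "/path/to/loras/model.safetensors",       # placeholder save path
        download_id="example-download-id",
        progress_callback=on_progress,
    )
    # On success `result` is the completed file path, otherwise an error message.
    print(ok, result)
    await downloader.close()

asyncio.run(main())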

@@ -0,0 +1,108 @@
from __future__ import annotations
import asyncio
import json
import os
from copy import deepcopy
from typing import Any, Dict, Optional
from ..utils.cache_paths import get_cache_base_dir
def get_aria2_state_path() -> str:
base_dir = get_cache_base_dir(create=True)
state_dir = os.path.join(base_dir, "aria2")
os.makedirs(state_dir, exist_ok=True)
return os.path.join(state_dir, "downloads.json")
class Aria2TransferStateStore:
"""Persist aria2 transfer metadata needed for restart recovery."""
_locks_by_path: Dict[str, asyncio.Lock] = {}
def __init__(self, state_path: Optional[str] = None) -> None:
self._state_path = os.path.abspath(state_path or get_aria2_state_path())
self._lock = self._locks_by_path.setdefault(self._state_path, asyncio.Lock())
def _read_all_unlocked(self) -> Dict[str, Dict[str, Any]]:
try:
with open(self._state_path, "r", encoding="utf-8") as handle:
data = json.load(handle)
except FileNotFoundError:
return {}
except json.JSONDecodeError:
return {}
if not isinstance(data, dict):
return {}
normalized: Dict[str, Dict[str, Any]] = {}
for download_id, entry in data.items():
if isinstance(download_id, str) and isinstance(entry, dict):
normalized[download_id] = entry
return normalized
def _write_all_unlocked(self, data: Dict[str, Dict[str, Any]]) -> None:
directory = os.path.dirname(self._state_path)
if directory:
os.makedirs(directory, exist_ok=True)
temp_path = f"{self._state_path}.tmp"
with open(temp_path, "w", encoding="utf-8") as handle:
json.dump(data, handle, ensure_ascii=True, indent=2, sort_keys=True)
os.replace(temp_path, self._state_path)
async def load_all(self) -> Dict[str, Dict[str, Any]]:
async with self._lock:
return deepcopy(self._read_all_unlocked())
async def get(self, download_id: str) -> Optional[Dict[str, Any]]:
async with self._lock:
return deepcopy(self._read_all_unlocked().get(download_id))
async def upsert(self, download_id: str, payload: Dict[str, Any]) -> Dict[str, Any]:
async with self._lock:
data = self._read_all_unlocked()
current = data.get(download_id, {})
current.update(payload)
data[download_id] = current
self._write_all_unlocked(data)
return deepcopy(current)
async def remove(self, download_id: str) -> None:
async with self._lock:
data = self._read_all_unlocked()
if download_id in data:
del data[download_id]
self._write_all_unlocked(data)
async def find_by_save_path(
self, save_path: str, *, exclude_download_id: Optional[str] = None
) -> Optional[Dict[str, Any]]:
normalized_target = os.path.abspath(save_path)
async with self._lock:
data = self._read_all_unlocked()
for download_id, entry in data.items():
if exclude_download_id and download_id == exclude_download_id:
continue
candidate = entry.get("save_path")
if isinstance(candidate, str) and os.path.abspath(candidate) == normalized_target:
result = dict(entry)
result["download_id"] = download_id
return result
return None
async def reassign(self, from_download_id: str, to_download_id: str) -> Optional[Dict[str, Any]]:
async with self._lock:
data = self._read_all_unlocked()
existing = data.get(from_download_id)
if existing is None:
return None
updated = dict(existing)
updated["download_id"] = to_download_id
data[to_download_id] = updated
if from_download_id != to_download_id:
data.pop(from_download_id, None)
self._write_all_unlocked(data)
return deepcopy(updated)

View File
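A short sketch of how the state store supports restart recovery; the gid, paths, and download id shown are placeholders.

# Sketch only: persist transfer metadata, then reload it after a restart.
import asyncio

async def main():
    store = Aria2TransferStateStore()
    await store.upsert(
        "example-download-id",
        {
            "gid": "2089b05ecca3d829",  # placeholder aria2 gid
            "save_path": "/path/to/model.safetensors",
            "status": "downloading",
            "url": "https://example.com/model.safetensors",
        },
    )
    # On startup, every persisted entry can be re-registered with the daemon
    # (e.g. via Aria2Downloader.restore_transfer).
    for download_id, entry in (await store.load_all()).items():
        print(download_id, entry["gid"], entry["status"])

asyncio.run(main())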

@@ -0,0 +1,411 @@
from __future__ import annotations
import asyncio
import contextlib
import hashlib
import json
import logging
import os
import shutil
import tempfile
import time
import zipfile
from dataclasses import dataclass
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Iterable, Optional
from ..utils.cache_paths import CacheType, get_cache_base_dir, get_cache_file_path
from ..utils.settings_paths import get_settings_dir
from .settings_manager import get_settings_manager
logger = logging.getLogger(__name__)
BACKUP_MANIFEST_VERSION = 1
DEFAULT_BACKUP_RETENTION_COUNT = 5
DEFAULT_BACKUP_INTERVAL_SECONDS = 24 * 60 * 60
@dataclass(frozen=True)
class BackupEntry:
kind: str
archive_path: str
target_path: str
sha256: str
size: int
mtime: float
class BackupService:
"""Create and restore user-state backup archives."""
_instance: "BackupService | None" = None
_instance_lock = asyncio.Lock()
def __init__(self, *, settings_manager=None, backup_dir: str | None = None) -> None:
self._settings = settings_manager or get_settings_manager()
self._backup_dir = Path(backup_dir or self._resolve_backup_dir())
self._backup_dir.mkdir(parents=True, exist_ok=True)
self._lock = asyncio.Lock()
self._auto_task: asyncio.Task[None] | None = None
@classmethod
async def get_instance(cls) -> "BackupService":
async with cls._instance_lock:
if cls._instance is None:
cls._instance = cls()
cls._instance._ensure_auto_snapshot_task()
return cls._instance
@staticmethod
def _resolve_backup_dir() -> str:
return os.path.join(get_settings_dir(create=True), "backups")
def get_backup_dir(self) -> str:
return str(self._backup_dir)
def _ensure_auto_snapshot_task(self) -> None:
if self._auto_task is not None and not self._auto_task.done():
return
try:
loop = asyncio.get_running_loop()
except RuntimeError:
return
self._auto_task = loop.create_task(self._auto_backup_loop())
def _get_setting_bool(self, key: str, default: bool) -> bool:
try:
return bool(self._settings.get(key, default))
except Exception:
return default
def _get_setting_int(self, key: str, default: int) -> int:
try:
value = self._settings.get(key, default)
return max(1, int(value))
except Exception:
return default
def _settings_file_path(self) -> str:
settings_file = getattr(self._settings, "settings_file", None)
if settings_file:
return str(settings_file)
return os.path.join(get_settings_dir(create=True), "settings.json")
def _download_history_path(self) -> str:
base_dir = get_cache_base_dir(create=True)
history_dir = os.path.join(base_dir, "download_history")
os.makedirs(history_dir, exist_ok=True)
return os.path.join(history_dir, "downloaded_versions.sqlite")
def _model_update_dir(self) -> str:
return str(Path(get_cache_file_path(CacheType.MODEL_UPDATE, create_dir=True)).parent)
def _model_update_targets(self) -> list[tuple[str, str, str]]:
"""Return (kind, archive_path, target_path) tuples for backup."""
targets: list[tuple[str, str, str]] = []
settings_path = self._settings_file_path()
targets.append(("settings", "settings/settings.json", settings_path))
history_path = self._download_history_path()
targets.append(
(
"download_history",
"cache/download_history/downloaded_versions.sqlite",
history_path,
)
)
symlink_path = get_cache_file_path(CacheType.SYMLINK, create_dir=True)
targets.append(
(
"symlink_map",
"cache/symlink/symlink_map.json",
symlink_path,
)
)
model_update_dir = Path(self._model_update_dir())
if model_update_dir.exists():
for sqlite_file in sorted(model_update_dir.glob("*.sqlite")):
targets.append(
(
"model_update",
f"cache/model_update/{sqlite_file.name}",
str(sqlite_file),
)
)
return targets
@staticmethod
def _hash_file(path: str) -> tuple[str, int, float]:
digest = hashlib.sha256()
total = 0
with open(path, "rb") as handle:
for chunk in iter(lambda: handle.read(1024 * 1024), b""):
total += len(chunk)
digest.update(chunk)
mtime = os.path.getmtime(path)
return digest.hexdigest(), total, mtime
def _build_manifest(self, entries: Iterable[BackupEntry], *, snapshot_type: str) -> dict[str, Any]:
created_at = datetime.now(timezone.utc).isoformat()
active_library = None
try:
active_library = self._settings.get_active_library_name()
except Exception:
active_library = None
return {
"manifest_version": BACKUP_MANIFEST_VERSION,
"created_at": created_at,
"snapshot_type": snapshot_type,
"active_library": active_library,
"files": [
{
"kind": entry.kind,
"archive_path": entry.archive_path,
"target_path": entry.target_path,
"sha256": entry.sha256,
"size": entry.size,
"mtime": entry.mtime,
}
for entry in entries
],
}
def _write_archive(self, archive_path: str, entries: list[BackupEntry], manifest: dict[str, Any]) -> None:
with zipfile.ZipFile(
archive_path,
mode="w",
compression=zipfile.ZIP_DEFLATED,
compresslevel=6,
) as zf:
zf.writestr(
"manifest.json",
json.dumps(manifest, indent=2, ensure_ascii=False).encode("utf-8"),
)
for entry in entries:
zf.write(entry.target_path, arcname=entry.archive_path)
async def create_snapshot(self, *, snapshot_type: str = "manual", persist: bool = False) -> dict[str, Any]:
"""Create a backup archive.
If ``persist`` is true, the archive is stored in the backup directory
and retained according to the configured retention policy.
"""
async with self._lock:
raw_targets = self._model_update_targets()
entries: list[BackupEntry] = []
for kind, archive_path, target_path in raw_targets:
if not os.path.exists(target_path):
continue
sha256, size, mtime = self._hash_file(target_path)
entries.append(
BackupEntry(
kind=kind,
archive_path=archive_path,
target_path=target_path,
sha256=sha256,
size=size,
mtime=mtime,
)
)
if not entries:
raise FileNotFoundError("No backupable files were found")
manifest = self._build_manifest(entries, snapshot_type=snapshot_type)
archive_name = self._build_archive_name(snapshot_type=snapshot_type)
fd, temp_path = tempfile.mkstemp(suffix=".zip", dir=str(self._backup_dir))
os.close(fd)
try:
self._write_archive(temp_path, entries, manifest)
if persist:
final_path = self._backup_dir / archive_name
os.replace(temp_path, final_path)
self._prune_snapshots()
return {
"archive_path": str(final_path),
"archive_name": final_path.name,
"manifest": manifest,
}
with open(temp_path, "rb") as handle:
data = handle.read()
return {
"archive_name": archive_name,
"archive_bytes": data,
"manifest": manifest,
}
finally:
with contextlib.suppress(FileNotFoundError):
os.remove(temp_path)
def _build_archive_name(self, *, snapshot_type: str) -> str:
timestamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
return f"lora-manager-backup-{timestamp}-{snapshot_type}.zip"
def _prune_snapshots(self) -> None:
retention = self._get_setting_int(
"backup_retention_count", DEFAULT_BACKUP_RETENTION_COUNT
)
archives = sorted(
self._backup_dir.glob("lora-manager-backup-*-auto.zip"),
key=lambda path: path.stat().st_mtime,
reverse=True,
)
for path in archives[retention:]:
with contextlib.suppress(OSError):
path.unlink()
async def restore_snapshot(self, archive_path: str) -> dict[str, Any]:
"""Restore backup contents from a ZIP archive."""
async with self._lock:
try:
zf = zipfile.ZipFile(archive_path, mode="r")
except zipfile.BadZipFile as exc:
raise ValueError("Backup archive is not a valid ZIP file") from exc
with zf:
try:
manifest = json.loads(zf.read("manifest.json").decode("utf-8"))
except KeyError as exc:
raise ValueError("Backup archive is missing manifest.json") from exc
if not isinstance(manifest, dict):
raise ValueError("Backup manifest is invalid")
if manifest.get("manifest_version") != BACKUP_MANIFEST_VERSION:
raise ValueError("Backup manifest version is not supported")
files = manifest.get("files", [])
if not isinstance(files, list):
raise ValueError("Backup manifest file list is invalid")
extracted_paths: list[tuple[str, str]] = []
temp_dir = Path(tempfile.mkdtemp(prefix="lora-manager-restore-"))
try:
for item in files:
if not isinstance(item, dict):
continue
archive_member = item.get("archive_path")
if not isinstance(archive_member, str) or not archive_member:
continue
archive_member_path = Path(archive_member)
if archive_member_path.is_absolute() or ".." in archive_member_path.parts:
raise ValueError(f"Invalid archive member path: {archive_member}")
kind = item.get("kind")
target_path = self._resolve_restore_target(kind, archive_member)
if target_path is None:
continue
extracted_path = temp_dir / archive_member_path
extracted_path.parent.mkdir(parents=True, exist_ok=True)
with zf.open(archive_member) as source, open(
extracted_path, "wb"
) as destination:
shutil.copyfileobj(source, destination)
expected_hash = item.get("sha256")
if isinstance(expected_hash, str) and expected_hash:
actual_hash, _, _ = self._hash_file(str(extracted_path))
if actual_hash != expected_hash:
raise ValueError(
f"Checksum mismatch for {archive_member}"
)
extracted_paths.append((str(extracted_path), target_path))
for extracted_path, target_path in extracted_paths:
os.makedirs(os.path.dirname(target_path), exist_ok=True)
os.replace(extracted_path, target_path)
finally:
shutil.rmtree(temp_dir, ignore_errors=True)
return {
"success": True,
"restored_files": len(extracted_paths),
"snapshot_type": manifest.get("snapshot_type"),
}
def _resolve_restore_target(self, kind: Any, archive_member: str) -> str | None:
if kind == "settings":
return self._settings_file_path()
if kind == "download_history":
return self._download_history_path()
if kind == "symlink_map":
return get_cache_file_path(CacheType.SYMLINK, create_dir=True)
if kind == "model_update":
filename = os.path.basename(archive_member)
return str(Path(get_cache_file_path(CacheType.MODEL_UPDATE, create_dir=True)).parent / filename)
return None
async def create_auto_snapshot_if_due(self) -> Optional[dict[str, Any]]:
if not self._get_setting_bool("backup_auto_enabled", True):
return None
latest = self.get_latest_auto_snapshot()
now = time.time()
if latest and now - latest["mtime"] < DEFAULT_BACKUP_INTERVAL_SECONDS:
return None
return await self.create_snapshot(snapshot_type="auto", persist=True)
async def _auto_backup_loop(self) -> None:
while True:
try:
await self.create_auto_snapshot_if_due()
await asyncio.sleep(DEFAULT_BACKUP_INTERVAL_SECONDS)
except asyncio.CancelledError:
raise
except Exception as exc: # pragma: no cover - defensive guard
logger.warning("Automatic backup snapshot failed: %s", exc, exc_info=True)
await asyncio.sleep(60)
def get_available_snapshots(self) -> list[dict[str, Any]]:
snapshots: list[dict[str, Any]] = []
for path in sorted(self._backup_dir.glob("lora-manager-backup-*.zip")):
try:
stat = path.stat()
except OSError:
continue
snapshots.append(
{
"name": path.name,
"path": str(path),
"size": stat.st_size,
"mtime": stat.st_mtime,
"is_auto": path.name.endswith("-auto.zip"),
}
)
snapshots.sort(key=lambda item: item["mtime"], reverse=True)
return snapshots
def get_latest_auto_snapshot(self) -> Optional[dict[str, Any]]:
autos = [snapshot for snapshot in self.get_available_snapshots() if snapshot["is_auto"]]
if not autos:
return None
return autos[0]
def get_status(self) -> dict[str, Any]:
snapshots = self.get_available_snapshots()
return {
"backupDir": self.get_backup_dir(),
"enabled": self._get_setting_bool("backup_auto_enabled", True),
"retentionCount": self._get_setting_int(
"backup_retention_count", DEFAULT_BACKUP_RETENTION_COUNT
),
"snapshotCount": len(snapshots),
"latestSnapshot": snapshots[0] if snapshots else None,
"latestAutoSnapshot": self.get_latest_auto_snapshot(),
}

View File
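A hedged usage sketch for the backup service: create a persisted snapshot, inspect status, and restore it. The printed paths depend on the local settings directory.

# Sketch only: snapshot and restore user state.
import asyncio

async def main():
    backup = await BackupService.get_instance()
    snapshot = await backup.create_snapshot(snapshot_type="manual", persist=True)
    print(snapshot["archive_path"], snapshot["manifest"]["created_at"])
    # get_status() reports retention settings and the most recent snapshots.
    print(backup.get_status()["snapshotCount"])
    # restore_snapshot() verifies manifest checksums before replacing targets.
    await backup.restore_snapshot(snapshot["archive_path"])

asyncio.run(main())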

@@ -1,5 +1,6 @@
from abc import ABC, abstractmethod
import asyncio
import re
from typing import Any, Dict, List, Optional, Type, TYPE_CHECKING
import logging
import os
@@ -19,6 +20,7 @@ from .model_query import (
resolve_sub_type,
)
from .settings_manager import get_settings_manager
from ..utils.civitai_utils import build_civitai_model_page_url
logger = logging.getLogger(__name__)
@@ -177,6 +179,57 @@ class BaseModelService(ABC):
)
return paginated
async def get_excluded_paginated_data(
self,
page: int,
page_size: int,
sort_by: str = "name",
search: str = None,
fuzzy_search: bool = False,
search_options: dict = None,
**kwargs,
) -> Dict:
"""Get paginated excluded model data."""
excluded_paths = list(self.scanner.get_excluded_models())
excluded_entries: List[Dict[str, Any]] = []
stale_paths: List[str] = []
for file_path in excluded_paths:
if not file_path or not os.path.exists(file_path):
stale_paths.append(file_path)
continue
entry = await self._build_excluded_entry(file_path)
if entry:
excluded_entries.append(entry)
else:
stale_paths.append(file_path)
if stale_paths:
current_excluded = getattr(self.scanner, "_excluded_models", None)
if isinstance(current_excluded, list):
stale_set = set(stale_paths)
self.scanner._excluded_models = [
path for path in current_excluded if path not in stale_set
]
persist_current_cache = getattr(self.scanner, "_persist_current_cache", None)
if callable(persist_current_cache):
await persist_current_cache()
excluded_entries = self._sort_entries(excluded_entries, sort_by)
if search:
excluded_entries = await self._apply_search_filters(
excluded_entries,
search,
fuzzy_search,
search_options,
)
paginated = self._paginate(excluded_entries, page, page_size)
paginated["items"] = await self._annotate_update_flags(paginated["items"])
return paginated
async def _fetch_with_usage_sort(self, sort_params):
"""Fetch data sorted by usage count (desc/asc)."""
cache = await self.cache_repository.get_cache()
@@ -207,11 +260,71 @@ class BaseModelService(ABC):
reverse = sort_params.order == "desc"
annotated.sort(
key=lambda x: (x.get("usage_count", 0), x.get("model_name", "").lower()),
key=lambda x: (
x.get("usage_count", 0),
x.get("model_name", "").lower(),
x.get("file_path", "").lower()
),
reverse=reverse,
)
return annotated
def _sort_entries(self, data: List[Dict[str, Any]], sort_by: str) -> List[Dict[str, Any]]:
sort_params = self.cache_repository.parse_sort(sort_by)
key_name = sort_params.key
if key_name == "date":
key_fn = lambda item: (
float(item.get("modified", 0.0) or 0.0),
(item.get("model_name") or item.get("file_name") or "").lower(),
item.get("file_path", "").lower(),
)
elif key_name == "size":
key_fn = lambda item: (
int(item.get("size", 0) or 0),
(item.get("model_name") or item.get("file_name") or "").lower(),
item.get("file_path", "").lower(),
)
elif key_name == "usage":
key_fn = lambda item: (
int(item.get("usage_count", 0) or 0),
(item.get("model_name") or item.get("file_name") or "").lower(),
item.get("file_path", "").lower(),
)
else:
key_fn = lambda item: (
(item.get("model_name") or item.get("file_name") or "").lower(),
item.get("file_path", "").lower(),
)
return sorted(data, key=key_fn, reverse=sort_params.order == "desc")
async def _build_excluded_entry(self, file_path: str) -> Optional[Dict[str, Any]]:
root_path = self.scanner._find_root_for_file(file_path)
if not root_path:
return None
metadata, should_skip = await MetadataManager.load_metadata(
file_path,
self.metadata_class,
)
if should_skip:
return None
if metadata is None:
metadata = await self.scanner._create_default_metadata(file_path)
if metadata is None:
return None
metadata = self.scanner.adjust_metadata(metadata, file_path, root_path)
folder = os.path.dirname(os.path.relpath(file_path, root_path)).replace(
os.path.sep, "/"
)
entry = self.scanner._build_cache_entry(metadata, folder=folder)
entry = self.scanner.adjust_cached_entry(entry)
entry["exclude"] = True
return entry
async def _apply_hash_filters(
self, data: List[Dict], hash_filters: Dict
) -> List[Dict]:
@@ -383,7 +496,9 @@ class BaseModelService(ABC):
# Check user setting for hiding early access updates
hide_early_access = False
try:
hide_early_access = bool(self.settings.get("hide_early_access_updates", False))
hide_early_access = bool(
self.settings.get("hide_early_access_updates", False)
)
except Exception:
hide_early_access = False
@@ -413,7 +528,11 @@ class BaseModelService(ABC):
bulk_method = getattr(self.update_service, "has_updates_bulk", None)
if callable(bulk_method):
try:
resolved = await bulk_method(self.model_type, ordered_ids, hide_early_access=hide_early_access)
resolved = await bulk_method(
self.model_type,
ordered_ids,
hide_early_access=hide_early_access,
)
except Exception as exc:
logger.error(
"Failed to resolve update status in bulk for %s models (%s): %s",
@@ -426,7 +545,9 @@ class BaseModelService(ABC):
if resolved is None:
tasks = [
self.update_service.has_update(self.model_type, model_id, hide_early_access=hide_early_access)
self.update_service.has_update(
self.model_type, model_id, hide_early_access=hide_early_access
)
for model_id in ordered_ids
]
results = await asyncio.gather(*tasks, return_exceptions=True)
@@ -590,9 +711,15 @@ class BaseModelService(ABC):
continue
# Filter by valid sub-types based on scanner type
if self.model_type == "lora" and normalized_type not in VALID_LORA_SUB_TYPES:
if (
self.model_type == "lora"
and normalized_type not in VALID_LORA_SUB_TYPES
):
continue
if self.model_type == "checkpoint" and normalized_type not in VALID_CHECKPOINT_SUB_TYPES:
if (
self.model_type == "checkpoint"
and normalized_type not in VALID_CHECKPOINT_SUB_TYPES
):
continue
type_counts[normalized_type] = type_counts.get(normalized_type, 0) + 1
@@ -755,9 +882,12 @@ class BaseModelService(ABC):
version_id = civitai_data.get("id")
if model_id:
civitai_url = f"https://civitai.com/models/{model_id}"
if version_id:
civitai_url += f"?modelVersionId={version_id}"
civitai_host = self.settings.get("civitai_host", "civitai.com")
civitai_url = build_civitai_model_page_url(
model_id,
version_id,
host=civitai_host,
)
return {
"civitai_url": civitai_url,
@@ -807,38 +937,61 @@ class BaseModelService(ABC):
return include_terms, exclude_terms
@staticmethod
def _remove_model_extension(path: str) -> str:
"""Remove model file extension (.safetensors, .ckpt, .pt, .bin) for cleaner matching."""
return re.sub(r"\.(safetensors|ckpt|pt|bin)$", "", path, flags=re.IGNORECASE)
@staticmethod
def _relative_path_matches_tokens(
path_lower: str, include_terms: List[str], exclude_terms: List[str]
) -> bool:
"""Determine whether a relative path string satisfies include/exclude tokens."""
if any(term and term in path_lower for term in exclude_terms):
"""Determine whether a relative path string satisfies include/exclude tokens.
Matches against the path without extension to avoid matching .safetensors
when searching for 's'.
"""
# Use path without extension for matching
path_for_matching = BaseModelService._remove_model_extension(path_lower)
if any(term and term in path_for_matching for term in exclude_terms):
return False
for term in include_terms:
if term and term not in path_lower:
if term and term not in path_for_matching:
return False
return True
@staticmethod
def _relative_path_sort_key(relative_path: str, include_terms: List[str]) -> tuple:
"""Sort paths by how well they satisfy the include tokens."""
path_lower = relative_path.lower()
"""Sort paths by how well they satisfy the include tokens.
Sorts based on path without extension for consistent ordering.
"""
# Use path without extension for sorting
path_for_sorting = BaseModelService._remove_model_extension(
relative_path.lower()
)
prefix_hits = sum(
1 for term in include_terms if term and path_lower.startswith(term)
1 for term in include_terms if term and path_for_sorting.startswith(term)
)
match_positions = [
path_lower.find(term)
path_for_sorting.find(term)
for term in include_terms
if term and term in path_lower
if term and term in path_for_sorting
]
first_match_index = min(match_positions) if match_positions else 0
return (-prefix_hits, first_match_index, len(relative_path), path_lower)
return (
-prefix_hits,
first_match_index,
len(path_for_sorting),
path_for_sorting,
)
async def search_relative_paths(
self, search_term: str, limit: int = 15
self, search_term: str, limit: int = 15, offset: int = 0
) -> List[str]:
"""Search model relative file paths for autocomplete functionality"""
cache = await self.scanner.get_cached_data()
@@ -849,6 +1002,7 @@ class BaseModelService(ABC):
# Get model roots for path calculation
model_roots = self.scanner.get_model_roots()
# Collect all matching paths first (needed for proper sorting and offset)
for model in cache.raw_data:
file_path = model.get("file_path", "")
if not file_path:
@@ -877,12 +1031,12 @@ class BaseModelService(ABC):
):
matching_paths.append(relative_path)
if len(matching_paths) >= limit * 2: # Get more for better sorting
break
# Sort by relevance (prefix and earliest hits first, then by length and alphabetically)
matching_paths.sort(
key=lambda relative: self._relative_path_sort_key(relative, include_terms)
)
return matching_paths[:limit]
# Apply offset and limit
start = min(offset, len(matching_paths))
end = min(start + limit, len(matching_paths))
return matching_paths[start:end]

View File

@@ -0,0 +1,593 @@
"""Batch import service for importing multiple images as recipes."""
from __future__ import annotations
import asyncio
import logging
import os
import time
import uuid
from dataclasses import dataclass, field
from enum import Enum
from typing import Any, Callable, Dict, List, Optional, Set
from aiohttp import web
from .recipes import (
RecipeAnalysisService,
RecipePersistenceService,
RecipeValidationError,
RecipeDownloadError,
RecipeNotFoundError,
)
class ImportItemType(Enum):
"""Type of import item."""
URL = "url"
LOCAL_PATH = "local_path"
class ImportStatus(Enum):
"""Status of an individual import item."""
PENDING = "pending"
PROCESSING = "processing"
SUCCESS = "success"
FAILED = "failed"
SKIPPED = "skipped"
@dataclass
class BatchImportItem:
"""Represents a single item to import."""
id: str
source: str
item_type: ImportItemType
status: ImportStatus = ImportStatus.PENDING
error_message: Optional[str] = None
recipe_name: Optional[str] = None
recipe_id: Optional[str] = None
duration: float = 0.0
@dataclass
class BatchImportProgress:
"""Tracks progress of a batch import operation."""
operation_id: str
total: int
completed: int = 0
success: int = 0
failed: int = 0
skipped: int = 0
current_item: str = ""
status: str = "pending"
started_at: float = field(default_factory=time.time)
finished_at: Optional[float] = None
items: List[BatchImportItem] = field(default_factory=list)
tags: List[str] = field(default_factory=list)
skip_no_metadata: bool = False
skip_duplicates: bool = False
def to_dict(self) -> Dict[str, Any]:
return {
"operation_id": self.operation_id,
"total": self.total,
"completed": self.completed,
"success": self.success,
"failed": self.failed,
"skipped": self.skipped,
"current_item": self.current_item,
"status": self.status,
"started_at": self.started_at,
"finished_at": self.finished_at,
"progress_percent": round((self.completed / self.total) * 100, 1)
if self.total > 0
else 0,
"items": [
{
"id": item.id,
"source": item.source,
"item_type": item.item_type.value,
"status": item.status.value,
"error_message": item.error_message,
"recipe_name": item.recipe_name,
"recipe_id": item.recipe_id,
"duration": item.duration,
}
for item in self.items
],
}
class AdaptiveConcurrencyController:
"""Adjusts concurrency based on task performance."""
def __init__(
self,
min_concurrency: int = 1,
max_concurrency: int = 5,
initial_concurrency: int = 3,
) -> None:
self.min_concurrency = min_concurrency
self.max_concurrency = max_concurrency
self.current_concurrency = initial_concurrency
self._task_durations: List[float] = []
self._recent_errors = 0
self._recent_successes = 0
def record_result(self, duration: float, success: bool) -> None:
self._task_durations.append(duration)
if len(self._task_durations) > 10:
self._task_durations.pop(0)
if success:
self._recent_successes += 1
if duration < 1.0 and self.current_concurrency < self.max_concurrency:
self.current_concurrency = min(
self.current_concurrency + 1, self.max_concurrency
)
elif duration > 10.0 and self.current_concurrency > self.min_concurrency:
self.current_concurrency = max(
self.current_concurrency - 1, self.min_concurrency
)
else:
self._recent_errors += 1
if self.current_concurrency > self.min_concurrency:
self.current_concurrency = max(
self.current_concurrency - 1, self.min_concurrency
)
def reset_counters(self) -> None:
self._recent_errors = 0
self._recent_successes = 0
def get_semaphore(self) -> asyncio.Semaphore:
return asyncio.Semaphore(self.current_concurrency)
class BatchImportService:
"""Service for batch importing images as recipes."""
SUPPORTED_EXTENSIONS: Set[str] = {".jpg", ".jpeg", ".png", ".webp", ".gif", ".bmp"}
def __init__(
self,
*,
analysis_service: RecipeAnalysisService,
persistence_service: RecipePersistenceService,
ws_manager: Any,
logger: logging.Logger,
) -> None:
self._analysis_service = analysis_service
self._persistence_service = persistence_service
self._ws_manager = ws_manager
self._logger = logger
self._active_operations: Dict[str, BatchImportProgress] = {}
self._cancellation_flags: Dict[str, bool] = {}
self._concurrency_controller = AdaptiveConcurrencyController()
def is_import_running(self, operation_id: Optional[str] = None) -> bool:
if operation_id:
progress = self._active_operations.get(operation_id)
return progress is not None and progress.status in ("pending", "running")
return any(
p.status in ("pending", "running") for p in self._active_operations.values()
)
def get_progress(self, operation_id: str) -> Optional[BatchImportProgress]:
return self._active_operations.get(operation_id)
def cancel_import(self, operation_id: str) -> bool:
if operation_id in self._active_operations:
self._cancellation_flags[operation_id] = True
return True
return False
def _validate_url(self, url: str) -> bool:
import re
url_pattern = re.compile(
r"^https?://"
r"(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+[A-Z]{2,6}\.?|"
r"localhost|"
r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
r"(?::\d+)?"
r"(?:/?|[/?]\S+)$",
re.IGNORECASE,
)
return url_pattern.match(url) is not None
def _validate_local_path(self, path: str) -> bool:
try:
normalized = os.path.normpath(path)
if not os.path.isabs(normalized):
return False
if ".." in normalized:
return False
return True
except Exception:
return False
def _is_duplicate_source(
self,
source: str,
item_type: ImportItemType,
recipe_scanner: Any,
) -> bool:
try:
cache = recipe_scanner.get_cached_data_sync()
if not cache:
return False
for recipe in getattr(cache, "raw_data", []):
source_path = recipe.get("source_path") or recipe.get("source_url")
if source_path and source_path == source:
return True
return False
except Exception:
self._logger.warning("Failed to check for duplicates", exc_info=True)
return False
async def start_batch_import(
self,
*,
recipe_scanner_getter: Callable[[], Any],
civitai_client_getter: Callable[[], Any],
items: List[Dict[str, str]],
tags: Optional[List[str]] = None,
skip_no_metadata: bool = False,
skip_duplicates: bool = False,
) -> str:
operation_id = str(uuid.uuid4())
import_items = []
for idx, item in enumerate(items):
source = item.get("source", "")
item_type_str = item.get("type", "url")
if item_type_str == "url" or source.startswith(("http://", "https://")):
item_type = ImportItemType.URL
else:
item_type = ImportItemType.LOCAL_PATH
batch_import_item = BatchImportItem(
id=f"{operation_id}_{idx}",
source=source,
item_type=item_type,
)
import_items.append(batch_import_item)
progress = BatchImportProgress(
operation_id=operation_id,
total=len(import_items),
items=import_items,
tags=tags or [],
skip_no_metadata=skip_no_metadata,
skip_duplicates=skip_duplicates,
)
self._active_operations[operation_id] = progress
self._cancellation_flags[operation_id] = False
asyncio.create_task(
self._run_batch_import(
operation_id=operation_id,
recipe_scanner_getter=recipe_scanner_getter,
civitai_client_getter=civitai_client_getter,
)
)
return operation_id
async def start_directory_import(
self,
*,
recipe_scanner_getter: Callable[[], Any],
civitai_client_getter: Callable[[], Any],
directory: str,
recursive: bool = True,
tags: Optional[List[str]] = None,
skip_no_metadata: bool = False,
skip_duplicates: bool = False,
) -> str:
image_paths = await self._discover_images(directory, recursive)
items = [{"source": path, "type": "local_path"} for path in image_paths]
return await self.start_batch_import(
recipe_scanner_getter=recipe_scanner_getter,
civitai_client_getter=civitai_client_getter,
items=items,
tags=tags,
skip_no_metadata=skip_no_metadata,
skip_duplicates=skip_duplicates,
)
async def _discover_images(
self,
directory: str,
recursive: bool = True,
) -> List[str]:
if not os.path.isdir(directory):
raise RecipeValidationError(f"Directory not found: {directory}")
image_paths: List[str] = []
if recursive:
for root, _, files in os.walk(directory):
for filename in files:
if self._is_supported_image(filename):
image_paths.append(os.path.join(root, filename))
else:
for filename in os.listdir(directory):
filepath = os.path.join(directory, filename)
if os.path.isfile(filepath) and self._is_supported_image(filename):
image_paths.append(filepath)
return sorted(image_paths)
def _is_supported_image(self, filename: str) -> bool:
ext = os.path.splitext(filename)[1].lower()
return ext in self.SUPPORTED_EXTENSIONS
async def _run_batch_import(
self,
*,
operation_id: str,
recipe_scanner_getter: Callable[[], Any],
civitai_client_getter: Callable[[], Any],
) -> None:
progress = self._active_operations.get(operation_id)
if not progress:
return
progress.status = "running"
await self._broadcast_progress(progress)
self._concurrency_controller = AdaptiveConcurrencyController()
async def process_item(item: BatchImportItem) -> None:
if self._cancellation_flags.get(operation_id, False):
return
progress.current_item = (
os.path.basename(item.source)
if item.item_type == ImportItemType.LOCAL_PATH
else item.source[:50]
)
item.status = ImportStatus.PROCESSING
await self._broadcast_progress(progress)
start_time = time.time()
try:
result = await self._import_single_item(
item=item,
recipe_scanner_getter=recipe_scanner_getter,
civitai_client_getter=civitai_client_getter,
tags=progress.tags,
skip_no_metadata=progress.skip_no_metadata,
skip_duplicates=progress.skip_duplicates,
semaphore=self._concurrency_controller.get_semaphore(),
)
duration = time.time() - start_time
item.duration = duration
self._concurrency_controller.record_result(
duration, result.get("success", False)
)
if result.get("success"):
item.status = ImportStatus.SUCCESS
item.recipe_name = result.get("recipe_name")
item.recipe_id = result.get("recipe_id")
progress.success += 1
elif result.get("skipped"):
item.status = ImportStatus.SKIPPED
item.error_message = result.get("error")
progress.skipped += 1
else:
item.status = ImportStatus.FAILED
item.error_message = result.get("error")
progress.failed += 1
except Exception as e:
self._logger.error(f"Error importing {item.source}: {e}")
item.status = ImportStatus.FAILED
item.error_message = str(e)
item.duration = time.time() - start_time
progress.failed += 1
self._concurrency_controller.record_result(item.duration, False)
progress.completed += 1
await self._broadcast_progress(progress)
tasks = [process_item(item) for item in progress.items]
await asyncio.gather(*tasks, return_exceptions=True)
if self._cancellation_flags.get(operation_id, False):
progress.status = "cancelled"
else:
progress.status = "completed"
progress.finished_at = time.time()
progress.current_item = ""
await self._broadcast_progress(progress)
await asyncio.sleep(5)
self._cleanup_operation(operation_id)
async def _import_single_item(
self,
*,
item: BatchImportItem,
recipe_scanner_getter: Callable[[], Any],
civitai_client_getter: Callable[[], Any],
tags: List[str],
skip_no_metadata: bool,
skip_duplicates: bool,
semaphore: asyncio.Semaphore,
) -> Dict[str, Any]:
async with semaphore:
recipe_scanner = recipe_scanner_getter()
if recipe_scanner is None:
return {"success": False, "error": "Recipe scanner unavailable"}
try:
if item.item_type == ImportItemType.URL:
if not self._validate_url(item.source):
return {
"success": False,
"error": f"Invalid URL format: {item.source}",
}
if skip_duplicates:
if self._is_duplicate_source(
item.source, item.item_type, recipe_scanner
):
return {
"success": False,
"skipped": True,
"error": "Duplicate source URL",
}
civitai_client = civitai_client_getter()
analysis_result = await self._analysis_service.analyze_remote_image(
url=item.source,
recipe_scanner=recipe_scanner,
civitai_client=civitai_client,
)
else:
if not self._validate_local_path(item.source):
return {
"success": False,
"error": f"Invalid or unsafe path: {item.source}",
}
if not os.path.exists(item.source):
return {
"success": False,
"error": f"File not found: {item.source}",
}
if skip_duplicates:
if self._is_duplicate_source(
item.source, item.item_type, recipe_scanner
):
return {
"success": False,
"skipped": True,
"error": "Duplicate source path",
}
analysis_result = await self._analysis_service.analyze_local_image(
file_path=item.source,
recipe_scanner=recipe_scanner,
)
payload = analysis_result.payload
if payload.get("error"):
if skip_no_metadata and "No metadata" in payload.get("error", ""):
return {
"success": False,
"skipped": True,
"error": payload["error"],
}
return {"success": False, "error": payload["error"]}
loras = payload.get("loras", [])
if not loras:
if skip_no_metadata:
return {
"success": False,
"skipped": True,
"error": "No LoRAs found in image",
}
# When skip_no_metadata is False, allow importing images without LoRAs
# Continue with empty loras list
recipe_name = self._generate_recipe_name(item, payload)
all_tags = list(set(tags + (payload.get("tags", []) or [])))
metadata = {
"base_model": payload.get("base_model", ""),
"loras": loras,
"gen_params": payload.get("gen_params", {}),
"source_path": item.source,
}
if payload.get("checkpoint"):
metadata["checkpoint"] = payload["checkpoint"]
image_bytes = None
image_base64 = payload.get("image_base64")
if item.item_type == ImportItemType.LOCAL_PATH:
with open(item.source, "rb") as f:
image_bytes = f.read()
image_base64 = None
save_result = await self._persistence_service.save_recipe(
recipe_scanner=recipe_scanner,
image_bytes=image_bytes,
image_base64=image_base64,
name=recipe_name,
tags=all_tags,
metadata=metadata,
extension=payload.get("extension"),
)
if save_result.status == 200:
return {
"success": True,
"recipe_name": recipe_name,
"recipe_id": save_result.payload.get("id"),
}
else:
return {
"success": False,
"error": save_result.payload.get(
"error", "Failed to save recipe"
),
}
except RecipeValidationError as e:
return {"success": False, "error": str(e)}
except RecipeDownloadError as e:
return {"success": False, "error": str(e)}
except RecipeNotFoundError as e:
return {"success": False, "skipped": True, "error": str(e)}
except Exception as e:
self._logger.error(
f"Unexpected error importing {item.source}: {e}", exc_info=True
)
return {"success": False, "error": str(e)}
def _generate_recipe_name(
self, item: BatchImportItem, payload: Dict[str, Any]
) -> str:
if item.item_type == ImportItemType.LOCAL_PATH:
base_name = os.path.splitext(os.path.basename(item.source))[0]
return base_name[:100]
else:
loras = payload.get("loras", [])
if loras:
first_lora = loras[0].get("name", "Recipe")
return f"Import - {first_lora}"[:100]
return f"Imported Recipe {item.id[:8]}"
async def _broadcast_progress(self, progress: BatchImportProgress) -> None:
await self._ws_manager.broadcast(
{
"type": "batch_import_progress",
**progress.to_dict(),
}
)
def _cleanup_operation(self, operation_id: str) -> None:
if operation_id in self._cancellation_flags:
del self._cancellation_flags[operation_id]
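# Caller-side sketch (illustrative, not from this diff): the two getters below
# are hypothetical placeholders; a real caller would return the actual recipe
# scanner and Civitai client instances.
async def _sketch_directory_import(service: BatchImportService) -> None:
    operation_id = await service.start_directory_import(
        recipe_scanner_getter=lambda: None,   # placeholder getter (assumption)
        civitai_client_getter=lambda: None,   # placeholder getter (assumption)
        directory="/path/to/images",          # placeholder directory
        recursive=True,
        tags=["imported"],
        skip_no_metadata=True,
        skip_duplicates=True,
    )
    # The same payload is broadcast over the websocket as "batch_import_progress";
    # polling get_progress() works as well.
    while service.is_import_running(operation_id):
        progress = service.get_progress(operation_id)
        if progress is not None:
            print(progress.to_dict()["progress_percent"])
        await asyncio.sleep(1)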

View File

@@ -58,6 +58,7 @@ class CacheEntryValidator:
'preview_nsfw_level': (0, False),
'notes': ('', False),
'usage_tips': ('', False),
'hash_status': ('completed', False),
}
@classmethod
@@ -90,13 +91,31 @@ class CacheEntryValidator:
errors: List[str] = []
repaired = False
# If auto_repair is on, we work on a copy. If not, we still need a safe way to check fields.
working_entry = dict(entry) if auto_repair else entry
# Determine effective hash_status for validation logic
hash_status = entry.get('hash_status')
if hash_status is None:
if auto_repair:
working_entry['hash_status'] = 'completed'
repaired = True
hash_status = 'completed'
for field_name, (default_value, is_required) in cls.CORE_FIELDS.items():
value = working_entry.get(field_name)
# Get current value from the original entry to avoid side effects during validation
value = entry.get(field_name)
# Check if field is missing or None
if value is None:
# Special case: sha256 can be None/empty if hash_status is pending
if field_name == 'sha256' and hash_status == 'pending':
if auto_repair:
working_entry[field_name] = ''
repaired = True
continue
if is_required:
errors.append(f"Required field '{field_name}' is missing or None")
if auto_repair:
@@ -107,6 +126,10 @@ class CacheEntryValidator:
# Validate field type and value
field_error = cls._validate_field(field_name, value, default_value)
if field_error:
# Special case: allow empty string for sha256 if pending
if field_name == 'sha256' and hash_status == 'pending' and value == '':
continue
errors.append(field_error)
if auto_repair:
working_entry[field_name] = cls._get_default_copy(default_value)
@@ -125,23 +148,32 @@ class CacheEntryValidator:
)
# Special validation: sha256 must not be empty for required field
# BUT allow empty sha256 when hash_status is pending (lazy hash calculation)
sha256 = working_entry.get('sha256', '')
# Use the effective hash_status we determined earlier
if not sha256 or (isinstance(sha256, str) and not sha256.strip()):
errors.append("Required field 'sha256' is empty")
# Cannot repair empty sha256 - entry is invalid
return ValidationResult(
is_valid=False,
repaired=repaired,
errors=errors,
entry=working_entry if auto_repair else None
)
# Allow empty sha256 for lazy hash calculation (checkpoints)
if hash_status != 'pending':
errors.append("Required field 'sha256' is empty")
# Cannot repair empty sha256 - entry is invalid
return ValidationResult(
is_valid=False,
repaired=repaired,
errors=errors,
entry=working_entry if auto_repair else None
)
# Normalize sha256 to lowercase if needed
if isinstance(sha256, str):
normalized_sha = sha256.lower().strip()
if normalized_sha != sha256:
working_entry['sha256'] = normalized_sha
repaired = True
if auto_repair:
working_entry['sha256'] = normalized_sha
repaired = True
else:
# If not auto-repairing, we don't consider case difference as a "critical error"
# that invalidates the entry, but we also don't mark it repaired.
pass
# Determine if entry is valid
# Entry is valid if no critical required field errors remain after repair
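# Two illustrative cache entries (field values are assumptions) showing the
# rule the change above encodes: an empty sha256 is tolerated only while
# hash_status is "pending" (lazy hash calculation); with any other status an
# empty sha256 still invalidates the entry.
_pending_entry = {"file_path": "/models/foo.safetensors", "sha256": "", "hash_status": "pending"}
_invalid_entry = {"file_path": "/models/bar.safetensors", "sha256": "", "hash_status": "completed"}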

View File

@@ -1,3 +1,4 @@
import asyncio
import json
import logging
import os
@@ -13,20 +14,36 @@ from .model_hash_index import ModelHashIndex
logger = logging.getLogger(__name__)
class CheckpointScanner(ModelScanner):
"""Service for scanning and managing checkpoint files"""
def __init__(self):
# Define supported file extensions
file_extensions = {'.ckpt', '.pt', '.pt2', '.bin', '.pth', '.safetensors', '.pkl', '.sft', '.gguf'}
file_extensions = {
".ckpt",
".pt",
".pt2",
".bin",
".pth",
".safetensors",
".pkl",
".sft",
".gguf",
}
super().__init__(
model_type="checkpoint",
model_class=CheckpointMetadata,
file_extensions=file_extensions,
hash_index=ModelHashIndex()
hash_index=ModelHashIndex(),
)
if not hasattr(self, "_hash_calculation_lock"):
self._hash_calculation_lock = asyncio.Lock()
self._hash_calculation_tasks: dict[str, asyncio.Task[Optional[str]]] = {}
async def _create_default_metadata(self, file_path: str) -> Optional[CheckpointMetadata]:
async def _create_default_metadata(
self, file_path: str
) -> Optional[CheckpointMetadata]:
"""Create default metadata for checkpoint without calculating hash (lazy hash).
Checkpoints are typically large (10GB+), so we skip hash calculation during initial
@@ -59,7 +76,7 @@ class CheckpointScanner(ModelScanner):
modelDescription="",
sub_type="checkpoint",
from_civitai=False, # Mark as local model since no hash yet
hash_status="pending" # Mark hash as pending
hash_status="pending", # Mark hash as pending
)
# Save the created metadata
@@ -69,11 +86,13 @@ class CheckpointScanner(ModelScanner):
return metadata
except Exception as e:
logger.error(f"Error creating default checkpoint metadata for {file_path}: {e}")
logger.error(
f"Error creating default checkpoint metadata for {file_path}: {e}"
)
return None
async def calculate_hash_for_model(self, file_path: str) -> Optional[str]:
"""Calculate hash for a checkpoint on-demand.
"""Calculate hash for a checkpoint on-demand with per-file singleflight.
Args:
file_path: Path to the model file
@@ -81,19 +100,78 @@ class CheckpointScanner(ModelScanner):
Returns:
SHA256 hash string, or None if calculation failed
"""
from ..utils.file_utils import calculate_sha256
try:
real_path = os.path.realpath(file_path)
if not os.path.exists(real_path):
logger.error(f"File not found for hash calculation: {file_path}")
return None
metadata, _ = await MetadataManager.load_metadata(
file_path, self.model_class
)
if (
metadata is not None
and metadata.hash_status == "completed"
and metadata.sha256
):
return metadata.sha256
async with self._hash_calculation_lock:
metadata, _ = await MetadataManager.load_metadata(
file_path, self.model_class
)
if (
metadata is not None
and metadata.hash_status == "completed"
and metadata.sha256
):
return metadata.sha256
task = self._hash_calculation_tasks.get(real_path)
if task is None:
task = asyncio.create_task(
self._run_hash_calculation_task(file_path, real_path)
)
self._hash_calculation_tasks[real_path] = task
return await asyncio.shield(task)
except Exception as e:
logger.error(f"Error calculating hash for {file_path}: {e}")
return None
async def _run_hash_calculation_task(
self, file_path: str, real_path: str
) -> Optional[str]:
"""Run a hash calculation task and remove it from the in-flight map."""
try:
return await self._calculate_hash_for_model_uncached(file_path, real_path)
finally:
task = asyncio.current_task()
async with self._hash_calculation_lock:
if self._hash_calculation_tasks.get(real_path) is task:
del self._hash_calculation_tasks[real_path]
async def _calculate_hash_for_model_uncached(
self, file_path: str, real_path: str
) -> Optional[str]:
"""Calculate hash for a checkpoint without checking in-flight tasks."""
from ..utils.file_utils import calculate_sha256
try:
# Load current metadata
metadata, _ = await MetadataManager.load_metadata(file_path, self.model_class)
metadata, should_skip = await MetadataManager.load_metadata(
file_path, self.model_class
)
if metadata is None:
logger.error(f"No metadata found for {file_path}")
return None
if should_skip:
logger.error(f"Invalid metadata found for {file_path}")
return None
created_metadata = await self._create_default_metadata(file_path)
if created_metadata is None:
logger.error(f"No metadata found for {file_path}")
return None
metadata = created_metadata
# Check if hash is already calculated
if metadata.hash_status == "completed" and metadata.sha256:
@@ -122,7 +200,9 @@ class CheckpointScanner(ModelScanner):
logger.error(f"Error calculating hash for {file_path}: {e}")
# Update status to failed
try:
metadata, _ = await MetadataManager.load_metadata(file_path, self.model_class)
metadata, _ = await MetadataManager.load_metadata(
file_path, self.model_class
)
if metadata:
metadata.hash_status = "failed"
await MetadataManager.save_metadata(file_path, metadata)
@@ -130,7 +210,9 @@ class CheckpointScanner(ModelScanner):
pass
return None
async def calculate_all_pending_hashes(self, progress_callback=None) -> Dict[str, int]:
async def calculate_all_pending_hashes(
self, progress_callback=None
) -> Dict[str, int]:
"""Calculate hashes for all checkpoints with pending hash status.
If cache is not initialized, scans filesystem directly for metadata files
@@ -148,22 +230,23 @@ class CheckpointScanner(ModelScanner):
if cache and cache.raw_data:
# Use cache if available
pending_models = [
item for item in cache.raw_data
if item.get('hash_status') != 'completed' or not item.get('sha256')
item
for item in cache.raw_data
if item.get("hash_status") != "completed" or not item.get("sha256")
]
else:
# Cache not initialized, scan filesystem directly
pending_models = await self._find_pending_models_from_filesystem()
if not pending_models:
return {'completed': 0, 'failed': 0, 'total': 0}
return {"completed": 0, "failed": 0, "total": 0}
total = len(pending_models)
completed = 0
failed = 0
for i, model_data in enumerate(pending_models):
file_path = model_data.get('file_path')
file_path = model_data.get("file_path")
if not file_path:
continue
@@ -183,11 +266,7 @@ class CheckpointScanner(ModelScanner):
except Exception:
pass
return {
'completed': completed,
'failed': failed,
'total': total
}
return {"completed": completed, "failed": failed, "total": total}
async def _find_pending_models_from_filesystem(self) -> List[Dict[str, Any]]:
"""Scan filesystem for checkpoint metadata files with pending hash status."""
@@ -199,21 +278,21 @@ class CheckpointScanner(ModelScanner):
for dirpath, _dirnames, filenames in os.walk(root_path):
for filename in filenames:
if not filename.endswith('.metadata.json'):
if not filename.endswith(".metadata.json"):
continue
metadata_path = os.path.join(dirpath, filename)
try:
with open(metadata_path, 'r', encoding='utf-8') as f:
with open(metadata_path, "r", encoding="utf-8") as f:
data = json.load(f)
# Check if hash is pending
hash_status = data.get('hash_status', 'completed')
sha256 = data.get('sha256', '')
hash_status = data.get("hash_status", "completed")
sha256 = data.get("sha256", "")
if hash_status != 'completed' or not sha256:
if hash_status != "completed" or not sha256:
# Find corresponding model file
model_name = filename.replace('.metadata.json', '')
model_name = filename.replace(".metadata.json", "")
model_path = None
# Look for model file with matching name
@@ -224,29 +303,58 @@ class CheckpointScanner(ModelScanner):
break
if model_path:
pending_models.append({
'file_path': model_path.replace(os.sep, '/'),
'hash_status': hash_status,
'sha256': sha256,
**{k: v for k, v in data.items() if k not in ['file_path', 'hash_status', 'sha256']}
})
pending_models.append(
{
"file_path": model_path.replace(os.sep, "/"),
"hash_status": hash_status,
"sha256": sha256,
**{
k: v
for k, v in data.items()
if k
not in [
"file_path",
"hash_status",
"sha256",
]
},
}
)
except (json.JSONDecodeError, Exception) as e:
logger.debug(f"Error reading metadata file {metadata_path}: {e}")
logger.debug(
f"Error reading metadata file {metadata_path}: {e}"
)
continue
return pending_models
def _resolve_sub_type(self, root_path: Optional[str]) -> Optional[str]:
"""Resolve the sub-type based on the root path."""
"""Resolve the sub-type based on the root path.
Checks both standard ComfyUI paths and LoRA Manager's extra folder paths.
"""
if not root_path:
return None
# Check standard ComfyUI checkpoint paths
if config.checkpoints_roots and root_path in config.checkpoints_roots:
return "checkpoint"
# Check extra checkpoint paths
if (
config.extra_checkpoints_roots
and root_path in config.extra_checkpoints_roots
):
return "checkpoint"
# Check standard ComfyUI unet paths
if config.unet_roots and root_path in config.unet_roots:
return "diffusion_model"
# Check extra unet paths
if config.extra_unet_roots and root_path in config.extra_unet_roots:
return "diffusion_model"
return None
def adjust_metadata(self, metadata, file_path, root_path):
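# Generic sketch of the per-key singleflight pattern used by
# calculate_hash_for_model above (illustrative, not from this diff; assumes
# asyncio is imported):
class _Singleflight:
    def __init__(self) -> None:
        self._lock = asyncio.Lock()
        self._tasks: dict[str, asyncio.Task] = {}

    async def run(self, key: str, make_coro):
        async with self._lock:
            task = self._tasks.get(key)
            if task is None:
                task = asyncio.create_task(self._wrap(key, make_coro()))
                self._tasks[key] = task
        # shield so one caller's cancellation does not cancel the shared task
        return await asyncio.shield(task)

    async def _wrap(self, key: str, coro):
        try:
            return await coro
        finally:
            async with self._lock:
                if self._tasks.get(key) is asyncio.current_task():
                    del self._tasks[key]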

View File

@@ -42,6 +42,7 @@ class CheckpointService(BaseModelService):
"notes": checkpoint_data.get("notes", ""),
"sub_type": sub_type,
"favorite": checkpoint_data.get("favorite", False),
"exclude": bool(checkpoint_data.get("exclude", False)),
"update_available": bool(checkpoint_data.get("update_available", False)),
"skip_metadata_refresh": bool(checkpoint_data.get("skip_metadata_refresh", False)),
"civitai": self.filter_civitai_data(checkpoint_data.get("civitai", {}), minimal=True)

View File

@@ -0,0 +1,430 @@
from __future__ import annotations
import asyncio
import json
import logging
import re
from datetime import datetime, timezone
from typing import Any, Dict, List, Optional, Set, Tuple
from ..utils.constants import SUPPORTED_DOWNLOAD_SKIP_BASE_MODELS
from .downloader import get_downloader
logger = logging.getLogger(__name__)
class CivitaiBaseModelService:
"""Service for fetching and managing Civitai base models.
This service provides:
- Fetching base models from Civitai API
- Caching with TTL (7 days default)
- Merging hardcoded and remote base models
- Generating abbreviations for new/unknown models
"""
_instance: Optional[CivitaiBaseModelService] = None
_lock = asyncio.Lock()
# Default TTL for cache in seconds (7 days)
DEFAULT_CACHE_TTL = 7 * 24 * 60 * 60
# Civitai API endpoint for enums
CIVITAI_ENUMS_URL = "https://civitai.red/api/v1/enums"
@classmethod
async def get_instance(cls) -> CivitaiBaseModelService:
"""Get singleton instance of the service."""
async with cls._lock:
if cls._instance is None:
cls._instance = cls()
return cls._instance
def __init__(self):
"""Initialize the service."""
if hasattr(self, "_initialized"):
return
self._initialized = True
# Cache storage
self._cache: Optional[Dict[str, Any]] = None
self._cache_timestamp: Optional[datetime] = None
self._cache_ttl = self.DEFAULT_CACHE_TTL
# Hardcoded models for fallback
self._hardcoded_models = set(SUPPORTED_DOWNLOAD_SKIP_BASE_MODELS)
logger.info("CivitaiBaseModelService initialized")
async def get_base_models(self, force_refresh: bool = False) -> Dict[str, Any]:
"""Get merged base models (hardcoded + remote).
Args:
force_refresh: If True, fetch from API regardless of cache state.
Returns:
Dictionary containing:
- models: List of merged base model names
- source: 'cache', 'api', or 'fallback'
- last_updated: ISO timestamp of last successful API fetch
- hardcoded_count: Number of hardcoded models
- remote_count: Number of remote models
- merged_count: Total unique models
"""
# Check if cache is valid
if not force_refresh and self._is_cache_valid():
logger.debug("Returning cached base models")
return self._build_response("cache")
# Try to fetch from API
try:
remote_models = await self._fetch_from_civitai()
if remote_models:
self._update_cache(remote_models)
return self._build_response("api")
except Exception as e:
logger.error(f"Failed to fetch base models from Civitai: {e}")
# Fallback to hardcoded models
return self._build_response("fallback")
async def refresh_cache(self) -> Dict[str, Any]:
"""Force refresh the cache from Civitai API.
Returns:
Response dict same as get_base_models()
"""
return await self.get_base_models(force_refresh=True)
def get_cache_status(self) -> Dict[str, Any]:
"""Get current cache status.
Returns:
Dictionary containing:
- has_cache: Whether cache exists
- last_updated: ISO timestamp or None
- is_expired: Whether cache is expired
- ttl_seconds: TTL in seconds
- age_seconds: Age of cache in seconds (if exists)
"""
if self._cache is None or self._cache_timestamp is None:
return {
"has_cache": False,
"last_updated": None,
"is_expired": True,
"ttl_seconds": self._cache_ttl,
"age_seconds": None,
}
age = (datetime.now(timezone.utc) - self._cache_timestamp).total_seconds()
return {
"has_cache": True,
"last_updated": self._cache_timestamp.isoformat(),
"is_expired": age > self._cache_ttl,
"ttl_seconds": self._cache_ttl,
"age_seconds": int(age),
}
def generate_abbreviation(self, model_name: str) -> str:
"""Generate abbreviation for a base model name.
Algorithm:
1. Extract version patterns (e.g., "2.5" from "Wan Video 2.5")
2. Extract main acronym (e.g., "SD" from "SD 1.5")
3. Handle special cases (Flux, Wan, etc.)
4. Fallback to first letters of words (max 4 chars)
Args:
model_name: Full base model name
Returns:
Generated abbreviation (max 4 characters)
"""
if not model_name or not isinstance(model_name, str):
return "OTH"
name = model_name.strip()
if not name:
return "OTH"
# Check if it's already in hardcoded abbreviations
# This is a simplified check - in practice you'd have a mapping
lower_name = name.lower()
# Special cases
special_cases = {
"sd 1.4": "SD1",
"sd 1.5": "SD1",
"sd 1.5 lcm": "SD1",
"sd 1.5 hyper": "SD1",
"sd 2.0": "SD2",
"sd 2.1": "SD2",
"sd 3": "SD3",
"sd 3.5": "SD3",
"sd 3.5 medium": "SD3",
"sd 3.5 large": "SD3",
"sd 3.5 large turbo": "SD3",
"sdxl 1.0": "XL",
"sdxl lightning": "XL",
"sdxl hyper": "XL",
"flux.1 d": "F1D",
"flux.1 s": "F1S",
"flux.1 krea": "F1KR",
"flux.1 kontext": "F1KX",
"flux.2 d": "F2D",
"flux.2 klein 9b": "FK9",
"flux.2 klein 9b-base": "FK9B",
"flux.2 klein 4b": "FK4",
"flux.2 klein 4b-base": "FK4B",
"auraflow": "AF",
"chroma": "CHR",
"pixart a": "PXA",
"pixart e": "PXE",
"hunyuan 1": "HY",
"hunyuan video": "HYV",
"lumina": "L",
"kolors": "KLR",
"noobai": "NAI",
"illustrious": "IL",
"pony": "PONY",
"pony v7": "PNY7",
"hidream": "HID",
"qwen": "QWEN",
"zimageturbo": "ZIT",
"zimagebase": "ZIB",
"anima": "ANI",
"svd": "SVD",
"ltxv": "LTXV",
"ltxv2": "LTV2",
"ltxv 2.3": "LTX",
"cogvideox": "CVX",
"mochi": "MCHI",
"wan video": "WAN",
"wan video 1.3b t2v": "WAN",
"wan video 14b t2v": "WAN",
"wan video 14b i2v 480p": "WAN",
"wan video 14b i2v 720p": "WAN",
"wan video 2.2 ti2v-5b": "WAN",
"wan video 2.2 t2v-a14b": "WAN",
"wan video 2.2 i2v-a14b": "WAN",
"wan video 2.5 t2v": "WAN",
"wan video 2.5 i2v": "WAN",
}
if lower_name in special_cases:
return special_cases[lower_name]
# Try to extract acronym from version pattern
# e.g., "Model Name 2.5" -> "MN25"
version_match = re.search(r"(\d+(?:\.\d+)?)", name)
version = version_match.group(1) if version_match else ""
# Remove version and common words
words = re.sub(r"\d+(?:\.\d+)?", "", name)
words = re.sub(
r"\b(model|video|diffusion|checkpoint|textualinversion)\b",
"",
words,
flags=re.I,
)
words = words.strip()
# Get first letters of remaining words
tokens = re.findall(r"[A-Za-z]+", words)
if tokens:
# Build abbreviation from first letters
abbrev = "".join(token[0].upper() for token in tokens)
# Add version if present
if version:
# Clean version (remove dots for abbreviation)
version_clean = version.replace(".", "")
abbrev = abbrev[: 4 - len(version_clean)] + version_clean
return abbrev[:4]
# Final fallback: just take first 4 alphanumeric chars
alphanumeric = re.sub(r"[^A-Za-z0-9]", "", name)
if alphanumeric:
return alphanumeric[:4].upper()
return "OTH"
async def _fetch_from_civitai(self) -> Optional[Set[str]]:
"""Fetch base models from Civitai API.
Returns:
Set of base model names, or None if failed
"""
try:
downloader = await get_downloader()
success, result = await downloader.make_request(
"GET",
self.CIVITAI_ENUMS_URL,
use_auth=False, # enums endpoint doesn't require auth
)
if not success:
logger.warning(f"Failed to fetch enums from Civitai: {result}")
return None
if isinstance(result, str):
data = json.loads(result)
else:
data = result
# Extract base models from response
base_models = set()
# Use ActiveBaseModel if available (recommended active models)
if "ActiveBaseModel" in data:
base_models.update(data["ActiveBaseModel"])
logger.info(f"Fetched {len(base_models)} models from ActiveBaseModel")
# Fallback to full BaseModel list
elif "BaseModel" in data:
base_models.update(data["BaseModel"])
logger.info(f"Fetched {len(base_models)} models from BaseModel")
else:
logger.warning("No base model data found in Civitai response")
return None
return base_models
except Exception as e:
logger.error(f"Error fetching from Civitai: {e}")
return None
def _update_cache(self, remote_models: Set[str]) -> None:
"""Update internal cache with fetched models.
Args:
remote_models: Set of base model names from API
"""
self._cache = {
"remote_models": sorted(remote_models),
"hardcoded_models": sorted(self._hardcoded_models),
}
self._cache_timestamp = datetime.now(timezone.utc)
logger.info(f"Cache updated with {len(remote_models)} remote models")
def _is_cache_valid(self) -> bool:
"""Check if current cache is valid (not expired).
Returns:
True if cache exists and is not expired
"""
if self._cache is None or self._cache_timestamp is None:
return False
age = (datetime.now(timezone.utc) - self._cache_timestamp).total_seconds()
return age <= self._cache_ttl
def _build_response(self, source: str) -> Dict[str, Any]:
"""Build response dictionary.
Args:
source: 'cache', 'api', or 'fallback'
Returns:
Response dictionary
"""
if source == "fallback" or self._cache is None:
# Use only hardcoded models
merged = sorted(self._hardcoded_models)
return {
"models": merged,
"source": source,
"last_updated": None,
"hardcoded_count": len(self._hardcoded_models),
"remote_count": 0,
"merged_count": len(merged),
}
# Merge hardcoded and remote models
remote_set = set(self._cache.get("remote_models", []))
merged = sorted(self._hardcoded_models | remote_set)
return {
"models": merged,
"source": source,
"last_updated": self._cache_timestamp.isoformat()
if self._cache_timestamp
else None,
"hardcoded_count": len(self._hardcoded_models),
"remote_count": len(remote_set),
"merged_count": len(merged),
}
def get_model_categories(self) -> Dict[str, List[str]]:
"""Get categorized base models.
Returns:
Dictionary mapping category names to lists of model names
"""
# Define category patterns
categories = {
"Stable Diffusion 1.x": ["SD 1.4", "SD 1.5", "SD 1.5 LCM", "SD 1.5 Hyper"],
"Stable Diffusion 2.x": ["SD 2.0", "SD 2.1"],
"Stable Diffusion 3.x": [
"SD 3",
"SD 3.5",
"SD 3.5 Medium",
"SD 3.5 Large",
"SD 3.5 Large Turbo",
],
"SDXL": ["SDXL 1.0", "SDXL Lightning", "SDXL Hyper"],
"Flux Models": [
"Flux.1 D",
"Flux.1 S",
"Flux.1 Krea",
"Flux.1 Kontext",
"Flux.2 D",
"Flux.2 Klein 9B",
"Flux.2 Klein 9B-base",
"Flux.2 Klein 4B",
"Flux.2 Klein 4B-base",
],
"Video Models": [
"SVD",
"LTXV",
"LTXV2",
"LTXV 2.3",
"CogVideoX",
"Mochi",
"Hunyuan Video",
"Wan Video",
"Wan Video 1.3B t2v",
"Wan Video 14B t2v",
"Wan Video 14B i2v 480p",
"Wan Video 14B i2v 720p",
"Wan Video 2.2 TI2V-5B",
"Wan Video 2.2 T2V-A14B",
"Wan Video 2.2 I2V-A14B",
"Wan Video 2.5 T2V",
"Wan Video 2.5 I2V",
],
"Other Models": [
"Illustrious",
"Pony",
"Pony V7",
"HiDream",
"Qwen",
"AuraFlow",
"Chroma",
"ZImageTurbo",
"ZImageBase",
"PixArt a",
"PixArt E",
"Hunyuan 1",
"Lumina",
"Kolors",
"NoobAI",
"Anima",
],
}
return categories
# Convenience function for getting the singleton instance
async def get_civitai_base_model_service() -> CivitaiBaseModelService:
"""Get the singleton instance of CivitaiBaseModelService."""
return await CivitaiBaseModelService.get_instance()
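# Usage sketch for the abbreviation generator above (outputs traced from the
# code as written; obtaining the instance assumes an async context):
#   service = await get_civitai_base_model_service()
#   service.generate_abbreviation("SD 1.5")             -> "SD1"   (special-case table)
#   service.generate_abbreviation("Pony")               -> "PONY"  (special-case table)
#   service.generate_abbreviation("Some New Model 3")   -> "SN3"   (first letters + version)
#   service.generate_abbreviation("")                   -> "OTH"   (fallback)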

View File

@@ -3,13 +3,22 @@ import copy
import logging
import os
from typing import Any, Optional, Dict, Tuple, List, Sequence
from .model_metadata_provider import CivitaiModelMetadataProvider, ModelMetadataProviderManager
from .connectivity_guard import (
OFFLINE_FRIENDLY_MESSAGE,
is_expected_offline_error,
is_offline_cooldown_error,
)
from .model_metadata_provider import (
CivitaiModelMetadataProvider,
ModelMetadataProviderManager,
)
from .downloader import get_downloader
from .errors import RateLimitError, ResourceNotFoundError
from ..utils.civitai_utils import resolve_license_payload
logger = logging.getLogger(__name__)
class CivitaiClient:
_instance = None
_lock = asyncio.Lock()
@@ -23,17 +32,22 @@ class CivitaiClient:
# Register this client as a metadata provider
provider_manager = await ModelMetadataProviderManager.get_instance()
provider_manager.register_provider('civitai', CivitaiModelMetadataProvider(cls._instance), True)
provider_manager.register_provider(
"civitai", CivitaiModelMetadataProvider(cls._instance), True
)
return cls._instance
def __init__(self):
# Check if already initialized for singleton pattern
if hasattr(self, '_initialized'):
if hasattr(self, "_initialized"):
return
self._initialized = True
self.base_url = "https://civitai.com/api/v1"
self.base_url = "https://civitai.red/api/v1"
def _build_image_info_url(self, image_id: str) -> str:
return f"{self.base_url}/images?imageId={image_id}&nsfw=X"
async def _make_request(
self,
@@ -56,6 +70,8 @@ class CivitaiClient:
if result.provider is None:
result.provider = "civitai_api"
raise result
if not success and is_offline_cooldown_error(result):
return False, OFFLINE_FRIENDLY_MESSAGE
return success, result
@staticmethod
@@ -76,7 +92,9 @@ class CivitaiClient:
if isinstance(meta, dict) and "comfy" in meta:
meta.pop("comfy", None)
async def download_file(self, url: str, save_dir: str, default_filename: str, progress_callback=None) -> Tuple[bool, str]:
async def download_file(
self, url: str, save_dir: str, default_filename: str, progress_callback=None
) -> Tuple[bool, str]:
"""Download file with resumable downloads and retry mechanism
Args:
@@ -97,34 +115,43 @@ class CivitaiClient:
save_path=save_path,
progress_callback=progress_callback,
use_auth=True, # Enable CivitAI authentication
allow_resume=True
allow_resume=True,
)
return success, result
async def get_model_by_hash(self, model_hash: str) -> Tuple[Optional[Dict], Optional[str]]:
async def get_model_by_hash(
self, model_hash: str
) -> Tuple[Optional[Dict], Optional[str]]:
try:
success, version = await self._make_request(
'GET',
"GET",
f"{self.base_url}/model-versions/by-hash/{model_hash}",
use_auth=True
use_auth=True,
)
if not success:
message = str(version)
if is_expected_offline_error(message):
return None, OFFLINE_FRIENDLY_MESSAGE
if "not found" in message.lower():
return None, "Model not found"
logger.error("Failed to fetch model info for %s: %s", model_hash[:10], message)
logger.error(
"Failed to fetch model info for %s: %s", model_hash[:10], message
)
return None, message
model_id = version.get('modelId')
if model_id:
model_data = await self._fetch_model_data(model_id)
if model_data:
self._enrich_version_with_model_data(version, model_data)
if isinstance(version, dict):
model_id = version.get("modelId")
if model_id:
model_data = await self._fetch_model_data(model_id)
if model_data:
self._enrich_version_with_model_data(version, model_data)
self._remove_comfy_metadata(version)
return version, None
self._remove_comfy_metadata(version)
return version, None
else:
return None, "Invalid response format"
except RateLimitError:
raise
except Exception as exc:
@@ -136,16 +163,19 @@ class CivitaiClient:
downloader = await get_downloader()
success, content, headers = await downloader.download_to_memory(
image_url,
use_auth=False # Preview images don't need auth
use_auth=False, # Preview images don't need auth
)
if success:
# Ensure directory exists
os.makedirs(os.path.dirname(save_path), exist_ok=True)
with open(save_path, 'wb') as f:
with open(save_path, "wb") as f:
f.write(content)
return True
return False
except Exception as e:
if is_expected_offline_error(str(e)):
logger.debug("Preview download skipped due to offline state.")
return False
logger.error(f"Download Error: {str(e)}")
return False
@@ -175,20 +205,23 @@ class CivitaiClient:
"""Get all versions of a model with local availability info"""
try:
success, result = await self._make_request(
'GET',
"GET",
f"{self.base_url}/models/{model_id}",
use_auth=True
use_auth=True,
)
if success:
# Also return model type along with versions
return {
'modelVersions': result.get('modelVersions', []),
'type': result.get('type', ''),
'name': result.get('name', '')
"modelVersions": result.get("modelVersions", []),
"type": result.get("type", ""),
"name": result.get("name", ""),
}
message = self._extract_error_message(result)
if message and 'not found' in message.lower():
if message and "not found" in message.lower():
raise ResourceNotFoundError(f"Resource not found for model {model_id}")
if is_expected_offline_error(message):
logger.info("Civitai request skipped: %s", OFFLINE_FRIENDLY_MESSAGE)
return None
if message:
raise RuntimeError(message)
return None
@@ -221,15 +254,15 @@ class CivitaiClient:
try:
query = ",".join(normalized_ids)
success, result = await self._make_request(
'GET',
"GET",
f"{self.base_url}/models",
use_auth=True,
params={'ids': query},
params={"ids": query},
)
if not success:
return None
items = result.get('items') if isinstance(result, dict) else None
items = result.get("items") if isinstance(result, dict) else None
if not isinstance(items, list):
return {}
@@ -237,19 +270,19 @@ class CivitaiClient:
for item in items:
if not isinstance(item, dict):
continue
model_id = item.get('id')
model_id = item.get("id")
try:
normalized_id = int(model_id)
except (TypeError, ValueError):
continue
payload[normalized_id] = {
'modelVersions': item.get('modelVersions', []),
'type': item.get('type', ''),
'name': item.get('name', ''),
'allowNoCredit': item.get('allowNoCredit'),
'allowCommercialUse': item.get('allowCommercialUse'),
'allowDerivatives': item.get('allowDerivatives'),
'allowDifferentLicense': item.get('allowDifferentLicense'),
"modelVersions": item.get("modelVersions", []),
"type": item.get("type", ""),
"name": item.get("name", ""),
"allowNoCredit": item.get("allowNoCredit"),
"allowCommercialUse": item.get("allowCommercialUse"),
"allowDerivatives": item.get("allowDerivatives"),
"allowDifferentLicense": item.get("allowDifferentLicense"),
}
return payload
except RateLimitError:
@@ -258,7 +291,9 @@ class CivitaiClient:
logger.error(f"Error fetching model versions in bulk: {exc}")
return None
async def get_model_version(self, model_id: int = None, version_id: int = None) -> Optional[Dict]:
async def get_model_version(
self, model_id: int = None, version_id: int = None
) -> Optional[Dict]:
"""Get specific model version with additional metadata."""
try:
if model_id is None and version_id is not None:
@@ -281,7 +316,7 @@ class CivitaiClient:
if version is None:
return None
model_id = version.get('modelId')
model_id = version.get("modelId")
if not model_id:
logger.error(f"No modelId found in version {version_id}")
return None
@@ -293,7 +328,9 @@ class CivitaiClient:
self._remove_comfy_metadata(version)
return version
async def _get_version_with_model_id(self, model_id: int, version_id: Optional[int]) -> Optional[Dict]:
async def _get_version_with_model_id(
self, model_id: int, version_id: Optional[int]
) -> Optional[Dict]:
model_data = await self._fetch_model_data(model_id)
if not model_data:
return None
@@ -302,8 +339,12 @@ class CivitaiClient:
if target_version is None:
return None
target_version_id = target_version.get('id')
version = await self._fetch_version_by_id(target_version_id) if target_version_id else None
target_version_id = target_version.get("id")
version = (
await self._fetch_version_by_id(target_version_id)
if target_version_id
else None
)
if version is None:
model_hash = self._extract_primary_model_hash(target_version)
@@ -315,7 +356,9 @@ class CivitaiClient:
)
if version is None:
version = self._build_version_from_model_data(target_version, model_id, model_data)
version = self._build_version_from_model_data(
target_version, model_id, model_data
)
self._enrich_version_with_model_data(version, model_data)
self._remove_comfy_metadata(version)
@@ -323,12 +366,14 @@ class CivitaiClient:
async def _fetch_model_data(self, model_id: int) -> Optional[Dict]:
success, data = await self._make_request(
'GET',
"GET",
f"{self.base_url}/models/{model_id}",
use_auth=True
use_auth=True,
)
if success:
return data
if is_expected_offline_error(data):
return None
logger.warning(f"Failed to fetch model data for model {model_id}")
return None
@@ -337,12 +382,14 @@ class CivitaiClient:
return None
success, version = await self._make_request(
'GET',
"GET",
f"{self.base_url}/model-versions/{version_id}",
use_auth=True
use_auth=True,
)
if success:
return version
if is_expected_offline_error(version):
return None
logger.warning(f"Failed to fetch version by id {version_id}")
return None
@@ -352,26 +399,29 @@ class CivitaiClient:
return None
success, version = await self._make_request(
'GET',
"GET",
f"{self.base_url}/model-versions/by-hash/{model_hash}",
use_auth=True
use_auth=True,
)
if success:
return version
if is_expected_offline_error(version):
return None
logger.warning(f"Failed to fetch version by hash {model_hash}")
return None
def _select_target_version(self, model_data: Dict, model_id: int, version_id: Optional[int]) -> Optional[Dict]:
model_versions = model_data.get('modelVersions', [])
def _select_target_version(
self, model_data: Dict, model_id: int, version_id: Optional[int]
) -> Optional[Dict]:
model_versions = model_data.get("modelVersions", [])
if not model_versions:
logger.warning(f"No model versions found for model {model_id}")
return None
if version_id is not None:
target_version = next(
(item for item in model_versions if item.get('id') == version_id),
None
(item for item in model_versions if item.get("id") == version_id), None
)
if target_version is None:
logger.warning(
@@ -383,41 +433,45 @@ class CivitaiClient:
return model_versions[0]
def _extract_primary_model_hash(self, version_entry: Dict) -> Optional[str]:
for file_info in version_entry.get('files', []):
if file_info.get('type') == 'Model' and file_info.get('primary'):
hashes = file_info.get('hashes', {})
model_hash = hashes.get('SHA256')
for file_info in version_entry.get("files", []):
if file_info.get("type") == "Model" and file_info.get("primary"):
hashes = file_info.get("hashes", {})
model_hash = hashes.get("SHA256")
if model_hash:
return model_hash
return None
def _build_version_from_model_data(self, version_entry: Dict, model_id: int, model_data: Dict) -> Dict:
def _build_version_from_model_data(
self, version_entry: Dict, model_id: int, model_data: Dict
) -> Dict:
version = copy.deepcopy(version_entry)
version.pop('index', None)
version['modelId'] = model_id
version['model'] = {
'name': model_data.get('name'),
'type': model_data.get('type'),
'nsfw': model_data.get('nsfw'),
'poi': model_data.get('poi')
version.pop("index", None)
version["modelId"] = model_id
version["model"] = {
"name": model_data.get("name"),
"type": model_data.get("type"),
"nsfw": model_data.get("nsfw"),
"poi": model_data.get("poi"),
}
return version
def _enrich_version_with_model_data(self, version: Dict, model_data: Dict) -> None:
model_info = version.get('model')
model_info = version.get("model")
if not isinstance(model_info, dict):
model_info = {}
version['model'] = model_info
version["model"] = model_info
model_info['description'] = model_data.get("description")
model_info['tags'] = model_data.get("tags", [])
version['creator'] = model_data.get("creator")
model_info["description"] = model_data.get("description")
model_info["tags"] = model_data.get("tags", [])
version["creator"] = model_data.get("creator")
license_payload = resolve_license_payload(model_data)
for field, value in license_payload.items():
model_info[field] = value
async def get_model_version_info(self, version_id: str) -> Tuple[Optional[Dict], Optional[str]]:
async def get_model_version_info(
self, version_id: str
) -> Tuple[Optional[Dict], Optional[str]]:
"""Fetch model version metadata from Civitai
Args:
@@ -431,19 +485,17 @@ class CivitaiClient:
try:
url = f"{self.base_url}/model-versions/{version_id}"
logger.debug(f"Resolving DNS for model version info: {url}")
success, result = await self._make_request(
'GET',
url,
use_auth=True
)
logger.debug("Resolving Civitai model version info: %s", url)
success, result = await self._make_request("GET", url, use_auth=True)
if success:
logger.debug(f"Successfully fetched model version info for: {version_id}")
logger.debug("Successfully fetched model version info for: %s", version_id)
self._remove_comfy_metadata(result)
return result, None
# Handle specific error cases
if is_expected_offline_error(result):
return None, OFFLINE_FRIENDLY_MESSAGE
if "not found" in str(result):
error_msg = f"Model not found"
logger.warning(f"Model version not found: {version_id} - {error_msg}")
@@ -459,36 +511,67 @@ class CivitaiClient:
logger.error(error_msg)
return None, error_msg
async def get_image_info(self, image_id: str) -> Optional[Dict]:
async def get_image_info(
self, image_id: str, source_url: str | None = None
) -> Optional[Dict]:
"""Fetch image information from Civitai API
Args:
image_id: The Civitai image ID
source_url: Original image page URL. Accepted for caller compatibility;
API requests always target ``civitai.red``.
Returns:
Optional[Dict]: The image data or None if not found
"""
try:
url = f"{self.base_url}/images?imageId={image_id}&nsfw=X"
requested_id = int(image_id)
url = self._build_image_info_url(image_id)
success, result = await self._make_request("GET", url, use_auth=True)
logger.debug(f"Fetching image info for ID: {image_id}")
success, result = await self._make_request(
'GET',
url,
use_auth=True
)
if success:
if result and "items" in result and len(result["items"]) > 0:
logger.debug(f"Successfully fetched image info for ID: {image_id}")
return result["items"][0]
logger.warning(f"No image found with ID: {image_id}")
if not success:
if is_expected_offline_error(result):
return None
logger.error(
"Failed to fetch image info for ID %s from civitai.red: %s",
image_id,
result,
)
return None
logger.error(f"Failed to fetch image info for ID: {image_id}: {result}")
if result and "items" in result and isinstance(result["items"], list):
items = result["items"]
for item in items:
if isinstance(item, dict) and item.get("id") == requested_id:
logger.debug(
"Successfully fetched image info for ID %s from civitai.red",
image_id,
)
return item
returned_ids = [
item.get("id")
for item in items
if isinstance(item, dict) and "id" in item
]
logger.warning(
"CivitAI API returned no matching image for requested ID %s from civitai.red. Returned %d item(s) with IDs: %s. This may indicate the image was deleted, hidden, or there is a database lag.",
image_id,
len(items),
returned_ids,
)
return None
logger.warning("No image found with ID: %s", image_id)
return None
except RateLimitError:
raise
except ValueError as e:
error_msg = f"Invalid image ID format: {image_id}"
logger.error(error_msg)
return None
except Exception as e:
error_msg = f"Error fetching image info: {e}"
logger.error(error_msg)
@@ -500,14 +583,17 @@ class CivitaiClient:
return None
try:
url = f"{self.base_url}/models?username={username}"
success, result = await self._make_request(
'GET',
url,
use_auth=True
"GET",
f"{self.base_url}/models",
use_auth=True,
params={"username": username},
)
if not success:
if is_expected_offline_error(result):
logger.info("User model fetch skipped: %s", OFFLINE_FRIENDLY_MESSAGE)
return None
logger.error("Failed to fetch models for %s: %s", username, result)
return None
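# Caller-side sketch of the offline-aware error handling these hunks add
# (illustrative; OFFLINE_FRIENDLY_MESSAGE comes from connectivity_guard, the
# caller function itself is hypothetical):
async def _sketch_lookup(client: "CivitaiClient", model_hash: str):
    version, error = await client.get_model_by_hash(model_hash)
    if error == OFFLINE_FRIENDLY_MESSAGE:
        return None  # connectivity guard cooldown: skip quietly, retry later
    if error is not None:
        raise RuntimeError(error)
    return version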

View File

@@ -0,0 +1,204 @@
"""In-memory connectivity guard to suppress repeated network retries when offline."""
from __future__ import annotations
import asyncio
import errno
import logging
import socket
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Any
import aiohttp
logger = logging.getLogger(__name__)
OFFLINE_COOLDOWN_ERROR = "offline_cooldown"
OFFLINE_FRIENDLY_MESSAGE = "Network offline, will retry automatically later"
def is_offline_cooldown_error(value: Any) -> bool:
"""Return True when a response payload represents guard short-circuit."""
return isinstance(value, str) and value == OFFLINE_COOLDOWN_ERROR
def is_expected_offline_error(value: Any) -> bool:
"""Return True when payload is an expected offline-related result."""
if is_offline_cooldown_error(value):
return True
if not isinstance(value, str):
return False
normalized = value.lower()
return "network offline" in normalized or "offline" in normalized
class ConnectivityGuard:
"""Tracks network failures and gates outbound requests during cooldown."""
_instance: "ConnectivityGuard | None" = None
_instance_lock = asyncio.Lock()
@classmethod
async def get_instance(cls) -> "ConnectivityGuard":
async with cls._instance_lock:
if cls._instance is None:
cls._instance = cls()
return cls._instance
def __init__(self) -> None:
if hasattr(self, "_initialized"):
return
self._initialized = True
self._default_destination = "__global__"
self._destination_states: dict[str, _DestinationState] = {
self._default_destination: _DestinationState()
}
self.base_backoff_seconds = 30
self.max_backoff_seconds = 300
self.failure_threshold = 3
@property
def online(self) -> bool:
return self._state_for_destination(None).online
@online.setter
def online(self, value: bool) -> None:
self._state_for_destination(None).online = value
@property
def failure_count(self) -> int:
return self._state_for_destination(None).failure_count
@failure_count.setter
def failure_count(self, value: int) -> None:
self._state_for_destination(None).failure_count = value
@property
def cooldown_until(self) -> datetime | None:
return self._state_for_destination(None).cooldown_until
@cooldown_until.setter
def cooldown_until(self, value: datetime | None) -> None:
self._state_for_destination(None).cooldown_until = value
def _now(self) -> datetime:
return datetime.now()
def _normalize_destination(self, destination: str | None) -> str:
if destination is None or not destination.strip():
return self._default_destination
return destination.lower().strip()
def _state_for_destination(self, destination: str | None) -> "_DestinationState":
destination_key = self._normalize_destination(destination)
if destination_key not in self._destination_states:
self._destination_states[destination_key] = _DestinationState()
return self._destination_states[destination_key]
def in_cooldown(self, destination: str | None = None) -> bool:
state = self._state_for_destination(destination)
if state.cooldown_until is None:
return False
return self._now() < state.cooldown_until
def cooldown_remaining_seconds(self, destination: str | None = None) -> float:
state = self._state_for_destination(destination)
if state.cooldown_until is None:
return 0.0
return max(0.0, (state.cooldown_until - self._now()).total_seconds())
def should_block_request(self, destination: str | None = None) -> bool:
return self.in_cooldown(destination)
def register_success(self, destination: str | None = None) -> None:
destination_key = self._normalize_destination(destination)
state = self._state_for_destination(destination_key)
was_offline = (not state.online) or state.cooldown_until is not None
state.online = True
state.failure_count = 0
state.cooldown_until = None
if was_offline:
logger.info(
"Connectivity restored for destination '%s'; requests resumed.",
destination_key,
)
def register_network_failure(
self, exc: Exception, destination: str | None = None
) -> None:
destination_key = self._normalize_destination(destination)
state = self._state_for_destination(destination_key)
state.online = False
state.failure_count += 1
if state.failure_count < self.failure_threshold:
logger.debug(
"Network failure tracked for destination '%s' (%d/%d): %s",
destination_key,
state.failure_count,
self.failure_threshold,
exc,
)
return
retry_step = state.failure_count - self.failure_threshold
backoff = min(
self.max_backoff_seconds,
self.base_backoff_seconds * (2**retry_step),
)
should_log_warning = not self.in_cooldown(destination_key)
state.cooldown_until = self._now() + timedelta(seconds=backoff)
if should_log_warning:
logger.warning(
"Connectivity offline for destination '%s'; enter cooldown for %ss after %d network failures.",
destination_key,
int(backoff),
state.failure_count,
)
else:
logger.debug(
"Cooldown still active for destination '%s'; failure_count=%d, backoff=%ss.",
destination_key,
state.failure_count,
int(backoff),
)
@staticmethod
def is_network_unreachable_error(exc: Exception) -> bool:
"""Return whether the exception should count as connectivity failure."""
if isinstance(exc, asyncio.CancelledError):
return False
if isinstance(
exc,
(
asyncio.TimeoutError,
TimeoutError,
ConnectionRefusedError,
socket.gaierror,
aiohttp.ServerTimeoutError,
aiohttp.ConnectionTimeoutError,
aiohttp.ClientConnectorError,
aiohttp.ClientConnectionError,
),
):
return True
if isinstance(exc, OSError) and exc.errno in {
errno.ENETUNREACH,
errno.EHOSTUNREACH,
errno.ETIMEDOUT,
errno.ECONNREFUSED,
}:
return True
return False
@dataclass
class _DestinationState:
online: bool = True
failure_count: int = 0
cooldown_until: datetime | None = None
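# Worked example of the cooldown backoff above with the defaults
# (base 30s, threshold 3 failures, cap 300s):
#   failure_count 3 -> 30s, 4 -> 60s, 5 -> 120s, 6 -> 240s, 7+ -> 300s (capped)
# Minimal sketch driving the guard (illustrative, not from this diff):
async def _sketch_guard() -> None:
    guard = await ConnectivityGuard.get_instance()
    for _ in range(4):
        guard.register_network_failure(ConnectionRefusedError(), destination="civitai.red")
    assert guard.in_cooldown("civitai.red")                  # cooldown after the 3rd failure
    print(guard.cooldown_remaining_seconds("civitai.red"))   # just under 60s after the 4th
    guard.register_success("civitai.red")
    assert not guard.in_cooldown("civitai.red")              # success clears the cooldown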

View File

@@ -7,11 +7,13 @@ with category filtering and enriched results including post counts.
from __future__ import annotations
import logging
import re
from typing import List, Dict, Any, Optional
logger = logging.getLogger(__name__)
_EMBEDDED_COMMAND_PATTERN = re.compile(r"\s/\w")
class CustomWordsService:
"""Service for autocomplete via TagFTSIndex.
@@ -49,6 +51,7 @@ class CustomWordsService:
if self._tag_index is None:
try:
from .tag_fts_index import get_tag_fts_index
self._tag_index = get_tag_fts_index()
except Exception as e:
logger.warning(f"Failed to initialize TagFTSIndex: {e}")
@@ -59,14 +62,16 @@ class CustomWordsService:
self,
search_term: str,
limit: int = 20,
offset: int = 0,
categories: Optional[List[int]] = None,
enriched: bool = False
enriched: bool = False,
) -> List[Dict[str, Any]]:
"""Search tags using TagFTSIndex with category filtering.
Args:
search_term: The search term to match against.
limit: Maximum number of results to return.
offset: Number of results to skip.
categories: Optional list of category IDs to filter by.
enriched: If True, always return enriched results with category
and post_count (default behavior now).
@@ -74,10 +79,28 @@ class CustomWordsService:
Returns:
List of dicts with tag_name, category, and post_count.
"""
normalized_search = search_term.strip()
if not normalized_search:
return []
# Prompt widgets should only send the active token, but guard against
# accidental full-prompt queries reaching the FTS path.
if (
"__" in normalized_search
or "," in normalized_search
or ">" in normalized_search
or "\n" in normalized_search
or "\r" in normalized_search
or _EMBEDDED_COMMAND_PATTERN.search(normalized_search)
):
logger.debug("Skipping prompt-like custom words query: %s", normalized_search)
return []
tag_index = self._get_tag_index()
if tag_index is not None:
results = tag_index.search(search_term, categories=categories, limit=limit)
return results
return tag_index.search(
normalized_search, categories=categories, limit=limit, offset=offset
)
logger.debug("TagFTSIndex not available, returning empty results")
return []
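# Illustrative queries against the guard added above (behaviour follows from
# the checks as written; service wiring is assumed):
#   "blue"               -> searched via TagFTSIndex
#   "1girl, blue hair"   -> skipped (contains ",")
#   "__character__"      -> skipped (contains "__")
#   "<lora:detail:0.8>"  -> skipped (contains ">")
#   "portrait /help"     -> skipped (whitespace followed by "/word")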

File diff suppressed because it is too large.

View File

@@ -0,0 +1,320 @@
from __future__ import annotations
import asyncio
import logging
import os
import sqlite3
import time
from typing import Iterable, Mapping, Optional, Sequence
from ..utils.cache_paths import get_cache_base_dir
from .settings_manager import get_settings_manager
logger = logging.getLogger(__name__)
def _normalize_model_type(model_type: str | None) -> Optional[str]:
if not isinstance(model_type, str):
return None
normalized = model_type.strip().lower()
if normalized in {"lora", "locon", "dora"}:
return "lora"
if normalized == "checkpoint":
return "checkpoint"
if normalized in {"embedding", "textualinversion"}:
return "embedding"
return None
def _normalize_int(value) -> Optional[int]:
try:
if value is None:
return None
return int(value)
except (TypeError, ValueError):
return None
def _resolve_database_path() -> str:
base_dir = get_cache_base_dir(create=True)
history_dir = os.path.join(base_dir, "download_history")
os.makedirs(history_dir, exist_ok=True)
return os.path.join(history_dir, "downloaded_versions.sqlite")
class DownloadedVersionHistoryService:
_SCHEMA = """
CREATE TABLE IF NOT EXISTS downloaded_model_versions (
model_type TEXT NOT NULL,
version_id INTEGER NOT NULL,
model_id INTEGER,
first_seen_at REAL NOT NULL,
last_seen_at REAL NOT NULL,
source TEXT NOT NULL,
last_file_path TEXT,
last_library_name TEXT,
is_deleted_override INTEGER NOT NULL DEFAULT 0,
PRIMARY KEY (model_type, version_id)
);
CREATE INDEX IF NOT EXISTS idx_downloaded_model_versions_model
ON downloaded_model_versions(model_type, model_id);
"""
def __init__(self, db_path: str | None = None, *, settings_manager=None) -> None:
self._db_path = db_path or _resolve_database_path()
self._settings = settings_manager or get_settings_manager()
self._lock = asyncio.Lock()
self._conn: sqlite3.Connection | None = None
self._schema_initialized = False
self._ensure_directory()
self._initialize_schema()
def _ensure_directory(self) -> None:
directory = os.path.dirname(self._db_path)
if directory:
os.makedirs(directory, exist_ok=True)
def _connect(self) -> sqlite3.Connection:
conn = sqlite3.connect(self._db_path, check_same_thread=False)
conn.row_factory = sqlite3.Row
return conn
def _get_conn(self) -> sqlite3.Connection:
if self._conn is None:
self._conn = sqlite3.connect(self._db_path, check_same_thread=False)
self._conn.row_factory = sqlite3.Row
return self._conn
def _initialize_schema(self) -> None:
if self._schema_initialized:
return
with self._connect() as conn:
conn.executescript(self._SCHEMA)
conn.commit()
self._schema_initialized = True
def get_database_path(self) -> str:
return self._db_path
def _get_active_library_name(self) -> str | None:
try:
value = self._settings.get_active_library_name()
except Exception:
return None
return value or None
async def mark_downloaded(
self,
model_type: str,
version_id: int,
*,
model_id: int | None = None,
source: str = "manual",
file_path: str | None = None,
library_name: str | None = None,
) -> None:
normalized_type = _normalize_model_type(model_type)
normalized_version_id = _normalize_int(version_id)
normalized_model_id = _normalize_int(model_id)
if normalized_type is None or normalized_version_id is None:
return
active_library_name = library_name or self._get_active_library_name()
timestamp = time.time()
async with self._lock:
conn = self._get_conn()
conn.execute(
"""
INSERT INTO downloaded_model_versions (
model_type, version_id, model_id, first_seen_at, last_seen_at,
source, last_file_path, last_library_name, is_deleted_override
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, 0)
ON CONFLICT(model_type, version_id) DO UPDATE SET
model_id = COALESCE(excluded.model_id, downloaded_model_versions.model_id),
last_seen_at = excluded.last_seen_at,
source = excluded.source,
last_file_path = COALESCE(excluded.last_file_path, downloaded_model_versions.last_file_path),
last_library_name = COALESCE(excluded.last_library_name, downloaded_model_versions.last_library_name),
is_deleted_override = 0
""",
(
normalized_type,
normalized_version_id,
normalized_model_id,
timestamp,
timestamp,
source,
file_path,
active_library_name,
),
)
conn.commit()
async def mark_downloaded_bulk(
self,
model_type: str,
records: Sequence[Mapping[str, object]],
*,
source: str = "scan",
library_name: str | None = None,
) -> None:
normalized_type = _normalize_model_type(model_type)
if normalized_type is None or not records:
return
timestamp = time.time()
active_library_name = library_name or self._get_active_library_name()
payload: list[tuple[object, ...]] = []
for record in records:
version_id = _normalize_int(record.get("version_id"))
if version_id is None:
continue
payload.append(
(
normalized_type,
version_id,
_normalize_int(record.get("model_id")),
timestamp,
timestamp,
source,
record.get("file_path"),
active_library_name,
)
)
if not payload:
return
async with self._lock:
conn = self._get_conn()
conn.executemany(
"""
INSERT INTO downloaded_model_versions (
model_type, version_id, model_id, first_seen_at, last_seen_at,
source, last_file_path, last_library_name, is_deleted_override
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, 0)
ON CONFLICT(model_type, version_id) DO UPDATE SET
model_id = COALESCE(excluded.model_id, downloaded_model_versions.model_id),
last_seen_at = excluded.last_seen_at,
source = excluded.source,
last_file_path = COALESCE(excluded.last_file_path, downloaded_model_versions.last_file_path),
last_library_name = COALESCE(excluded.last_library_name, downloaded_model_versions.last_library_name),
is_deleted_override = 0
""",
payload,
)
conn.commit()
async def mark_not_downloaded(self, model_type: str, version_id: int) -> None:
normalized_type = _normalize_model_type(model_type)
normalized_version_id = _normalize_int(version_id)
if normalized_type is None or normalized_version_id is None:
return
timestamp = time.time()
async with self._lock:
conn = self._get_conn()
conn.execute(
"""
INSERT INTO downloaded_model_versions (
model_type, version_id, model_id, first_seen_at, last_seen_at,
source, last_file_path, last_library_name, is_deleted_override
) VALUES (?, ?, NULL, ?, ?, 'manual', NULL, ?, 1)
ON CONFLICT(model_type, version_id) DO UPDATE SET
last_seen_at = excluded.last_seen_at,
source = excluded.source,
last_library_name = COALESCE(excluded.last_library_name, downloaded_model_versions.last_library_name),
is_deleted_override = 1
""",
(
normalized_type,
normalized_version_id,
timestamp,
timestamp,
self._get_active_library_name(),
),
)
conn.commit()
async def has_been_downloaded(self, model_type: str, version_id: int) -> bool:
normalized_type = _normalize_model_type(model_type)
normalized_version_id = _normalize_int(version_id)
if normalized_type is None or normalized_version_id is None:
return False
async with self._lock:
conn = self._get_conn()
row = conn.execute(
"""
SELECT is_deleted_override
FROM downloaded_model_versions
WHERE model_type = ? AND version_id = ?
""",
(normalized_type, normalized_version_id),
).fetchone()
return bool(row) and not bool(row["is_deleted_override"])
async def get_downloaded_version_ids(
self, model_type: str, model_id: int
) -> list[int]:
normalized_type = _normalize_model_type(model_type)
normalized_model_id = _normalize_int(model_id)
if normalized_type is None or normalized_model_id is None:
return []
async with self._lock:
conn = self._get_conn()
rows = conn.execute(
"""
SELECT version_id
FROM downloaded_model_versions
WHERE model_type = ? AND model_id = ? AND is_deleted_override = 0
ORDER BY version_id ASC
""",
(normalized_type, normalized_model_id),
).fetchall()
return [int(row["version_id"]) for row in rows]
async def get_downloaded_version_ids_bulk(
self, model_type: str, model_ids: Iterable[int]
) -> dict[int, set[int]]:
normalized_type = _normalize_model_type(model_type)
if normalized_type is None:
return {}
normalized_model_ids = sorted(
{
value
for value in (_normalize_int(model_id) for model_id in model_ids)
if value is not None
}
)
if not normalized_model_ids:
return {}
placeholders = ", ".join(["?"] * len(normalized_model_ids))
params: list[object] = [normalized_type, *normalized_model_ids]
async with self._lock:
conn = self._get_conn()
rows = conn.execute(
f"""
SELECT model_id, version_id
FROM downloaded_model_versions
WHERE model_type = ?
AND model_id IN ({placeholders})
AND is_deleted_override = 0
""",
params,
).fetchall()
result: dict[int, set[int]] = {}
for row in rows:
model_id = _normalize_int(row["model_id"])
version_id = _normalize_int(row["version_id"])
if model_id is None or version_id is None:
continue
result.setdefault(model_id, set()).add(version_id)
return result
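
A minimal sketch of the persistent-connection pattern this service relies on: open sqlite3 once with check_same_thread=False and reuse the handle under an asyncio.Lock instead of paying a fresh connect() per query. TinyHistory and its schema are illustrative only.

import asyncio
import sqlite3

# Illustrative miniature of the pattern: one connection created lazily, reused for
# every query, serialized by an asyncio.Lock (check_same_thread=False as above).
class TinyHistory:
    def __init__(self, path: str = ":memory:") -> None:
        self._lock = asyncio.Lock()
        self._conn = sqlite3.connect(path, check_same_thread=False)
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS seen (version_id INTEGER PRIMARY KEY)"
        )

    async def mark(self, version_id: int) -> None:
        async with self._lock:
            self._conn.execute(
                "INSERT OR IGNORE INTO seen (version_id) VALUES (?)", (version_id,)
            )
            self._conn.commit()

    async def has(self, version_id: int) -> bool:
        async with self._lock:
            row = self._conn.execute(
                "SELECT 1 FROM seen WHERE version_id = ?", (version_id,)
            ).fetchone()
            return row is not None

async def _demo() -> None:
    history = TinyHistory()
    await history.mark(12345)
    print(await history.has(12345))  # True

asyncio.run(_demo())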

View File

@@ -18,8 +18,14 @@ from collections import deque
from dataclasses import dataclass
from datetime import datetime, timedelta
from email.utils import parsedate_to_datetime
from urllib.parse import urlparse
from typing import Optional, Dict, Tuple, Callable, Union, Awaitable
from ..services.settings_manager import get_settings_manager
from .connectivity_guard import (
OFFLINE_COOLDOWN_ERROR,
OFFLINE_FRIENDLY_MESSAGE,
ConnectivityGuard,
)
from .errors import RateLimitError
logger = logging.getLogger(__name__)
@@ -44,7 +50,9 @@ class DownloadStreamControl:
self._event.set()
self._reconnect_requested = False
self.last_progress_timestamp: Optional[float] = None
self.stall_timeout: float = float(stall_timeout) if stall_timeout is not None else 120.0
self.stall_timeout: float = (
float(stall_timeout) if stall_timeout is not None else 120.0
)
def is_set(self) -> bool:
return self._event.is_set()
@@ -85,7 +93,9 @@ class DownloadStreamControl:
self.last_progress_timestamp = timestamp or datetime.now().timestamp()
self._reconnect_requested = False
def time_since_last_progress(self, *, now: Optional[float] = None) -> Optional[float]:
def time_since_last_progress(
self, *, now: Optional[float] = None
) -> Optional[float]:
if self.last_progress_timestamp is None:
return None
reference = now if now is not None else datetime.now().timestamp()
@@ -120,7 +130,7 @@ class Downloader:
def __init__(self):
"""Initialize the downloader with optimal settings"""
# Check if already initialized for singleton pattern
if hasattr(self, '_initialized'):
if hasattr(self, "_initialized"):
return
self._initialized = True
@@ -131,18 +141,20 @@ class Downloader:
self._session_lock = asyncio.Lock()
# Configuration
self.chunk_size = 4 * 1024 * 1024 # 4MB chunks for better throughput
self.max_retries = 5
self.chunk_size = (
16 * 1024 * 1024
) # 16MB chunks to balance I/O reduction and memory usage
self.max_retries = self._resolve_max_retries()
self.base_delay = 2.0 # Base delay for exponential backoff
self.session_timeout = 300 # 5 minutes
self.stall_timeout = self._resolve_stall_timeout()
# Default headers
self.default_headers = {
'User-Agent': 'ComfyUI-LoRA-Manager/1.0',
"User-Agent": "ComfyUI-LoRA-Manager/1.0",
# Explicitly request uncompressed payloads so aiohttp doesn't need optional
# decoders (e.g. zstandard) that may be missing in runtime environments.
'Accept-Encoding': 'identity',
"Accept-Encoding": "identity",
}
@property
@@ -158,7 +170,7 @@ class Downloader:
@property
def proxy_url(self) -> Optional[str]:
"""Get the current proxy URL (initialize if needed)"""
if not hasattr(self, '_proxy_url'):
if not hasattr(self, "_proxy_url"):
self._proxy_url = None
return self._proxy_url
@@ -169,14 +181,14 @@ class Downloader:
try:
settings_manager = get_settings_manager()
settings_timeout = settings_manager.get('download_stall_timeout_seconds')
settings_timeout = settings_manager.get("download_stall_timeout_seconds")
except Exception as exc: # pragma: no cover - defensive guard
logger.debug("Failed to read stall timeout from settings: %s", exc)
raw_value = (
settings_timeout
if settings_timeout not in (None, "")
else os.environ.get('COMFYUI_DOWNLOAD_STALL_TIMEOUT')
else os.environ.get("COMFYUI_DOWNLOAD_STALL_TIMEOUT")
)
try:
@@ -186,16 +198,30 @@ class Downloader:
return max(30.0, timeout_value)
def _resolve_max_retries(self) -> int:
"""Determine max retry count from environment while preserving defaults."""
default_retries = 5
raw_value = os.environ.get("COMFYUI_DOWNLOAD_MAX_RETRIES")
try:
retries = int(raw_value)
except (TypeError, ValueError):
retries = default_retries
return max(0, retries)
def _should_refresh_session(self) -> bool:
"""Check if session should be refreshed"""
if self._session is None:
return True
if not hasattr(self, '_session_created_at') or self._session_created_at is None:
if not hasattr(self, "_session_created_at") or self._session_created_at is None:
return True
# Refresh if session is older than timeout
if (datetime.now() - self._session_created_at).total_seconds() > self.session_timeout:
if (
datetime.now() - self._session_created_at
).total_seconds() > self.session_timeout:
return True
return False
@@ -209,7 +235,7 @@ class Downloader:
if self._session is not None:
try:
await self._session.close()
except Exception as e: # pragma: no cover
except Exception as e: # pragma: no cover
logger.warning(f"Error closing previous session: {e}")
finally:
self._session = None
@@ -217,12 +243,12 @@ class Downloader:
# Check for app-level proxy settings
proxy_url = None
settings_manager = get_settings_manager()
if settings_manager.get('proxy_enabled', False):
proxy_host = settings_manager.get('proxy_host', '').strip()
proxy_port = settings_manager.get('proxy_port', '').strip()
proxy_type = settings_manager.get('proxy_type', 'http').lower()
proxy_username = settings_manager.get('proxy_username', '').strip()
proxy_password = settings_manager.get('proxy_password', '').strip()
if settings_manager.get("proxy_enabled", False):
proxy_host = settings_manager.get("proxy_host", "").strip()
proxy_port = settings_manager.get("proxy_port", "").strip()
proxy_type = settings_manager.get("proxy_type", "http").lower()
proxy_username = settings_manager.get("proxy_username", "").strip()
proxy_password = settings_manager.get("proxy_password", "").strip()
if proxy_host and proxy_port:
# Build proxy URL
@@ -231,37 +257,46 @@ class Downloader:
else:
proxy_url = f"{proxy_type}://{proxy_host}:{proxy_port}"
logger.debug(f"Using app-level proxy: {proxy_type}://{proxy_host}:{proxy_port}")
logger.debug(
f"Using app-level proxy: {proxy_type}://{proxy_host}:{proxy_port}"
)
logger.debug("Proxy mode: app-level proxy is active.")
else:
logger.debug("Proxy mode: system-level proxy (trust_env) will be used if configured in environment.")
logger.debug(
"Proxy mode: system-level proxy (trust_env) will be used if configured in environment."
)
# Optimize TCP connection parameters
connector = aiohttp.TCPConnector(
ssl=True,
limit=8, # Concurrent connections
ttl_dns_cache=300, # DNS cache timeout
force_close=False, # Keep connections for reuse
enable_cleanup_closed=True
enable_cleanup_closed=True,
)
# Configure timeout parameters
timeout = aiohttp.ClientTimeout(
total=None, # No total timeout for large downloads
connect=60, # Connection timeout
sock_read=300 # 5 minute socket read timeout
sock_read=300, # 5 minute socket read timeout
)
self._session = aiohttp.ClientSession(
connector=connector,
trust_env=proxy_url is None, # Only use system proxy if no app-level proxy is set
timeout=timeout
trust_env=proxy_url
is None, # Only use system proxy if no app-level proxy is set
timeout=timeout,
)
# Store proxy URL for use in requests
self._proxy_url = proxy_url
self._session_created_at = datetime.now()
logger.debug("Created new HTTP session with proxy settings. App-level proxy: %s, System-level proxy (trust_env): %s", bool(proxy_url), proxy_url is None)
logger.debug(
"Created new HTTP session with proxy settings. App-level proxy: %s, System-level proxy (trust_env): %s",
bool(proxy_url),
proxy_url is None,
)
def _get_auth_headers(self, use_auth: bool = False) -> Dict[str, str]:
"""Get headers with optional authentication"""
@@ -270,10 +305,10 @@ class Downloader:
if use_auth:
# Add CivitAI API key if available
settings_manager = get_settings_manager()
api_key = settings_manager.get('civitai_api_key')
api_key = settings_manager.get("civitai_api_key")
if api_key:
headers['Authorization'] = f'Bearer {api_key}'
headers['Content-Type'] = 'application/json'
headers["Authorization"] = f"Bearer {api_key}"
headers["Content-Type"] = "application/json"
return headers
@@ -303,7 +338,7 @@ class Downloader:
Tuple[bool, str]: (success, save_path or error message)
"""
retry_count = 0
part_path = save_path + '.part' if allow_resume else save_path
part_path = save_path + ".part" if allow_resume else save_path
# Prepare headers
headers = self._get_auth_headers(use_auth)
@@ -317,56 +352,95 @@ class Downloader:
logger.info(f"Resuming download from offset {resume_offset} bytes")
total_size = 0
range_redirect_retry_urls: set[str] = set()
while retry_count <= self.max_retries:
try:
session = await self.session
# Debug log for proxy mode at request time
if self.proxy_url:
logger.debug(f"[download_file] Using app-level proxy: {self.proxy_url}")
logger.debug(
f"[download_file] Using app-level proxy: {self.proxy_url}"
)
else:
logger.debug("[download_file] Using system-level proxy (trust_env) if configured.")
logger.debug(
"[download_file] Using system-level proxy (trust_env) if configured."
)
# Add Range header for resume if we have partial data
request_headers = headers.copy()
if allow_resume and resume_offset > 0:
request_headers['Range'] = f'bytes={resume_offset}-'
request_headers["Range"] = f"bytes={resume_offset}-"
# Disable compression for better chunked downloads
request_headers['Accept-Encoding'] = 'identity'
request_headers["Accept-Encoding"] = "identity"
logger.debug(f"Download attempt {retry_count + 1}/{self.max_retries + 1} from: {url}")
logger.debug(
f"Download attempt {retry_count + 1}/{self.max_retries + 1} from: {url}"
)
if resume_offset > 0:
logger.debug(f"Requesting range from byte {resume_offset}")
async with session.get(url, headers=request_headers, allow_redirects=True, proxy=self.proxy_url) as response:
async with session.get(
url,
headers=request_headers,
allow_redirects=True,
proxy=self.proxy_url,
) as response:
# Handle different response codes
if response.status == 200:
# Full content response
if resume_offset > 0:
redirected_url = str(response.url)
if (
allow_resume
and response.history
and redirected_url
and redirected_url != url
and redirected_url not in range_redirect_retry_urls
):
range_redirect_retry_urls.add(redirected_url)
logger.info(
"Range request was not honored after redirect; retrying final URL directly: %s",
redirected_url,
)
url = redirected_url
response.release()
continue
# Server doesn't support ranges, restart from beginning
logger.warning("Server doesn't support range requests, restarting download")
logger.warning(
"Server doesn't support range requests, restarting download"
)
resume_offset = 0
if os.path.exists(part_path):
os.remove(part_path)
elif response.status == 206:
# Partial content response (resume successful)
content_range = response.headers.get('Content-Range')
content_range = response.headers.get("Content-Range")
if content_range:
# Parse total size from Content-Range header (e.g., "bytes 1024-2047/2048")
range_parts = content_range.split('/')
range_parts = content_range.split("/")
if len(range_parts) == 2:
total_size = int(range_parts[1])
logger.info(f"Successfully resumed download from byte {resume_offset}")
logger.info(
f"Successfully resumed download from byte {resume_offset}"
)
elif response.status == 416:
# Range not satisfiable - file might be complete or corrupted
if allow_resume and os.path.exists(part_path):
part_size = os.path.getsize(part_path)
logger.warning(f"Range not satisfiable. Part file size: {part_size}")
logger.warning(
f"Range not satisfiable. Part file size: {part_size}"
)
# Try to get actual file size
head_response = await session.head(url, headers=headers, proxy=self.proxy_url)
head_response = await session.head(
url, headers=headers, proxy=self.proxy_url
)
if head_response.status == 200:
actual_size = int(head_response.headers.get('content-length', 0))
actual_size = int(
head_response.headers.get("content-length", 0)
)
if part_size == actual_size:
# File is complete, just rename it
if allow_resume:
@@ -388,21 +462,36 @@ class Downloader:
resume_offset = 0
continue
elif response.status == 401:
logger.warning(f"Unauthorized access to resource: {url} (Status 401)")
return False, "Invalid or missing API key, or early access restriction."
logger.warning(
f"Unauthorized access to resource: {url} (Status 401)"
)
return (
False,
"Invalid or missing API key, or early access restriction.",
)
elif response.status == 403:
logger.warning(f"Forbidden access to resource: {url} (Status 403)")
return False, "Access forbidden: You don't have permission to download this file."
logger.warning(
f"Forbidden access to resource: {url} (Status 403)"
)
return (
False,
"Access forbidden: You don't have permission to download this file.",
)
elif response.status == 404:
logger.warning(f"Resource not found: {url} (Status 404)")
return False, "File not found - the download link may be invalid or expired."
return (
False,
"File not found - the download link may be invalid or expired.",
)
else:
logger.error(f"Download failed for {url} with status {response.status}")
logger.error(
f"Download failed for {url} with status {response.status}"
)
return False, f"Download failed with status {response.status}"
# Get total file size for progress calculation (if not set from Content-Range)
if total_size == 0:
total_size = int(response.headers.get('content-length', 0))
total_size = int(response.headers.get("content-length", 0))
if response.status == 206:
# For partial content, add the offset to get total file size
total_size += resume_offset
@@ -417,7 +506,7 @@ class Downloader:
# Stream download to file with progress updates
loop = asyncio.get_running_loop()
mode = 'ab' if (allow_resume and resume_offset > 0) else 'wb'
mode = "ab" if (allow_resume and resume_offset > 0) else "wb"
control = pause_event
if control is not None:
@@ -425,7 +514,9 @@ class Downloader:
with open(part_path, mode) as f:
while True:
active_stall_timeout = control.stall_timeout if control else self.stall_timeout
active_stall_timeout = (
control.stall_timeout if control else self.stall_timeout
)
if control is not None:
if control.is_paused():
@@ -437,7 +528,9 @@ class Downloader:
"Reconnect requested after resume"
)
elif control.consume_reconnect_request():
raise DownloadRestartRequested("Reconnect requested")
raise DownloadRestartRequested(
"Reconnect requested"
)
try:
chunk = await asyncio.wait_for(
@@ -466,22 +559,32 @@ class Downloader:
control.mark_progress(timestamp=now.timestamp())
# Limit progress update frequency to reduce overhead
time_diff = (now - last_progress_report_time).total_seconds()
time_diff = (
now - last_progress_report_time
).total_seconds()
if progress_callback and time_diff >= 1.0:
progress_samples.append((now, current_size))
cutoff = now - timedelta(seconds=5)
while progress_samples and progress_samples[0][0] < cutoff:
while (
progress_samples and progress_samples[0][0] < cutoff
):
progress_samples.popleft()
percent = (current_size / total_size) * 100 if total_size else 0.0
percent = (
(current_size / total_size) * 100
if total_size
else 0.0
)
bytes_per_second = 0.0
if len(progress_samples) >= 2:
first_time, first_bytes = progress_samples[0]
last_time, last_bytes = progress_samples[-1]
elapsed = (last_time - first_time).total_seconds()
if elapsed > 0:
bytes_per_second = (last_bytes - first_bytes) / elapsed
bytes_per_second = (
last_bytes - first_bytes
) / elapsed
progress_snapshot = DownloadProgress(
percent_complete=percent,
@@ -491,48 +594,66 @@ class Downloader:
timestamp=now.timestamp(),
)
await self._dispatch_progress_callback(progress_callback, progress_snapshot)
await self._dispatch_progress_callback(
progress_callback, progress_snapshot
)
last_progress_report_time = now
# Download completed successfully
# Verify file size integrity before finalizing
final_size = os.path.getsize(part_path) if os.path.exists(part_path) else 0
final_size = (
os.path.getsize(part_path) if os.path.exists(part_path) else 0
)
expected_size = total_size if total_size > 0 else None
integrity_error: Optional[str] = None
resumable_incomplete = False
if final_size <= 0:
integrity_error = "Downloaded file is empty"
elif expected_size is not None and final_size != expected_size:
integrity_error = (
f"File size mismatch. Expected: {expected_size}, Got: {final_size}"
integrity_error = f"File size mismatch. Expected: {expected_size}, Got: {final_size}"
resumable_incomplete = (
allow_resume
and part_path != save_path
and final_size > 0
and final_size < expected_size
)
if integrity_error is not None:
logger.error(
log_fn = logger.warning if resumable_incomplete else logger.error
log_fn(
"Download integrity check failed for %s: %s",
save_path,
integrity_error,
)
# Remove the corrupted payload so future attempts start fresh
if os.path.exists(part_path):
try:
os.remove(part_path)
except OSError as remove_error:
logger.warning(
"Failed to delete corrupted download %s: %s",
part_path,
remove_error,
)
if part_path != save_path and os.path.exists(save_path):
try:
os.remove(save_path)
except OSError as remove_error:
logger.warning(
"Failed to delete target file %s after integrity error: %s",
save_path,
remove_error,
)
if resumable_incomplete:
logger.info(
"Preserving incomplete download for resume: %s (%s/%s bytes)",
part_path,
final_size,
expected_size,
)
else:
# Remove corrupted payloads that cannot be safely resumed.
if os.path.exists(part_path):
try:
os.remove(part_path)
except OSError as remove_error:
logger.warning(
"Failed to delete corrupted download %s: %s",
part_path,
remove_error,
)
if part_path != save_path and os.path.exists(save_path):
try:
os.remove(save_path)
except OSError as remove_error:
logger.warning(
"Failed to delete target file %s after integrity error: %s",
save_path,
remove_error,
)
retry_count += 1
if retry_count <= self.max_retries:
@@ -542,8 +663,16 @@ class Downloader:
delay,
)
await asyncio.sleep(delay)
resume_offset = 0
total_size = 0
if resumable_incomplete and os.path.exists(part_path):
resume_offset = os.path.getsize(part_path)
total_size = expected_size or 0
logger.info(
"Will resume incomplete download from byte %s",
resume_offset,
)
else:
resume_offset = 0
total_size = 0
await self._create_session()
continue
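
A toy version of the retry planning in the hunk above, assuming the same .part convention: an undersized partial file is preserved and the next attempt resumes from its current size rather than restarting; plan_retry is a hypothetical helper.

import os
import tempfile

# Hypothetical helper: decide whether the next retry can resume from the .part file.
def plan_retry(part_path: str, expected_size: int) -> tuple[int, bool]:
    final_size = os.path.getsize(part_path) if os.path.exists(part_path) else 0
    resumable = 0 < final_size < expected_size
    return (final_size if resumable else 0), resumable

with tempfile.NamedTemporaryFile(suffix=".part", delete=False) as tmp:
    tmp.write(b"x" * 1024)  # simulate a 1 KiB partial download
print(plan_retry(tmp.name, expected_size=4096))  # (1024, True): resume from byte 1024
os.remove(tmp.name)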
@@ -555,7 +684,9 @@ class Downloader:
rename_attempt = 0
rename_success = False
while rename_attempt < max_rename_attempts and not rename_success:
while (
rename_attempt < max_rename_attempts and not rename_success
):
try:
# If the destination file exists, remove it first (Windows safe)
if os.path.exists(save_path):
@@ -566,11 +697,18 @@ class Downloader:
except PermissionError as e:
rename_attempt += 1
if rename_attempt < max_rename_attempts:
logger.info(f"File still in use, retrying rename in 2 seconds (attempt {rename_attempt}/{max_rename_attempts})")
logger.info(
f"File still in use, retrying rename in 2 seconds (attempt {rename_attempt}/{max_rename_attempts})"
)
await asyncio.sleep(2)
else:
logger.error(f"Failed to rename file after {max_rename_attempts} attempts: {e}")
return False, f"Failed to finalize download: {str(e)}"
logger.error(
f"Failed to rename file after {max_rename_attempts} attempts: {e}"
)
return (
False,
f"Failed to finalize download: {str(e)}",
)
final_size = os.path.getsize(save_path)
@@ -583,8 +721,9 @@ class Downloader:
bytes_per_second=0.0,
timestamp=datetime.now().timestamp(),
)
await self._dispatch_progress_callback(progress_callback, final_snapshot)
await self._dispatch_progress_callback(
progress_callback, final_snapshot
)
return True, save_path
@@ -597,7 +736,9 @@ class Downloader:
DownloadRestartRequested,
) as e:
retry_count += 1
logger.warning(f"Network error during download (attempt {retry_count}/{self.max_retries + 1}): {e}")
logger.warning(
f"Network error during download (attempt {retry_count}/{self.max_retries + 1}): {e}"
)
if retry_count <= self.max_retries:
# Calculate delay with exponential backoff
@@ -615,7 +756,10 @@ class Downloader:
continue
else:
logger.error(f"Max retries exceeded for download: {e}")
return False, f"Network error after {self.max_retries + 1} attempts: {str(e)}"
return (
False,
f"Network error after {self.max_retries + 1} attempts: {str(e)}",
)
except Exception as e:
logger.error(f"Unexpected download error: {e}")
@@ -645,7 +789,7 @@ class Downloader:
url: str,
use_auth: bool = False,
custom_headers: Optional[Dict[str, str]] = None,
return_headers: bool = False
return_headers: bool = False,
) -> Tuple[bool, Union[bytes, str], Optional[Dict]]:
"""
Download a file to memory (for small files like preview images)
@@ -659,22 +803,34 @@ class Downloader:
Returns:
Tuple[bool, Union[bytes, str], Optional[Dict]]: (success, content or error message, response headers if requested)
"""
guard = await ConnectivityGuard.get_instance()
destination = self._guard_destination(url)
if guard.should_block_request(destination):
return False, OFFLINE_FRIENDLY_MESSAGE, None
try:
session = await self.session
# Debug log for proxy mode at request time
if self.proxy_url:
logger.debug(f"[download_to_memory] Using app-level proxy: {self.proxy_url}")
logger.debug(
f"[download_to_memory] Using app-level proxy: {self.proxy_url}"
)
else:
logger.debug("[download_to_memory] Using system-level proxy (trust_env) if configured.")
logger.debug(
"[download_to_memory] Using system-level proxy (trust_env) if configured."
)
# Prepare headers
headers = self._get_auth_headers(use_auth)
if custom_headers:
headers.update(custom_headers)
async with session.get(url, headers=headers, proxy=self.proxy_url) as response:
async with session.get(
url, headers=headers, proxy=self.proxy_url
) as response:
if response.status == 200:
content = await response.read()
guard.register_success(destination)
if return_headers:
return True, content, dict(response.headers)
else:
@@ -693,6 +849,12 @@ class Downloader:
return False, error_msg, None
except Exception as e:
if guard.is_network_unreachable_error(e):
guard.register_network_failure(e, destination)
if guard.should_block_request(destination):
return False, OFFLINE_FRIENDLY_MESSAGE, None
logger.debug("Network unavailable during memory download: %s", e)
return False, str(e), None
logger.error(f"Error downloading to memory from {url}: {e}")
return False, str(e), None
@@ -700,7 +862,7 @@ class Downloader:
self,
url: str,
use_auth: bool = False,
custom_headers: Optional[Dict[str, str]] = None
custom_headers: Optional[Dict[str, str]] = None,
) -> Tuple[bool, Union[Dict, str]]:
"""
Get response headers without downloading the full content
@@ -713,26 +875,44 @@ class Downloader:
Returns:
Tuple[bool, Union[Dict, str]]: (success, headers dict or error message)
"""
guard = await ConnectivityGuard.get_instance()
destination = self._guard_destination(url)
if guard.should_block_request(destination):
return False, OFFLINE_COOLDOWN_ERROR
try:
session = await self.session
# Debug log for proxy mode at request time
if self.proxy_url:
logger.debug(f"[get_response_headers] Using app-level proxy: {self.proxy_url}")
logger.debug(
f"[get_response_headers] Using app-level proxy: {self.proxy_url}"
)
else:
logger.debug("[get_response_headers] Using system-level proxy (trust_env) if configured.")
logger.debug(
"[get_response_headers] Using system-level proxy (trust_env) if configured."
)
# Prepare headers
headers = self._get_auth_headers(use_auth)
if custom_headers:
headers.update(custom_headers)
async with session.head(url, headers=headers, proxy=self.proxy_url) as response:
async with session.head(
url, headers=headers, proxy=self.proxy_url
) as response:
if response.status == 200:
guard.register_success(destination)
return True, dict(response.headers)
else:
return False, f"Head request failed with status {response.status}"
except Exception as e:
if guard.is_network_unreachable_error(e):
guard.register_network_failure(e, destination)
if guard.should_block_request(destination):
return False, OFFLINE_COOLDOWN_ERROR
logger.debug("Network unavailable during header probe: %s", e)
return False, str(e)
logger.error(f"Error getting headers from {url}: {e}")
return False, str(e)
@@ -742,7 +922,7 @@ class Downloader:
url: str,
use_auth: bool = False,
custom_headers: Optional[Dict[str, str]] = None,
**kwargs
**kwargs,
) -> Tuple[bool, Union[Dict, str]]:
"""
Make a generic HTTP request and return JSON response
@@ -757,13 +937,20 @@ class Downloader:
Returns:
Tuple[bool, Union[Dict, str]]: (success, response data or error message)
"""
guard = await ConnectivityGuard.get_instance()
destination = self._guard_destination(url)
if guard.should_block_request(destination):
return False, OFFLINE_COOLDOWN_ERROR
try:
session = await self.session
# Debug log for proxy mode at request time
if self.proxy_url:
logger.debug(f"[make_request] Using app-level proxy: {self.proxy_url}")
else:
logger.debug("[make_request] Using system-level proxy (trust_env) if configured.")
logger.debug(
"[make_request] Using system-level proxy (trust_env) if configured."
)
# Prepare headers
headers = self._get_auth_headers(use_auth)
@@ -771,11 +958,14 @@ class Downloader:
headers.update(custom_headers)
# Add proxy to kwargs if not already present
if 'proxy' not in kwargs:
kwargs['proxy'] = self.proxy_url
if "proxy" not in kwargs:
kwargs["proxy"] = self.proxy_url
async with session.request(method, url, headers=headers, **kwargs) as response:
async with session.request(
method, url, headers=headers, **kwargs
) as response:
if response.status == 200:
guard.register_success(destination)
# Try to parse as JSON, fall back to text
try:
data = await response.json()
@@ -806,6 +996,12 @@ class Downloader:
return False, f"Request failed with status {response.status}"
except Exception as e:
if guard.is_network_unreachable_error(e):
guard.register_network_failure(e, destination)
if guard.should_block_request(destination):
return False, OFFLINE_COOLDOWN_ERROR
logger.debug("Network unavailable for %s %s: %s", method, url, e)
return False, str(e)
logger.error(f"Error making {method} request to {url}: {e}")
return False, str(e)
@@ -856,6 +1052,14 @@ class Downloader:
delta = retry_datetime - datetime.now(tz=retry_datetime.tzinfo)
return max(0.0, delta.total_seconds())
@staticmethod
def _guard_destination(url: str) -> str:
"""Build per-destination connectivity guard scope from request URL."""
parsed_url = urlparse(url)
if parsed_url.hostname:
return parsed_url.hostname.lower()
return "unknown"
# Global instance accessor
async def get_downloader() -> Downloader:
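
The per-destination scope used by the connectivity guard boils down to the request's lowercase hostname, so a cooldown on one host does not block others. A standalone sketch, with guard_destination as an illustrative stand-in for the static method above.

from urllib.parse import urlparse

# Illustrative stand-in for _guard_destination: scope failures by lowercase hostname.
def guard_destination(url: str) -> str:
    hostname = urlparse(url).hostname
    return hostname.lower() if hostname else "unknown"

print(guard_destination("https://civitai.com/api/v1/models/42"))  # civitai.com
print(guard_destination("not a url"))                             # unknown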

View File

@@ -42,6 +42,7 @@ class EmbeddingService(BaseModelService):
"notes": embedding_data.get("notes", ""),
"sub_type": sub_type,
"favorite": embedding_data.get("favorite", False),
"exclude": bool(embedding_data.get("exclude", False)),
"update_available": bool(embedding_data.get("update_available", False)),
"skip_metadata_refresh": bool(embedding_data.get("skip_metadata_refresh", False)),
"civitai": self.filter_civitai_data(embedding_data.get("civitai", {}), minimal=True)

View File

@@ -1,5 +1,6 @@
import os
import logging
import json
import os
from typing import Dict, List, Optional
from .base_model_service import BaseModelService
@@ -47,8 +48,11 @@ class LoraService(BaseModelService):
"usage_tips": lora_data.get("usage_tips", ""),
"notes": lora_data.get("notes", ""),
"favorite": lora_data.get("favorite", False),
"exclude": bool(lora_data.get("exclude", False)),
"update_available": bool(lora_data.get("update_available", False)),
"skip_metadata_refresh": bool(lora_data.get("skip_metadata_refresh", False)),
"skip_metadata_refresh": bool(
lora_data.get("skip_metadata_refresh", False)
),
"sub_type": sub_type,
"civitai": self.filter_civitai_data(
lora_data.get("civitai", {}), minimal=True
@@ -62,6 +66,68 @@ class LoraService(BaseModelService):
if first_letter:
data = self._filter_by_first_letter(data, first_letter)
# Handle name pattern filters
name_pattern_include = kwargs.get("name_pattern_include", [])
name_pattern_exclude = kwargs.get("name_pattern_exclude", [])
name_pattern_use_regex = kwargs.get("name_pattern_use_regex", False)
if name_pattern_include or name_pattern_exclude:
import re
def matches_pattern(name, pattern, use_regex):
"""Check if name matches pattern (regex or substring)"""
if not name:
return False
if use_regex:
try:
return bool(re.search(pattern, name, re.IGNORECASE))
except re.error:
# Invalid regex, fall back to substring match
return pattern.lower() in name.lower()
else:
return pattern.lower() in name.lower()
def matches_any_pattern(name, patterns, use_regex):
"""Check if name matches any of the patterns"""
if not patterns:
return True
return any(matches_pattern(name, p, use_regex) for p in patterns)
filtered = []
for lora in data:
model_name = lora.get("model_name", "")
file_name = lora.get("file_name", "")
names_to_check = [n for n in [model_name, file_name] if n]
# Check exclude patterns first
excluded = False
if name_pattern_exclude:
for name in names_to_check:
if matches_any_pattern(
name, name_pattern_exclude, name_pattern_use_regex
):
excluded = True
break
if excluded:
continue
# Check include patterns
if name_pattern_include:
included = False
for name in names_to_check:
if matches_any_pattern(
name, name_pattern_include, name_pattern_use_regex
):
included = True
break
if not included:
continue
filtered.append(lora)
data = filtered
return data
def _filter_by_first_letter(self, data: List[Dict], letter: str) -> List[Dict]:
@@ -214,6 +280,42 @@ class LoraService(BaseModelService):
return None
@staticmethod
def get_recommended_strength_from_lora_data(lora_data: Dict) -> Optional[float]:
"""Parse usage_tips JSON and extract recommended model strength."""
try:
usage_tips = lora_data.get("usage_tips", "")
if not usage_tips:
return None
tips_data = json.loads(usage_tips)
return tips_data.get("strength")
except (json.JSONDecodeError, TypeError, AttributeError):
return None
@staticmethod
def get_recommended_clip_strength_from_lora_data(
lora_data: Dict,
) -> Optional[float]:
"""Parse usage_tips JSON and extract recommended clip strength."""
try:
usage_tips = lora_data.get("usage_tips", "")
if not usage_tips:
return None
tips_data = json.loads(usage_tips)
return tips_data.get("clipStrength")
except (json.JSONDecodeError, TypeError, AttributeError):
return None
async def get_lora_metadata_by_filename(self, filename: str) -> Optional[Dict]:
"""Return cached raw metadata for a LoRA matching the given filename."""
cache = await self.scanner.get_cached_data(force_refresh=False)
for lora in cache.raw_data if cache else []:
if lora.get("file_name") == filename:
return lora
return None
def find_duplicate_hashes(self) -> Dict:
"""Find LoRAs with duplicate SHA256 hashes"""
return self.scanner._hash_index.get_duplicate_hashes()
@@ -264,34 +366,10 @@ class LoraService(BaseModelService):
List of LoRA dicts with randomized strengths
"""
import random
import json
# Use a local Random instance to avoid affecting global random state
# This ensures each execution with a different seed produces different results
rng = random.Random(seed)
def get_recommended_strength(lora_data: Dict) -> Optional[float]:
"""Parse usage_tips JSON and extract recommended strength"""
try:
usage_tips = lora_data.get("usage_tips", "")
if not usage_tips:
return None
tips_data = json.loads(usage_tips)
return tips_data.get("strength")
except (json.JSONDecodeError, TypeError, AttributeError):
return None
def get_recommended_clip_strength(lora_data: Dict) -> Optional[float]:
"""Parse usage_tips JSON and extract recommended clip strength"""
try:
usage_tips = lora_data.get("usage_tips", "")
if not usage_tips:
return None
tips_data = json.loads(usage_tips)
return tips_data.get("clipStrength")
except (json.JSONDecodeError, TypeError, AttributeError):
return None
if locked_loras is None:
locked_loras = []
@@ -339,7 +417,9 @@ class LoraService(BaseModelService):
result_loras = []
for lora in selected:
if use_recommended_strength:
recommended_strength = get_recommended_strength(lora)
recommended_strength = self.get_recommended_strength_from_lora_data(
lora
)
if recommended_strength is not None:
scale = rng.uniform(
recommended_strength_scale_min, recommended_strength_scale_max
@@ -357,7 +437,9 @@ class LoraService(BaseModelService):
if use_same_clip_strength:
clip_str = model_str
elif use_recommended_strength:
recommended_clip_strength = get_recommended_clip_strength(lora)
recommended_clip_strength = (
self.get_recommended_clip_strength_from_lora_data(lora)
)
if recommended_clip_strength is not None:
scale = rng.uniform(
recommended_strength_scale_min, recommended_strength_scale_max
@@ -368,9 +450,7 @@ class LoraService(BaseModelService):
rng.uniform(clip_strength_min, clip_strength_max), 2
)
else:
clip_str = round(
rng.uniform(clip_strength_min, clip_strength_max), 2
)
clip_str = round(rng.uniform(clip_strength_min, clip_strength_max), 2)
result_loras.append(
{
@@ -485,12 +565,69 @@ class LoraService(BaseModelService):
if bool(lora.get("license_flags", 127) & (1 << 1))
]
# Apply name pattern filters
name_patterns = filter_section.get("namePatterns", {})
include_patterns = name_patterns.get("include", [])
exclude_patterns = name_patterns.get("exclude", [])
use_regex = name_patterns.get("useRegex", False)
if include_patterns or exclude_patterns:
import re
def matches_pattern(name, pattern, use_regex):
"""Check if name matches pattern (regex or substring)"""
if not name:
return False
if use_regex:
try:
return bool(re.search(pattern, name, re.IGNORECASE))
except re.error:
# Invalid regex, fall back to substring match
return pattern.lower() in name.lower()
else:
return pattern.lower() in name.lower()
def matches_any_pattern(name, patterns, use_regex):
"""Check if name matches any of the patterns"""
if not patterns:
return True
return any(matches_pattern(name, p, use_regex) for p in patterns)
filtered = []
for lora in available_loras:
model_name = lora.get("model_name", "")
file_name = lora.get("file_name", "")
names_to_check = [n for n in [model_name, file_name] if n]
# Check exclude patterns first
excluded = False
if exclude_patterns:
for name in names_to_check:
if matches_any_pattern(name, exclude_patterns, use_regex):
excluded = True
break
if excluded:
continue
# Check include patterns
if include_patterns:
included = False
for name in names_to_check:
if matches_any_pattern(name, include_patterns, use_regex):
included = True
break
if not included:
continue
filtered.append(lora)
available_loras = filtered
return available_loras
async def get_cycler_list(
self,
pool_config: Optional[Dict] = None,
sort_by: str = "filename"
self, pool_config: Optional[Dict] = None, sort_by: str = "filename"
) -> List[Dict]:
"""
Get filtered and sorted LoRA list for cycling.
@@ -516,12 +653,18 @@ class LoraService(BaseModelService):
if sort_by == "model_name":
available_loras = sorted(
available_loras,
key=lambda x: (x.get("model_name") or x.get("file_name", "")).lower()
key=lambda x: (
(x.get("model_name") or x.get("file_name", "")).lower(),
x.get("file_path", "").lower(),
),
)
else: # Default to filename
available_loras = sorted(
available_loras,
key=lambda x: x.get("file_name", "").lower()
key=lambda x: (
x.get("file_name", "").lower(),
x.get("file_path", "").lower(),
),
)
# Return minimal data needed for cycling

View File

@@ -122,11 +122,25 @@ async def get_metadata_provider(provider_name: str = None):
provider_manager = await ModelMetadataProviderManager.get_instance()
provider = (
provider_manager._get_provider(provider_name)
if provider_name
else provider_manager._get_provider()
)
try:
provider = (
provider_manager._get_provider(provider_name)
if provider_name
else provider_manager._get_provider()
)
except ValueError as e:
# Provider not initialized, attempt to initialize
if "No default provider set" in str(e) or "not registered" in str(e):
logger.warning(f"Metadata provider not initialized ({e}), initializing now...")
await initialize_metadata_providers()
provider_manager = await ModelMetadataProviderManager.get_instance()
provider = (
provider_manager._get_provider(provider_name)
if provider_name
else provider_manager._get_provider()
)
else:
raise
return _wrap_provider_with_rate_limit(provider_name, provider)
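
The lazy re-initialization above reduces to a small pattern: resolve, and if the registry complains that nothing is set up yet, initialize once and resolve again. The registry and provider names below are invented for illustration; the real flow is async and goes through ModelMetadataProviderManager.

# Invented registry and provider names, for illustration only.
registry = {}

def get_provider():
    if "default" not in registry:
        raise ValueError("No default provider set")
    return registry["default"]

def initialize_providers():
    registry["default"] = "example-provider"

def resolve_with_lazy_init():
    try:
        return get_provider()
    except ValueError as exc:
        if "No default provider set" in str(exc) or "not registered" in str(exc):
            initialize_providers()  # initialize once, then resolve again
            return get_provider()
        raise

print(resolve_with_lazy_init())  # example-provider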

View File

@@ -11,6 +11,7 @@ from typing import Any, Awaitable, Callable, Dict, Iterable, Optional
from ..services.settings_manager import SettingsManager
from ..utils.civitai_utils import resolve_license_payload
from ..utils.model_utils import determine_base_model
from .connectivity_guard import OFFLINE_FRIENDLY_MESSAGE, is_expected_offline_error
from .errors import RateLimitError
logger = logging.getLogger(__name__)
@@ -274,11 +275,18 @@ class MetadataSyncService:
else "No provider returned metadata"
)
resolved_error = last_error or default_error
if is_expected_offline_error(resolved_error):
resolved_error = OFFLINE_FRIENDLY_MESSAGE
error_msg = (
f"Error fetching metadata: {last_error or default_error} "
f"Error fetching metadata: {resolved_error} "
f"(model_name={model_data.get('model_name', '')})"
)
logger.error(error_msg)
if is_expected_offline_error(resolved_error):
logger.info(error_msg)
else:
logger.error(error_msg)
return False, error_msg
model_data["from_civitai"] = True
@@ -347,6 +355,9 @@ class MetadataSyncService:
return False, error_msg
except Exception as exc: # pragma: no cover - error path
error_msg = f"Error fetching metadata: {exc}"
if is_expected_offline_error(str(exc)):
logger.info(OFFLINE_FRIENDLY_MESSAGE)
return False, OFFLINE_FRIENDLY_MESSAGE
logger.error(error_msg, exc_info=True)
return False, error_msg
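
A rough sketch of the offline-friendly classification, assuming a simplified predicate; the real is_expected_offline_error and OFFLINE_FRIENDLY_MESSAGE live in connectivity_guard, so both values below are placeholders.

# Placeholder message and predicate; the real ones come from connectivity_guard.
OFFLINE_FRIENDLY_MESSAGE = "Network unreachable; metadata sync will retry later."

def is_expected_offline_error(message: str) -> bool:
    lowered = message.lower()
    return "unreachable" in lowered or "timed out" in lowered

def classify(error: str) -> tuple[str, str]:
    # Expected offline errors are logged at info level with the friendly message;
    # everything else stays an error with its original text.
    if is_expected_offline_error(error):
        return "info", OFFLINE_FRIENDLY_MESSAGE
    return "error", error

print(classify("connection timed out"))        # ('info', ...)
print(classify("provider returned HTTP 500"))  # ('error', ...)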

View File

@@ -221,33 +221,45 @@ class ModelCache:
start_time = time.perf_counter()
reverse = (order == 'desc')
if sort_key == 'name':
# Natural sort by configured display name, case-insensitive
# Natural sort by configured display name, case-insensitive, with file_path as tie-breaker
result = natsorted(
data,
key=lambda x: self._get_display_name(x).lower(),
key=lambda x: (
self._get_display_name(x).lower(),
x.get('file_path', '').lower()
),
reverse=reverse
)
elif sort_key == 'date':
# Sort by modified timestamp (use .get() with default to handle missing fields)
# Sort by modified timestamp, fallback to name and path for stability
result = sorted(
data,
key=lambda x: x.get('modified', 0.0),
key=lambda x: (
x.get('modified', 0.0),
self._get_display_name(x).lower(),
x.get('file_path', '').lower()
),
reverse=reverse
)
elif sort_key == 'size':
# Sort by file size (use .get() with default to handle missing fields)
# Sort by file size, fallback to name and path for stability
result = sorted(
data,
key=lambda x: x.get('size', 0),
key=lambda x: (
x.get('size', 0),
self._get_display_name(x).lower(),
x.get('file_path', '').lower()
),
reverse=reverse
)
elif sort_key == 'usage':
# Sort by usage count, fallback to 0, then name for stability
# Sort by usage count, fallback to 0, then name and path for stability
return sorted(
data,
key=lambda x: (
x.get('usage_count', 0),
self._get_display_name(x).lower()
self._get_display_name(x).lower(),
x.get('file_path', '').lower()
),
reverse=reverse
)
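
The tie-breaker change is easiest to see with plain sorted(); the real code uses natsorted for the name key, but the tuple-key idea is the same: equal display names fall back to file_path so ordering stays stable across runs.

# Two entries with the same display name: file_path decides the final order.
items = [
    {"model_name": "Detail Tweaker", "file_path": "loras/b/detail.safetensors"},
    {"model_name": "Detail Tweaker", "file_path": "loras/a/detail.safetensors"},
]

by_name = sorted(
    items,
    key=lambda x: (x.get("model_name", "").lower(), x.get("file_path", "").lower()),
)
print([i["file_path"] for i in by_name])  # ['loras/a/...', 'loras/b/...']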

View File

@@ -79,6 +79,12 @@ class ModelHashIndex:
hash_val = h
break
if hash_val is None:
for h, paths in self._duplicate_hashes.items():
if file_path in paths:
hash_val = h
break
# If we didn't find a hash, nothing to do
if not hash_val:
return

View File

@@ -8,6 +8,7 @@ from typing import Any, Awaitable, Callable, Dict, Iterable, List, Mapping, Opti
from ..services.service_registry import ServiceRegistry
from ..utils.constants import PREVIEW_EXTENSIONS
from ..utils.metadata_manager import MetadataManager
logger = logging.getLogger(__name__)
@@ -207,11 +208,56 @@ class ModelLifecycleService:
excluded = getattr(self._scanner, "_excluded_models", None)
if isinstance(excluded, list):
excluded.append(file_path)
if file_path not in excluded:
excluded.append(file_path)
persist_current_cache = getattr(self._scanner, "_persist_current_cache", None)
if callable(persist_current_cache):
await persist_current_cache()
message = f"Model {os.path.basename(file_path)} excluded"
return {"success": True, "message": message}
async def unexclude_model(self, file_path: str) -> Dict[str, object]:
"""Restore a previously excluded model to the active cache."""
if not file_path:
raise ValueError("Model path is required")
if not os.path.exists(file_path):
raise ValueError("Model file does not exist")
metadata_path = os.path.splitext(file_path)[0] + ".metadata.json"
metadata_payload = await self._metadata_loader(metadata_path)
metadata_payload["exclude"] = False
await self._metadata_manager.save_metadata(file_path, metadata_payload)
metadata, should_skip = await MetadataManager.load_metadata(
file_path,
self._scanner.model_class,
)
if should_skip:
metadata = None
if metadata is None:
metadata = metadata_payload
excluded = getattr(self._scanner, "_excluded_models", None)
if isinstance(excluded, list):
self._scanner._excluded_models = [
path for path in excluded if path != file_path
]
await self._scanner.update_single_model_cache(
file_path,
file_path,
metadata,
recalculate_type=True,
)
message = f"Model {os.path.basename(file_path)} restored"
return {"success": True, "message": message}
async def bulk_delete_models(self, file_paths: Iterable[str]) -> Dict[str, object]:
"""Delete a collection of models via the scanner bulk operation."""

View File

@@ -14,7 +14,6 @@ from ..utils.metadata_manager import MetadataManager
from ..utils.civitai_utils import resolve_license_info
from .model_cache import ModelCache
from .model_hash_index import ModelHashIndex
from ..utils.constants import PREVIEW_EXTENSIONS
from .model_lifecycle_service import delete_model_artifacts
from .service_registry import ServiceRegistry
from .websocket_manager import ws_manager
@@ -412,6 +411,7 @@ class ModelScanner:
if scan_result:
await self._apply_scan_result(scan_result)
await self._save_persistent_cache(scan_result)
await self._sync_download_history(scan_result.raw_data, source='scan')
# Send final progress update
await ws_manager.broadcast_init_progress({
@@ -517,6 +517,7 @@ class ModelScanner:
)
await self._apply_scan_result(scan_result)
await self._sync_download_history(adjusted_raw_data, source='scan')
await ws_manager.broadcast_init_progress({
'stage': 'loading_cache',
@@ -577,6 +578,7 @@ class ModelScanner:
excluded_models=list(self._excluded_models)
)
await self._save_persistent_cache(snapshot)
await self._sync_download_history(snapshot.raw_data, source='scan')
def _count_model_files(self) -> int:
"""Count all model files with supported extensions in all roots
@@ -705,6 +707,7 @@ class ModelScanner:
scan_result = await self._gather_model_data()
await self._apply_scan_result(scan_result)
await self._save_persistent_cache(scan_result)
await self._sync_download_history(scan_result.raw_data, source='scan')
logger.info(
f"{self.model_type.capitalize()} Scanner: Cache initialization completed in {time.time() - start_time:.2f} seconds, "
@@ -733,19 +736,24 @@ class ModelScanner:
# Get current cached file paths
cached_paths = {item['file_path'] for item in self._cache.raw_data}
path_to_item = {item['file_path']: item for item in self._cache.raw_data}
cached_real_paths = {}
for cached_path in cached_paths:
try:
cached_real_paths.setdefault(os.path.realpath(cached_path), cached_path)
except Exception:
continue
# Track found files and new files
found_paths = set()
new_files = []
visited_real_paths = set()
discovered_real_files = set()
# Scan all model roots
for root_path in self.get_model_roots():
if not os.path.exists(root_path):
continue
# Track visited real paths to avoid symlink loops
visited_real_paths = set()
# Recursively scan directory
for root, _, files in os.walk(root_path, followlinks=True):
real_root = os.path.realpath(root)
@@ -758,12 +766,18 @@ class ModelScanner:
if ext in self.file_extensions:
# Construct paths exactly as they would be in cache
file_path = os.path.join(root, file).replace(os.sep, '/')
real_file_path = os.path.realpath(os.path.join(root, file))
# Check if this file is already in cache
if file_path in cached_paths:
found_paths.add(file_path)
continue
cached_real_match = cached_real_paths.get(real_file_path)
if cached_real_match:
found_paths.add(cached_real_match)
continue
if file_path in self._excluded_models:
continue
@@ -779,6 +793,10 @@ class ModelScanner:
if matched:
continue
if real_file_path in discovered_real_files:
continue
discovered_real_files.add(real_file_path)
# This is a new file to process
new_files.append(file_path)
@@ -1054,14 +1072,6 @@ class ModelScanner:
excluded_models.append(model_data['file_path'])
return None
# Check for duplicate filename before adding to hash index
# filename = os.path.splitext(os.path.basename(file_path))[0]
# existing_hash = hash_index.get_hash_by_filename(filename)
# if existing_hash and existing_hash != model_data.get('sha256', '').lower():
# existing_path = hash_index.get_path(existing_hash)
# if existing_path and existing_path != file_path:
# logger.warning(f"Duplicate filename detected: '{filename}' - files: '{existing_path}' and '{file_path}'")
return model_data
async def _apply_scan_result(self, scan_result: CacheBuildResult) -> None:
@@ -1087,6 +1097,74 @@ class ModelScanner:
await self._cache.resort()
self._log_duplicate_filename_summary()
def _log_duplicate_filename_summary(self) -> None:
"""Log a batched summary of duplicate filename conflicts once per scan."""
if self._hash_index is None:
return
duplicates = self._hash_index.get_duplicate_filenames()
if not duplicates:
return
total_files = sum(len(paths) for paths in duplicates.values())
conflict_count = len(duplicates)
model_type_label = self.model_type or "model"
logger.warning(
"Duplicate filename conflict detected: %d %s filename(s) "
"are shared by %d files total, causing ambiguity in %s resolution. "
"Open the Doctor panel to resolve one-click.",
conflict_count,
model_type_label,
total_files,
model_type_label.capitalize(),
)
async def _sync_download_history(
self,
raw_data: List[Mapping[str, Any]],
*,
source: str,
) -> None:
records: List[Dict[str, Any]] = []
for item in raw_data or []:
if not isinstance(item, Mapping):
continue
civitai = item.get('civitai')
if not isinstance(civitai, Mapping):
continue
version_id = civitai.get('id')
if version_id in (None, ''):
continue
records.append(
{
'version_id': version_id,
'model_id': civitai.get('modelId'),
'file_path': item.get('file_path'),
}
)
if not records:
return
try:
history_service = await ServiceRegistry.get_downloaded_version_history_service()
await history_service.mark_downloaded_bulk(
self.model_type,
records,
source=source,
)
except Exception as exc:
logger.debug(
"%s Scanner: Failed to sync download history: %s",
self.model_type.capitalize(),
exc,
)
async def _gather_model_data(
self,
*,
@@ -1100,6 +1178,8 @@ class ModelScanner:
tags_count: Dict[str, int] = {}
excluded_models: List[str] = []
processed_files = 0
processed_real_files: Set[str] = set()
visited_real_dirs: Set[str] = set()
async def handle_progress() -> None:
if progress_callback is None:
@@ -1116,9 +1196,10 @@ class ModelScanner:
try:
real_path = os.path.realpath(current_path)
if real_path in visited_paths:
if real_path in visited_paths or real_path in visited_real_dirs:
return
visited_paths.add(real_path)
visited_real_dirs.add(real_path)
with os.scandir(current_path) as iterator:
entries = list(iterator)
@@ -1131,6 +1212,11 @@ class ModelScanner:
continue
file_path = entry.path.replace(os.sep, "/")
real_file_path = os.path.realpath(entry.path)
if real_file_path in processed_real_files:
continue
processed_real_files.add(real_file_path)
result = await self._process_model_file(
file_path,
root_path,
@@ -1443,12 +1529,11 @@ class ModelScanner:
if not file_path:
return None
base_name = os.path.splitext(file_path)[0]
for ext in PREVIEW_EXTENSIONS:
preview_path = f"{base_name}{ext}"
if os.path.exists(preview_path):
return config.get_preview_static_url(preview_path)
dir_path = os.path.dirname(file_path)
base_name = os.path.splitext(os.path.basename(file_path))[0]
preview_path = find_preview_file(base_name, dir_path)
if preview_path:
return config.get_preview_static_url(preview_path)
return None
@@ -1467,7 +1552,7 @@ class ModelScanner:
return sorted_tags[:limit]
async def get_base_models(self, limit: int = 20) -> List[Dict[str, any]]:
"""Get base models sorted by frequency"""
"""Get base models sorted by count. If limit is 0, return all."""
cache = await self.get_cached_data()
base_model_counts = {}
@@ -1479,6 +1564,8 @@ class ModelScanner:
sorted_models = [{'name': model, 'count': count} for model, count in base_model_counts.items()]
sorted_models.sort(key=lambda x: x['count'], reverse=True)
if limit == 0:
return sorted_models
return sorted_models[:limit]
async def get_model_info_by_name(self, name):
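
The symlink handling in this file comes down to realpath de-duplication: paths that resolve to the same physical file are processed once. A self-contained sketch (symlink creation may require extra privileges on Windows); dedupe_by_realpath is illustrative only.

import os
import tempfile

# Illustrative helper: keep only the first path for each resolved real path.
def dedupe_by_realpath(paths):
    seen = set()
    unique = []
    for path in paths:
        real = os.path.realpath(path)
        if real in seen:
            continue
        seen.add(real)
        unique.append(path)
    return unique

# Demo: a symlinked alias resolves to the same real path and is skipped.
with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "model.safetensors")
    alias = os.path.join(tmp, "alias.safetensors")
    open(target, "w").close()
    os.symlink(target, alias)
    print(dedupe_by_realpath([target, alias]))  # only the original path remains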

View File

@@ -12,8 +12,9 @@ from typing import Any, Dict, Iterable, List, Mapping, Optional, Sequence
from .errors import RateLimitError, ResourceNotFoundError
from .settings_manager import get_settings_manager
from ..utils.cache_paths import CacheType, resolve_cache_path_with_migration
from ..utils.civitai_utils import rewrite_preview_url
from ..utils.preview_selection import select_preview_media
from ..utils.preview_selection import resolve_mature_threshold, select_preview_media
logger = logging.getLogger(__name__)
@@ -68,6 +69,7 @@ class ModelVersionRecord:
early_access_ends_at: Optional[str] = None
sort_index: int = 0
is_early_access: bool = False
usage_control: Optional[str] = None # "Download", "Generation", "InternalGeneration"
@dataclass
@@ -100,11 +102,14 @@ class ModelUpdateRecord:
return [version.version_id for version in self.versions if version.is_in_library]
def has_update(self, hide_early_access: bool = False) -> bool:
def has_update(
self, hide_early_access: bool = False, hide_non_downloadable: bool = True
) -> bool:
"""Return True when a non-ignored remote version newer than the newest local copy is available.
Args:
hide_early_access: If True, exclude early access versions from update check.
hide_non_downloadable: If True, exclude versions that don't allow downloads.
"""
if self.should_ignore_model:
@@ -120,6 +125,7 @@ class ModelUpdateRecord:
not version.is_in_library
and not version.should_ignore
and not (hide_early_access and ModelUpdateRecord._is_early_access_active(version))
and not (hide_non_downloadable and not ModelUpdateRecord._is_downloadable(version))
for version in self.versions
)
@@ -128,6 +134,8 @@ class ModelUpdateRecord:
continue
if hide_early_access and ModelUpdateRecord._is_early_access_active(version):
continue
if hide_non_downloadable and not ModelUpdateRecord._is_downloadable(version):
continue
if version.version_id > max_in_library:
return True
return False
@@ -154,11 +162,18 @@ class ModelUpdateRecord:
# Phase 1: Basic EA flag from bulk API
return version.is_early_access
@staticmethod
def _is_downloadable(version: ModelVersionRecord) -> bool:
if version.usage_control is None:
return True
return version.usage_control == "Download"
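Restated outside the dataclass, the predicate that decides whether a remote version counts toward "has update" now combines the early-access and downloadability gates. A simplified standalone version of that check, not the class's exact code:

from typing import Optional

def version_counts_for_update(
    usage_control: Optional[str],
    is_early_access: bool,
    *,
    hide_early_access: bool = False,
    hide_non_downloadable: bool = True,
) -> bool:
    # None means the API sent no usageControl field: treat as downloadable.
    downloadable = usage_control is None or usage_control == "Download"
    if hide_non_downloadable and not downloadable:
        return False
    if hide_early_access and is_early_access:
        return False
    return True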
def has_update_for_base(
self,
local_version_id: Optional[int],
local_base_model: Optional[str],
hide_early_access: bool = False,
hide_non_downloadable: bool = True,
) -> bool:
"""Return True when a newer remote version with the same base model exists.
@@ -166,6 +181,7 @@ class ModelUpdateRecord:
local_version_id: The current local version id.
local_base_model: The base model to filter by.
hide_early_access: If True, exclude early access versions from update check.
hide_non_downloadable: If True, exclude versions that don't allow downloads.
"""
if self.should_ignore_model:
@@ -196,6 +212,8 @@ class ModelUpdateRecord:
continue
if hide_early_access and ModelUpdateRecord._is_early_access_active(version):
continue
if hide_non_downloadable and not ModelUpdateRecord._is_downloadable(version):
continue
version_base = _normalize_base_model(version.base_model)
if version_base != normalized_base:
continue
@@ -208,6 +226,8 @@ class ModelUpdateRecord:
class ModelUpdateService:
"""Persist and query remote model version metadata."""
_SQLITE_MAX_VARIABLES = 500
_SCHEMA = """
PRAGMA foreign_keys = ON;
CREATE TABLE IF NOT EXISTS model_update_status (
@@ -227,6 +247,7 @@ class ModelUpdateService:
preview_url TEXT,
is_in_library INTEGER NOT NULL DEFAULT 0,
should_ignore INTEGER NOT NULL DEFAULT 0,
usage_control TEXT,
PRIMARY KEY (model_id, version_id),
FOREIGN KEY(model_id) REFERENCES model_update_status(model_id) ON DELETE CASCADE
);
@@ -234,12 +255,52 @@ class ModelUpdateService:
ON model_update_versions(model_id);
"""
def __init__(self, db_path: str, *, ttl_seconds: int = 24 * 60 * 60, settings_manager=None) -> None:
self._db_path = db_path
def __init__(
self,
db_path: str | None = None,
*,
ttl_seconds: int = 24 * 60 * 60,
settings_manager=None,
) -> None:
self._settings = settings_manager or get_settings_manager()
self._library_name = self._get_active_library_name()
self._db_path = db_path or self._resolve_default_path(self._library_name)
self._ttl_seconds = ttl_seconds
self._lock = asyncio.Lock()
self._schema_initialized = False
self._settings = settings_manager or get_settings_manager()
self._custom_db_path = db_path is not None
self._ensure_directory()
self._initialize_schema()
def _get_active_library_name(self) -> str:
try:
value = self._settings.get_active_library_name()
except Exception:
value = None
return value or "default"
def _resolve_default_path(self, library_name: str) -> str:
env_override = os.environ.get("LORA_MANAGER_MODEL_UPDATE_DB")
return resolve_cache_path_with_migration(
CacheType.MODEL_UPDATE,
library_name=library_name,
env_override=env_override,
)
def on_library_changed(self) -> None:
"""Switch to the database for the active library."""
if self._custom_db_path:
return
library_name = self._get_active_library_name()
new_path = self._resolve_default_path(library_name)
if new_path == self._db_path:
return
self._library_name = library_name
self._db_path = new_path
self._schema_initialized = False
self._ensure_directory()
self._initialize_schema()
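The actual path resolution is delegated to resolve_cache_path_with_migration; as a rough mental model only, it maps each library to its own database file and lets an environment variable win when set. Directory layout and file name below are assumptions, not the real helper:

import os

def per_library_db_path_sketch(library_name: str, env_override: str | None = None) -> str:
    # Hypothetical layout: one model-update database per library.
    if env_override:
        return env_override
    base_dir = os.path.join(os.path.expanduser("~"), ".cache", "lora_manager", library_name)
    os.makedirs(base_dir, exist_ok=True)
    return os.path.join(base_dir, "model_updates.sqlite")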
@@ -262,11 +323,114 @@ class ModelUpdateService:
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript(self._SCHEMA)
self._apply_migrations(conn)
self._migrate_from_legacy_snapshot(conn)
self._schema_initialized = True
except Exception as exc: # pragma: no cover - defensive guard
logger.error("Failed to initialize update schema: %s", exc, exc_info=True)
raise
def _migrate_from_legacy_snapshot(self, conn: sqlite3.Connection) -> None:
"""Copy update tracking data out of the legacy model snapshot database."""
if self._custom_db_path:
return
try:
from .persistent_model_cache import get_persistent_cache
legacy_path = get_persistent_cache(self._library_name).get_database_path()
except Exception:
return
if not legacy_path or os.path.abspath(legacy_path) == os.path.abspath(self._db_path):
return
if not os.path.exists(legacy_path):
return
try:
existing_row = conn.execute(
"SELECT 1 FROM model_update_status LIMIT 1"
).fetchone()
if existing_row:
return
except Exception:
return
try:
with sqlite3.connect(legacy_path, check_same_thread=False) as legacy_conn:
legacy_conn.row_factory = sqlite3.Row
status_rows = legacy_conn.execute(
"""
SELECT model_id, model_type, last_checked_at, should_ignore_model
FROM model_update_status
"""
).fetchall()
if not status_rows:
return
version_rows = legacy_conn.execute(
"""
SELECT model_id, version_id, sort_index, name, base_model, released_at,
size_bytes, preview_url, is_in_library, should_ignore,
early_access_ends_at, is_early_access
FROM model_update_versions
ORDER BY model_id ASC, sort_index ASC, version_id ASC
"""
).fetchall()
conn.execute("BEGIN")
conn.executemany(
"""
INSERT OR REPLACE INTO model_update_status (
model_id, model_type, last_checked_at, should_ignore_model
) VALUES (?, ?, ?, ?)
""",
[
(
int(row["model_id"]),
row["model_type"],
row["last_checked_at"],
int(row["should_ignore_model"] or 0),
)
for row in status_rows
],
)
conn.executemany(
"""
INSERT OR REPLACE INTO model_update_versions (
model_id, version_id, sort_index, name, base_model, released_at,
size_bytes, preview_url, is_in_library, should_ignore,
early_access_ends_at, is_early_access
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""",
[
(
int(row["model_id"]),
int(row["version_id"]),
int(row["sort_index"] or 0),
row["name"],
row["base_model"],
row["released_at"],
row["size_bytes"],
row["preview_url"],
int(row["is_in_library"] or 0),
int(row["should_ignore"] or 0),
row["early_access_ends_at"],
int(row["is_early_access"] or 0),
)
for row in version_rows
],
)
conn.commit()
logger.info(
"Migrated model update tracking data from legacy snapshot DB for %s",
self._library_name,
)
except sqlite3.OperationalError as exc:
logger.debug("Legacy model update migration skipped: %s", exc)
except Exception as exc: # pragma: no cover - defensive guard
logger.warning("Failed to migrate model update data: %s", exc, exc_info=True)
def _apply_migrations(self, conn: sqlite3.Connection) -> None:
"""Ensure legacy databases match the current schema without dropping data."""
@@ -319,6 +483,10 @@ class ModelUpdateService:
"ALTER TABLE model_update_versions "
"ADD COLUMN is_early_access INTEGER NOT NULL DEFAULT 0"
),
"usage_control": (
"ALTER TABLE model_update_versions "
"ADD COLUMN usage_control TEXT"
),
}
for column, statement in migrations.items():
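The rest of _apply_migrations is not shown in this hunk; the usual shape of the pattern is to look up existing columns before running each ALTER, roughly like the sketch below (not the project's exact code):

import sqlite3
from typing import Dict

def apply_additive_migrations(conn: sqlite3.Connection, table: str, migrations: Dict[str, str]) -> None:
    # Column names already present in the table, via PRAGMA table_info.
    existing = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    for column, statement in migrations.items():
        if column in existing:
            continue
        try:
            conn.execute(statement)
        except sqlite3.OperationalError:
            # Another writer may have added the column concurrently; ignore.
            pass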
@@ -1191,6 +1359,7 @@ class ModelUpdateService:
# Check availability field from bulk API for basic EA detection
availability = _normalize_string(entry.get("availability"))
is_early_access = availability == "EarlyAccess"
usage_control = _normalize_string(entry.get("usageControl"))
return ModelVersionRecord(
version_id=version_id,
@@ -1204,6 +1373,7 @@ class ModelUpdateService:
early_access_ends_at=early_access_ends_at,
sort_index=index,
is_early_access=is_early_access,
usage_control=usage_control,
)
def _extract_size_bytes(self, files) -> Optional[int]:
@@ -1252,14 +1422,23 @@ class ModelUpdateService:
return None
blur_mature_content = True
mature_threshold = resolve_mature_threshold({"mature_blur_level": "R"})
settings = getattr(self, "_settings", None)
if settings is not None and hasattr(settings, "get"):
try:
blur_mature_content = bool(settings.get("blur_mature_content", True))
mature_threshold = resolve_mature_threshold(
{"mature_blur_level": settings.get("mature_blur_level", "R")}
)
except Exception: # pragma: no cover - defensive guard
blur_mature_content = True
mature_threshold = resolve_mature_threshold({"mature_blur_level": "R"})
selected, _ = select_preview_media(candidates, blur_mature_content=blur_mature_content)
selected, _ = select_preview_media(
candidates,
blur_mature_content=blur_mature_content,
mature_threshold=mature_threshold,
)
if not selected:
return None
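resolve_mature_threshold and select_preview_media live in utils.preview_selection and are not part of this diff, so purely as an illustration of threshold-based preview filtering (field names and level ordering are assumptions):

from typing import Any, Dict, List, Optional, Tuple

def pick_preview_sketch(
    candidates: List[Dict[str, Any]],
    *,
    blur_mature_content: bool,
    mature_threshold: int,
) -> Tuple[Optional[Dict[str, Any]], int]:
    # Prefer the first candidate at or below the allowed nsfw level;
    # when blurring is disabled, the first candidate wins outright.
    for item in candidates:
        level = int(item.get("nsfwLevel", 0))
        if not blur_mature_content or level <= mature_threshold:
            return item, level
    if candidates:
        first = candidates[0]
        return first, int(first.get("nsfwLevel", 0))
    return None, 0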
@@ -1286,33 +1465,41 @@ class ModelUpdateService:
if not model_ids:
return {}
params = tuple(model_ids)
placeholders = ",".join("?" for _ in params)
ids = list(model_ids)
status_rows: list = []
version_rows: list = []
with self._connect() as conn:
status_rows = conn.execute(
f"""
SELECT model_id, model_type, last_checked_at, should_ignore_model
FROM model_update_status
WHERE model_id IN ({placeholders})
""",
params,
).fetchall()
for start in range(0, len(ids), self._SQLITE_MAX_VARIABLES):
chunk = tuple(ids[start : start + self._SQLITE_MAX_VARIABLES])
placeholders = ",".join("?" for _ in chunk)
chunk_status = conn.execute(
f"""
SELECT model_id, model_type, last_checked_at, should_ignore_model
FROM model_update_status
WHERE model_id IN ({placeholders})
""",
chunk,
).fetchall()
status_rows.extend(chunk_status)
chunk_versions = conn.execute(
f"""
SELECT model_id, version_id, sort_index, name, base_model, released_at,
size_bytes, preview_url, is_in_library, should_ignore, early_access_ends_at,
is_early_access, usage_control
FROM model_update_versions
WHERE model_id IN ({placeholders})
ORDER BY model_id ASC, sort_index ASC, version_id ASC
""",
chunk,
).fetchall()
version_rows.extend(chunk_versions)
if not status_rows:
return {}
version_rows = conn.execute(
f"""
SELECT model_id, version_id, sort_index, name, base_model, released_at,
size_bytes, preview_url, is_in_library, should_ignore, early_access_ends_at,
is_early_access
FROM model_update_versions
WHERE model_id IN ({placeholders})
ORDER BY model_id ASC, sort_index ASC, version_id ASC
""",
params,
).fetchall()
versions_by_model: Dict[int, List[ModelVersionRecord]] = {}
for row in version_rows:
model_id = int(row["model_id"])
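Chunking the IN-clause keeps each statement under SQLite's bound-parameter limit (999 in older builds, hence the conservative 500). A generic version of the pattern, detached from the update-service schema:

import sqlite3
from typing import List, Sequence

CHUNK = 500  # stays well under SQLite's bound-parameter limit

def fetch_by_ids(conn: sqlite3.Connection, table: str, ids: Sequence[int]) -> List[tuple]:
    rows: List[tuple] = []
    for start in range(0, len(ids), CHUNK):
        chunk = tuple(ids[start:start + CHUNK])
        placeholders = ",".join("?" for _ in chunk)
        rows.extend(
            conn.execute(
                f"SELECT * FROM {table} WHERE id IN ({placeholders})",  # table name is trusted here
                chunk,
            ).fetchall()
        )
    return rows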
@@ -1329,6 +1516,7 @@ class ModelUpdateService:
early_access_ends_at=row["early_access_ends_at"],
sort_index=_normalize_int(row["sort_index"]) or 0,
is_early_access=bool(row["is_early_access"]),
usage_control=row["usage_control"],
)
)
@@ -1385,8 +1573,8 @@ class ModelUpdateService:
INSERT INTO model_update_versions (
version_id, model_id, sort_index, name, base_model, released_at,
size_bytes, preview_url, is_in_library, should_ignore, early_access_ends_at,
is_early_access
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
is_early_access, usage_control
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""",
(
version.version_id,
@@ -1401,6 +1589,7 @@ class ModelUpdateService:
1 if version.should_ignore else 0,
version.early_access_ends_at,
1 if version.is_early_access else 0,
version.usage_control,
),
)
conn.commit()

View File

@@ -56,6 +56,7 @@ class PersistentModelCache:
"exclude",
"db_checked",
"last_checked_at",
"hash_status",
)
_MODEL_UPDATE_COLUMNS: Tuple[str, ...] = _MODEL_COLUMNS[2:]
_instances: Dict[str, "PersistentModelCache"] = {}
@@ -186,6 +187,7 @@ class PersistentModelCache:
"civitai_deleted": bool(row["civitai_deleted"]),
"skip_metadata_refresh": bool(row["skip_metadata_refresh"]),
"license_flags": int(license_value),
"hash_status": row["hash_status"] or "completed",
}
raw_data.append(item)
@@ -449,6 +451,7 @@ class PersistentModelCache:
exclude INTEGER,
db_checked INTEGER,
last_checked_at REAL,
hash_status TEXT,
PRIMARY KEY (model_type, file_path)
);
@@ -496,6 +499,7 @@ class PersistentModelCache:
"skip_metadata_refresh": "INTEGER DEFAULT 0",
# Persisting without explicit flags should assume CivitAI's documented defaults (0b111001 == 57).
"license_flags": f"INTEGER DEFAULT {DEFAULT_LICENSE_FLAGS}",
"hash_status": "TEXT DEFAULT 'completed'",
}
for column, definition in required_columns.items():
@@ -570,6 +574,7 @@ class PersistentModelCache:
1 if item.get("exclude") else 0,
1 if item.get("db_checked") else 0,
float(item.get("last_checked_at") or 0.0),
item.get("hash_status", "completed"),
)
def _insert_model_sql(self) -> str:

View File

@@ -9,7 +9,7 @@ from urllib.parse import urlparse
from ..utils.constants import CARD_PREVIEW_WIDTH, PREVIEW_EXTENSIONS
from ..utils.civitai_utils import rewrite_preview_url
from ..utils.preview_selection import select_preview_media
from ..utils.preview_selection import resolve_mature_threshold, select_preview_media
from .settings_manager import get_settings_manager
logger = logging.getLogger(__name__)
@@ -49,9 +49,13 @@ class PreviewAssetService:
blur_mature_content = bool(
settings_manager.get("blur_mature_content", True)
)
mature_threshold = resolve_mature_threshold(
{"mature_blur_level": settings_manager.get("mature_blur_level", "R")}
)
first_preview, nsfw_level = select_preview_media(
images,
blur_mature_content=blur_mature_content,
mature_threshold=mature_threshold,
)
if not first_preview:
@@ -216,4 +220,3 @@ class PreviewAssetService:
if "webm" in content_type:
return ".webm"
return ".mp4"

View File

@@ -4,6 +4,7 @@ from dataclasses import dataclass
from operator import itemgetter
from natsort import natsorted
@dataclass
class RecipeCache:
"""Cache structure for Recipe data"""
@@ -21,11 +22,18 @@ class RecipeCache:
self.folder_tree = self.folder_tree or {}
async def resort(self, name_only: bool = False):
"""Resort all cached data views"""
"""Resort all cached data views in a thread pool to avoid blocking the event loop."""
async with self._lock:
self._resort_locked(name_only=name_only)
loop = asyncio.get_event_loop()
await loop.run_in_executor(
None,
self._resort_locked,
name_only,
)
async def update_recipe_metadata(self, recipe_id: str, metadata: Dict, *, resort: bool = True) -> bool:
async def update_recipe_metadata(
self, recipe_id: str, metadata: Dict, *, resort: bool = True
) -> bool:
"""Update metadata for a specific recipe in all cached data
Args:
@@ -37,7 +45,7 @@ class RecipeCache:
"""
async with self._lock:
for item in self.raw_data:
if str(item.get('id')) == str(recipe_id):
if str(item.get("id")) == str(recipe_id):
item.update(metadata)
if resort:
self._resort_locked()
@@ -52,7 +60,9 @@ class RecipeCache:
if resort:
self._resort_locked()
async def remove_recipe(self, recipe_id: str, *, resort: bool = False) -> Optional[Dict]:
async def remove_recipe(
self, recipe_id: str, *, resort: bool = False
) -> Optional[Dict]:
"""Remove a recipe from the cache by ID.
Args:
@@ -64,14 +74,16 @@ class RecipeCache:
async with self._lock:
for index, recipe in enumerate(self.raw_data):
if str(recipe.get('id')) == str(recipe_id):
if str(recipe.get("id")) == str(recipe_id):
removed = self.raw_data.pop(index)
if resort:
self._resort_locked()
return removed
return None
async def bulk_remove(self, recipe_ids: Iterable[str], *, resort: bool = False) -> List[Dict]:
async def bulk_remove(
self, recipe_ids: Iterable[str], *, resort: bool = False
) -> List[Dict]:
"""Remove multiple recipes from the cache."""
id_set = {str(recipe_id) for recipe_id in recipe_ids}
@@ -79,21 +91,25 @@ class RecipeCache:
return []
async with self._lock:
removed = [item for item in self.raw_data if str(item.get('id')) in id_set]
removed = [item for item in self.raw_data if str(item.get("id")) in id_set]
if not removed:
return []
self.raw_data = [item for item in self.raw_data if str(item.get('id')) not in id_set]
self.raw_data = [
item for item in self.raw_data if str(item.get("id")) not in id_set
]
if resort:
self._resort_locked()
return removed
async def replace_recipe(self, recipe_id: str, new_data: Dict, *, resort: bool = False) -> bool:
async def replace_recipe(
self, recipe_id: str, new_data: Dict, *, resort: bool = False
) -> bool:
"""Replace cached data for a recipe."""
async with self._lock:
for index, recipe in enumerate(self.raw_data):
if str(recipe.get('id')) == str(recipe_id):
if str(recipe.get("id")) == str(recipe_id):
self.raw_data[index] = new_data
if resort:
self._resort_locked()
@@ -105,7 +121,7 @@ class RecipeCache:
async with self._lock:
for recipe in self.raw_data:
if str(recipe.get('id')) == str(recipe_id):
if str(recipe.get("id")) == str(recipe_id):
return dict(recipe)
return None
@@ -115,16 +131,14 @@ class RecipeCache:
async with self._lock:
return [dict(item) for item in self.raw_data]
def _resort_locked(self, *, name_only: bool = False) -> None:
def _resort_locked(self, name_only: bool = False) -> None:
"""Sort cached views. Caller must hold ``_lock``."""
self.sorted_by_name = natsorted(
self.raw_data,
key=lambda x: x.get('title', '').lower()
key=lambda x: (x.get("title", "").lower(), x.get("file_path", "").lower()),
)
if not name_only:
self.sorted_by_date = sorted(
self.raw_data,
key=itemgetter('created_date', 'file_path'),
reverse=True
self.raw_data, key=itemgetter("created_date", "file_path"), reverse=True
)
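run_in_executor forwards positional arguments only, which is why _resort_locked drops its keyword-only marker in the same change. The general pattern for pushing a CPU-bound sort off the event loop, reduced to a self-contained sketch:

import asyncio
from typing import List

def _sort_blocking(items: List[str], name_only: bool) -> List[str]:
    # Stand-in for the expensive natsorted/sorted work.
    return sorted(items, key=str.lower)

async def resort(items: List[str], name_only: bool = False) -> List[str]:
    loop = asyncio.get_event_loop()
    # Default executor (a thread pool); arguments must be positional.
    return await loop.run_in_executor(None, _sort_blocking, items, name_only)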

File diff suppressed because it is too large.

View File

@@ -1,10 +1,10 @@
"""Services responsible for recipe metadata analysis."""
from __future__ import annotations
import base64
import io
import os
import re
import tempfile
from dataclasses import dataclass
from typing import Any, Callable, Optional
@@ -13,7 +13,7 @@ import numpy as np
from PIL import Image
from ...utils.utils import calculate_recipe_fingerprint
from ...utils.civitai_utils import rewrite_preview_url
from ...utils.civitai_utils import extract_civitai_image_id, rewrite_preview_url
from .errors import (
RecipeDownloadError,
RecipeNotFoundError,
@@ -69,7 +69,9 @@ class RecipeAnalysisService:
try:
metadata = self._exif_utils.extract_image_metadata(temp_path)
if not metadata:
return AnalysisResult({"error": "No metadata found in this image", "loras": []})
return AnalysisResult(
{"error": "No metadata found in this image", "loras": []}
)
return await self._parse_metadata(
metadata,
@@ -101,11 +103,15 @@ class RecipeAnalysisService:
extension = ".jpg" # Default
try:
civitai_match = re.match(r"https://civitai\.com/images/(\d+)", url)
if civitai_match:
image_info = await civitai_client.get_image_info(civitai_match.group(1))
civitai_image_id = extract_civitai_image_id(url)
if civitai_image_id:
image_info = await civitai_client.get_image_info(
civitai_image_id, source_url=url
)
if not image_info:
raise RecipeDownloadError("Failed to fetch image information from Civitai")
raise RecipeDownloadError(
"Failed to fetch image information from Civitai"
)
image_url = image_info.get("url")
if not image_url:
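extract_civitai_image_id comes from utils.civitai_utils and its implementation is not part of this diff; the regex below is only a guess at the kind of URL matching it replaces from the old inline re.match:

import re
from typing import Optional

def extract_civitai_image_id_sketch(url: str) -> Optional[str]:
    # Illustrative: pull the numeric id out of a civitai image URL.
    match = re.match(r"https://civitai\.com/images/(\d+)", url)
    return match.group(1) if match else None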
@@ -114,13 +120,15 @@ class RecipeAnalysisService:
is_video = image_info.get("type") == "video"
# Use optimized preview URLs if possible
rewritten_url, _ = rewrite_preview_url(image_url, media_type=image_info.get("type"))
rewritten_url, _ = rewrite_preview_url(
image_url, media_type=image_info.get("type")
)
if rewritten_url:
image_url = rewritten_url
if is_video:
# Extract extension from URL
url_path = image_url.split('?')[0].split('#')[0]
url_path = image_url.split("?")[0].split("#")[0]
extension = os.path.splitext(url_path)[1].lower() or ".mp4"
else:
extension = ".jpg"
@@ -135,9 +143,23 @@ class RecipeAnalysisService:
and isinstance(metadata["meta"], dict)
):
metadata = metadata["meta"]
# Include modelVersionIds from root level if available
# Civitai API returns modelVersionIds at root level, not in meta
model_version_ids = image_info.get("modelVersionIds")
if model_version_ids and isinstance(metadata, dict):
metadata["modelVersionIds"] = model_version_ids
# Validate that metadata contains meaningful recipe fields
# If not, treat as None to trigger EXIF extraction from downloaded image
if isinstance(metadata, dict) and not self._has_recipe_fields(metadata):
self._logger.debug(
"Civitai API metadata lacks recipe fields, will extract from EXIF"
)
metadata = None
else:
# Basic extension detection for non-Civitai URLs
url_path = url.split('?')[0].split('#')[0]
url_path = url.split("?")[0].split("#")[0]
extension = os.path.splitext(url_path)[1].lower()
if extension in [".mp4", ".webm"]:
is_video = True
@@ -211,7 +233,9 @@ class RecipeAnalysisService:
image_bytes = self._convert_tensor_to_png_bytes(latest_image)
if image_bytes is None:
raise RecipeValidationError("Cannot handle this data shape from metadata registry")
raise RecipeValidationError(
"Cannot handle this data shape from metadata registry"
)
return AnalysisResult(
{
@@ -222,6 +246,22 @@ class RecipeAnalysisService:
# Internal helpers -------------------------------------------------
def _has_recipe_fields(self, metadata: dict[str, Any]) -> bool:
"""Check if metadata contains meaningful recipe-related fields."""
recipe_fields = {
"prompt",
"negative_prompt",
"resources",
"hashes",
"params",
"generationData",
"Workflow",
"prompt_type",
"positive",
"negative",
}
return any(field in metadata for field in recipe_fields)
async def _parse_metadata(
self,
metadata: dict[str, Any],
@@ -234,7 +274,12 @@ class RecipeAnalysisService:
) -> AnalysisResult:
parser = self._recipe_parser_factory.create_parser(metadata)
if parser is None:
payload = {"error": "No parser found for this image", "loras": []}
# Provide more specific error message based on metadata source
if not metadata:
error_msg = "This image does not contain any generation metadata (prompt, models, or parameters)"
else:
error_msg = "No parser found for this image"
payload = {"error": error_msg, "loras": []}
if include_image_base64 and image_path:
payload["image_base64"] = self._encode_file(image_path)
payload["is_video"] = is_video
@@ -257,7 +302,9 @@ class RecipeAnalysisService:
matching_recipes: list[str] = []
if fingerprint:
matching_recipes = await recipe_scanner.find_recipes_by_fingerprint(fingerprint)
matching_recipes = await recipe_scanner.find_recipes_by_fingerprint(
fingerprint
)
result["matching_recipes"] = matching_recipes
return AnalysisResult(result)
@@ -269,7 +316,10 @@ class RecipeAnalysisService:
raise RecipeDownloadError(f"Failed to download image from URL: {result}")
def _metadata_not_found_response(self, path: str) -> AnalysisResult:
payload: dict[str, Any] = {"error": "No metadata found in this image", "loras": []}
payload: dict[str, Any] = {
"error": "No metadata found in this image",
"loras": [],
}
if os.path.exists(path):
payload["image_base64"] = self._encode_file(path)
return AnalysisResult(payload)
@@ -305,7 +355,9 @@ class RecipeAnalysisService:
if hasattr(tensor_image, "shape"):
self._logger.debug(
"Tensor shape: %s, dtype: %s", tensor_image.shape, getattr(tensor_image, "dtype", None)
"Tensor shape: %s, dtype: %s",
tensor_image.shape,
getattr(tensor_image, "dtype", None),
)
import torch # type: ignore[import-not-found]

View File

@@ -12,6 +12,7 @@ from dataclasses import dataclass
from typing import Any, Dict, Iterable, Optional
from ...config import config
from ...recipes.constants import GEN_PARAM_KEYS
from ...utils.utils import calculate_recipe_fingerprint
from .errors import RecipeNotFoundError, RecipeValidationError
@@ -90,23 +91,7 @@ class RecipePersistenceService:
current_time = time.time()
loras_data = [self._normalise_lora_entry(lora) for lora in (metadata.get("loras") or [])]
checkpoint_entry = self._sanitize_checkpoint_entry(self._extract_checkpoint_entry(metadata))
gen_params = metadata.get("gen_params") or {}
if not gen_params and "raw_metadata" in metadata:
raw_metadata = metadata.get("raw_metadata", {})
gen_params = {
"prompt": raw_metadata.get("prompt", ""),
"negative_prompt": raw_metadata.get("negative_prompt", ""),
"steps": raw_metadata.get("steps", ""),
"sampler": raw_metadata.get("sampler", ""),
"cfg_scale": raw_metadata.get("cfg_scale", ""),
"seed": raw_metadata.get("seed", ""),
"size": raw_metadata.get("size", ""),
"clip_skip": raw_metadata.get("clip_skip", ""),
}
# Drop checkpoint duplication from generation parameters to store it only at top level
gen_params.pop("checkpoint", None)
gen_params = self._sanitize_gen_params_for_storage(metadata)
fingerprint = calculate_recipe_fingerprint(loras_data)
recipe_data: Dict[str, Any] = {
@@ -133,6 +118,7 @@ class RecipePersistenceService:
json_filename = f"{recipe_id}.recipe.json"
json_path = os.path.join(recipes_dir, json_filename)
json_path = os.path.normpath(json_path)
with open(json_path, "w", encoding="utf-8") as file_obj:
json.dump(recipe_data, file_obj, indent=4, ensure_ascii=False)
@@ -152,6 +138,30 @@ class RecipePersistenceService:
}
)
@staticmethod
def _sanitize_gen_params_for_storage(metadata: dict[str, Any]) -> dict[str, Any]:
gen_params = metadata.get("gen_params")
if isinstance(gen_params, dict) and gen_params:
source = gen_params
else:
source = metadata.get("raw_metadata")
if not isinstance(source, dict):
return {}
allowed_keys = set(GEN_PARAM_KEYS)
sanitized: dict[str, Any] = {}
for key in allowed_keys:
if key not in source:
continue
value = source.get(key)
if value in (None, ""):
continue
sanitized[key] = value
sanitized.pop("checkpoint", None)
return sanitized
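A hedged usage sketch of the sanitizer: which keys survive depends on GEN_PARAM_KEYS (defined in recipes.constants and not shown here), empty values are skipped, and checkpoint is always dropped because it is stored at the top level.

# Hypothetical input; assumes RecipePersistenceService is imported.
metadata = {
    "raw_metadata": {
        "prompt": "a castle at dusk",
        "steps": 30,
        "sampler": "",          # dropped: empty values are skipped
        "checkpoint": "sd_xl",  # dropped: checkpoint lives at the top level
        "unrelated": "ignored", # dropped unless listed in GEN_PARAM_KEYS
    }
}
gen_params = RecipePersistenceService._sanitize_gen_params_for_storage(metadata)
# -> {"prompt": "a castle at dusk", "steps": 30}, if those keys are in GEN_PARAM_KEYS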
async def delete_recipe(self, *, recipe_scanner, recipe_id: str) -> PersistenceResult:
"""Delete an existing recipe."""
@@ -173,11 +183,23 @@ class RecipePersistenceService:
async def update_recipe(self, *, recipe_scanner, recipe_id: str, updates: dict[str, Any]) -> PersistenceResult:
"""Update persisted metadata for a recipe."""
if not any(key in updates for key in ("title", "tags", "source_path", "preview_nsfw_level", "favorite")):
allowed_fields = (
"title",
"tags",
"source_path",
"preview_nsfw_level",
"favorite",
"gen_params",
)
if not any(key in updates for key in allowed_fields):
raise RecipeValidationError(
"At least one field to update must be provided (title or tags or source_path or preview_nsfw_level or favorite)"
"At least one field to update must be provided (title or tags or source_path or preview_nsfw_level or favorite or gen_params)"
)
if "gen_params" in updates and not isinstance(updates["gen_params"], dict):
raise RecipeValidationError("gen_params must be an object")
success = await recipe_scanner.update_recipe_metadata(recipe_id, updates)
if not success:
raise RecipeNotFoundError("Recipe not found or update failed")
@@ -486,6 +508,10 @@ class RecipePersistenceService:
most_common_base_model = (
max(base_model_counts.items(), key=lambda item: item[1])[0] if base_model_counts else ""
)
checkpoint_entry = await self._build_widget_checkpoint_entry(
recipe_scanner,
metadata.get("checkpoint"),
)
recipe_data = {
"id": recipe_id,
@@ -493,9 +519,8 @@ class RecipePersistenceService:
"title": recipe_name,
"modified": time.time(),
"created_date": time.time(),
"base_model": most_common_base_model,
"base_model": most_common_base_model or (checkpoint_entry or {}).get("baseModel", ""),
"loras": loras_data,
"checkpoint": self._sanitize_checkpoint_entry(metadata.get("checkpoint", "")),
"gen_params": {
key: value
for key, value in metadata.items()
@@ -503,6 +528,8 @@ class RecipePersistenceService:
},
"loras_stack": lora_stack,
}
if checkpoint_entry:
recipe_data["checkpoint"] = checkpoint_entry
json_filename = f"{recipe_id}.recipe.json"
json_path = os.path.join(recipes_dir, json_filename)
@@ -524,6 +551,91 @@ class RecipePersistenceService:
# Helper methods ---------------------------------------------------
async def _build_widget_checkpoint_entry(
self,
recipe_scanner,
checkpoint_raw: Any,
) -> Optional[dict[str, Any]]:
"""Build recipe checkpoint metadata from widget generation metadata."""
if isinstance(checkpoint_raw, dict):
return self._sanitize_checkpoint_entry(checkpoint_raw)
if not isinstance(checkpoint_raw, str):
return None
checkpoint_name = checkpoint_raw.strip()
if not checkpoint_name:
return None
file_name = os.path.splitext(os.path.basename(checkpoint_name))[0]
checkpoint_info = await self._lookup_widget_checkpoint(
recipe_scanner,
checkpoint_name,
)
if not checkpoint_info:
return {
"type": "checkpoint",
"name": checkpoint_name,
"file_name": file_name,
"hash": "",
}
civitai = checkpoint_info.get("civitai") or {}
civitai_model = civitai.get("model") or {}
file_path = checkpoint_info.get("file_path") or checkpoint_info.get("path") or ""
cached_file_name = (
checkpoint_info.get("file_name")
or (os.path.splitext(os.path.basename(file_path))[0] if file_path else "")
or file_name
)
return {
"type": "checkpoint",
"modelId": civitai_model.get("id", 0),
"modelVersionId": civitai.get("id", 0),
"name": civitai_model.get("name") or checkpoint_info.get("model_name") or checkpoint_name,
"version": civitai.get("name", ""),
"hash": (checkpoint_info.get("sha256") or checkpoint_info.get("hash") or "").lower(),
"file_name": cached_file_name,
"modelName": civitai_model.get("name", ""),
"modelVersionName": civitai.get("name", ""),
"baseModel": checkpoint_info.get("base_model") or civitai.get("baseModel", ""),
}
async def _lookup_widget_checkpoint(
self,
recipe_scanner,
checkpoint_name: str,
) -> Optional[dict[str, Any]]:
lookup = getattr(recipe_scanner, "get_local_checkpoint", None)
if not callable(lookup):
return None
candidates = []
for candidate in (
checkpoint_name,
os.path.basename(checkpoint_name),
os.path.splitext(os.path.basename(checkpoint_name))[0],
):
if candidate and candidate not in candidates:
candidates.append(candidate)
for candidate in candidates:
try:
checkpoint_info = await lookup(candidate)
except Exception as exc:
self._logger.debug(
"Failed to lookup checkpoint %s while saving widget recipe: %s",
candidate,
exc,
)
continue
if checkpoint_info:
return checkpoint_info
return None
def _extract_checkpoint_entry(self, metadata: dict[str, Any]) -> Optional[dict[str, Any]]:
"""Pull a checkpoint entry from various metadata locations."""

View File

@@ -159,10 +159,51 @@ class ServiceRegistry:
return cls._services[service_name]
from .model_update_service import ModelUpdateService
from .persistent_model_cache import get_persistent_cache
from .settings_manager import get_settings_manager
cache = get_persistent_cache()
service = ModelUpdateService(cache.get_database_path())
service = ModelUpdateService(settings_manager=get_settings_manager())
cls._services[service_name] = service
logger.debug(f"Created and registered {service_name}")
return service
@classmethod
async def get_downloaded_version_history_service(cls):
"""Get or create the downloaded-version history service."""
service_name = "downloaded_version_history_service"
if service_name in cls._services:
return cls._services[service_name]
async with cls._get_lock(service_name):
if service_name in cls._services:
return cls._services[service_name]
from .downloaded_version_history_service import (
DownloadedVersionHistoryService,
)
service = DownloadedVersionHistoryService()
cls._services[service_name] = service
logger.debug(f"Created and registered {service_name}")
return service
@classmethod
async def get_backup_service(cls):
"""Get or create the backup service."""
service_name = "backup_service"
if service_name in cls._services:
return cls._services[service_name]
async with cls._get_lock(service_name):
if service_name in cls._services:
return cls._services[service_name]
from .backup_service import BackupService
service = await BackupService.get_instance()
cls._services[service_name] = service
logger.debug(f"Created and registered {service_name}")
return service
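Both new accessors follow the registry's existing shape: an unlocked fast path, then a per-service asyncio lock with a second check so concurrent callers never construct two instances. A condensed sketch of that pattern (names are illustrative):

import asyncio
from typing import Any, Callable, Dict

class LazyRegistrySketch:
    _services: Dict[str, Any] = {}
    _locks: Dict[str, asyncio.Lock] = {}

    @classmethod
    async def get(cls, name: str, factory: Callable[[], Any]) -> Any:
        if name in cls._services:          # fast path, no lock
            return cls._services[name]
        lock = cls._locks.setdefault(name, asyncio.Lock())
        async with lock:
            if name in cls._services:      # re-check under the lock
                return cls._services[name]
            service = factory()
            cls._services[name] = service
            return service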

Some files were not shown because too many files have changed in this diff.