mirror of
https://github.com/willmiao/ComfyUI-Lora-Manager.git
synced 2026-03-22 05:32:12 -03:00
Compare commits — 80 commits (SHA1 only; author and date columns empty in this view):

0968698804, a5b2e9b0bf, 5a6ff444b9, 3bb240d3c1, ee0d241c75, 321ff72953, 412f1e62a1, 8901b32a55, 8ab6cc72ad, 52e671638b, a3070f8d82, 3fde474583, 1454991d6d, 4398851bb9, 5173aa6c20, 3d98572a62, c48095d9c6, 1e4d1b8f15, 8c037465ba, 055c1ca0d4, 27370df93a, 60d23aa238, 5e441d9c4f, eb76468280, 01bbaa31a8, bddf023dc4, 8e69a247ed, 97141b01e1, acf610ddff, a9a6f66035, 0040863a03, 4ab86b4ae2, b32b4b4042, 4e552dcf3e, 8f4c02efdc, b77c596f3a, 181f0b5626, 480e5d966f, e8636b949d, 8ea369db47, ec9b37eb53, b0847f6b87, 84d10b1f3b, 4fdc97d062, 5fe5e7ea54, 7be1a2bd65, 87842385c6, 1dc189eb39, 6120922204, ddb30dbb17, 1e8bd88e28, c3a66ecf28, 1f60160e8b, 7d560bf07a, 47da9949d9, 68c0a5ba71, 1aa81c803b, 8f5e134d3e, ef03a2a917, e275968553, 76d3aa2b5b, c9a65c7347, f542ade628, d2c2bfbe6a, 2b6910bd55, b1dd733493, 5dcf0a1e48, cf357b57fc, 4e1773833f, 8cf762ffd3, d997eaa429, 8e51f0f19f, f0e246b4ac, a232997a79, 08a449db99, 0c023c9888, 0ad92d00b3, a726cbea1e, c53fa8692b, 3118f3b43c
IFLOW.md — new file, 103 lines
@@ -0,0 +1,103 @@
# ComfyUI LoRA Manager - iFlow Context

## Project Overview

ComfyUI LoRA Manager is a comprehensive toolset that streamlines organizing, downloading, and applying LoRA models in ComfyUI. It provides powerful features such as recipe management, checkpoint organization, and one-click workflow integration, making model operations faster, smoother, and simpler.

The project is a web application that combines a Python backend with a JavaScript frontend. It can run either as a ComfyUI custom node or as a standalone application.

## Project Structure

```
D:\Workspace\ComfyUI\custom_nodes\ComfyUI-Lora-Manager\
├── py/                      # Python backend code
│   ├── config.py            # Global configuration
│   ├── lora_manager.py      # Main entry point
│   ├── controllers/         # Controllers
│   ├── metadata_collector/  # Metadata collector
│   ├── middleware/          # Middleware
│   ├── nodes/               # ComfyUI nodes
│   ├── recipes/             # Recipe-related code
│   ├── routes/              # API routes
│   ├── services/            # Business-logic services
│   ├── utils/               # Utility functions
│   └── validators/          # Validators
├── static/                  # Static assets (CSS, JS, images)
├── templates/               # HTML templates
├── locales/                 # Internationalization files
├── tests/                   # Test code
├── standalone.py            # Standalone-mode entry point
├── requirements.txt         # Python dependencies
├── package.json             # Node.js dependencies and scripts
└── README.md                # Project documentation
```

## Core Components

### Backend (Python)

- **Main entry points**: `py/lora_manager.py` and `standalone.py`
- **Configuration**: `py/config.py` manages global configuration and paths
- **Routes**: the `py/routes/` directory contains the various API routes
- **Services**: the `py/services/` directory contains business logic such as model scanning and download management
- **Model management**: a `ModelServiceFactory` manages the different model types (LoRA, Checkpoint, Embedding)

### Frontend (JavaScript)

- **Build tooling**: Node.js and npm handle dependency management and testing
- **Testing**: Vitest is used for frontend tests

## Build and Run

### Install Dependencies

```bash
# Python dependencies
pip install -r requirements.txt

# Node.js dependencies (for testing)
npm install
```

### Run (ComfyUI mode)

After installing it as a ComfyUI custom node, simply start ComfyUI.

### Run (standalone mode)

```bash
# Run with the default configuration
python standalone.py

# Specify host and port
python standalone.py --host 127.0.0.1 --port 9000
```

### Tests

#### Backend tests

```bash
# Install development dependencies
pip install -r requirements-dev.txt

# Run the tests
pytest
```

#### Frontend tests

```bash
# Run the tests
npm run test

# Run the tests and generate a coverage report
npm run test:coverage
```

## Development Conventions

- **Code style**: Python code should follow PEP 8
- **Testing**: new features should include corresponding unit tests
- **Configuration**: user configuration lives in a `settings.json` file
- **Logging**: use the Python standard-library `logging` module
README.md
@@ -34,15 +34,16 @@ Enhance your Civitai browsing experience with our companion browser extension! S
 ## Release Notes

-### v0.9.6
-* **Critical Performance Optimization** - Introduced persistent model cache that dramatically accelerates initialization after startup and significantly reduces Python backend memory footprint for improved application performance.
-* **Cross-Browser Settings Synchronization** - Migrated nearly all settings to the backend, ensuring your preferences sync automatically across all browsers for a seamless multi-browser experience.
-* **Protected User Settings Location** - Relocated user settings (settings.json) to the user config directory (accessible via the link icon in Settings), preventing accidental deletion during reinstalls or updates.
-* **Global Context Menu** - Added a new global context menu accessible by right-clicking on empty page areas, providing quick access to global operations with more features coming in future updates.
-* **Multi-Library Support** - Introduced support for managing multiple libraries, allowing you to easily switch between different model collections (advanced usage, documentation in progress).
-* **Bug Fixes & Stability Improvements** - Various bug fixes and enhancements for improved stability and reliability.
+### v0.9.8
+* **Full CivArchive API Support** - Added complete support for the CivArchive API as a fallback metadata source beyond the Civitai API. Models deleted from Civitai can now still retrieve metadata through the CivArchive API.
+* **Download Models from CivArchive** - Added support for downloading models directly from CivArchive, similar to downloading from Civitai. Simply click the Download button and paste the model URL to download the corresponding model.
+* **Custom Priority Tags** - Introduced the Custom Priority Tags feature, allowing users to define custom priority tags. These tags appear as suggestions when editing tags or during auto organization/download using default paths, providing more precise and controlled folder organization. [Guide](https://github.com/willmiao/ComfyUI-Lora-Manager/wiki/Priority-Tags-Configuration-Guide)
+* **Drag and Drop Tag Reordering** - Added drag-and-drop reordering of tags in tag edit mode for improved usability.
+* **Download Control in Example Images Panel** - Added a stop control to the Download Example Images panel for better download management.
+* **Prompt (LoraManager) Node with Autocomplete** - Added a new Prompt (LoraManager) node with autocomplete for adding embeddings.
+* **Lora Manager Nodes in Subgraphs** - Lora Manager nodes now support being placed within subgraphs for more flexible workflow organization.

-### v0.9.3
+### v0.9.6
+* **Metadata Archive Database Support** - Added the ability to download and use a metadata archive database, enabling access to metadata for models that have been deleted from CivitAI.
+* **App-Level Proxy Settings** - Introduced support for configuring a global proxy within the application, making it easier to use the manager behind network restrictions.
+* **Bug Fixes** - Various bug fixes for improved stability and reliability.
__init__.py
@@ -2,6 +2,7 @@ try: # pragma: no cover - import fallback for pytest collection
     from .py.lora_manager import LoraManager
     from .py.nodes.lora_loader import LoraManagerLoader, LoraManagerTextLoader
     from .py.nodes.trigger_word_toggle import TriggerWordToggle
+    from .py.nodes.prompt import PromptLoraManager
     from .py.nodes.lora_stacker import LoraStacker
     from .py.nodes.save_image import SaveImage
     from .py.nodes.debug_metadata import DebugMetadata
@@ -17,6 +18,7 @@ except ImportError: # pragma: no cover - allows running under pytest without pa
     if str(package_root) not in sys.path:
         sys.path.append(str(package_root))

+    PromptLoraManager = importlib.import_module("py.nodes.prompt").PromptLoraManager
     LoraManager = importlib.import_module("py.lora_manager").LoraManager
     LoraManagerLoader = importlib.import_module("py.nodes.lora_loader").LoraManagerLoader
     LoraManagerTextLoader = importlib.import_module("py.nodes.lora_loader").LoraManagerTextLoader
@@ -29,6 +31,7 @@ except ImportError: # pragma: no cover - allows running under pytest without pa
     init_metadata_collector = importlib.import_module("py.metadata_collector").init

 NODE_CLASS_MAPPINGS = {
+    PromptLoraManager.NAME: PromptLoraManager,
     LoraManagerLoader.NAME: LoraManagerLoader,
     LoraManagerTextLoader.NAME: LoraManagerTextLoader,
     TriggerWordToggle.NAME: TriggerWordToggle,
docs/custom_priority_tags_format.md — new file, 46 lines
@@ -0,0 +1,46 @@
# Custom Priority Tag Format Proposal

To support user-defined priority tags with flexible aliasing across different model types, the configuration will be stored as editable strings. The format balances readability with enough structure for parsing on both the backend and frontend.

## Format Overview

- Each model type is declared on its own line: `model_type: entries`.
- Entries are comma-separated and ordered by priority from highest to lowest.
- An entry may be a single canonical tag (e.g., `realistic`) or a canonical tag with aliases.
- Canonical tags define the final folder name that should be used when matching that entry.
- Aliases are enclosed in parentheses and separated by `|` (vertical bar).
- All matching is case-insensitive; stored canonical names preserve the user-specified casing for folder creation and UI suggestions.

### Grammar

```
priority-config := model-config { "\n" model-config }
model-config := model-type ":" entry-list
model-type := <identifier without spaces>
entry-list := entry { "," entry }
entry := canonical [ "(" alias { "|" alias } ")" ]
canonical := <tag text without parentheses or commas>
alias := <tag text without parentheses, commas, or pipes>
```

Examples:

```
lora: celebrity(celeb|celebrity), stylized, character(char)
checkpoint: realistic(realism|realistic), anime(anime-style|toon)
embedding: face, celeb(celebrity|celeb)
```

## Parsing Notes

- Whitespace around separators is ignored to make manual editing more forgiving.
- Duplicate canonical tags within the same model type collapse to a single entry; the first definition wins.
- Aliases map to their canonical tag. When generating folder names, the canonical form is used.
- Tags that do not match any alias or canonical entry fall back to the first tag in the model's tag list, preserving current behavior.

## Usage

- **Backend:** Convert each model type's string into an ordered list of canonical tags with alias sets. During path generation, iterate by priority order and match tags against both canonical names and their aliases.
- **Frontend:** Surface canonical tags as suggestions, optionally displaying aliases in tooltips or secondary text. Input validation should warn about duplicate aliases within the same model type.

This format allows users to customize priority tag handling per model type while keeping editing simple and avoiding proliferation of folder names through alias normalization.
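The grammar and parsing notes above can be sketched in a few lines of Python. The following is a minimal illustration, not the project's actual parser; the `Entry` dataclass and the `parse_priority_config` name are invented for this example:

```python
import re
from dataclasses import dataclass, field

@dataclass
class Entry:
    canonical: str
    aliases: set[str] = field(default_factory=set)

def parse_priority_config(text: str) -> dict[str, list[Entry]]:
    """Parse `model_type: canonical(alias|alias), ...` lines into ordered entries."""
    config: dict[str, list[Entry]] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue
        model_type, _, entry_list = line.partition(":")
        entries: list[Entry] = []
        seen: set[str] = set()
        for raw in entry_list.split(","):
            raw = raw.strip()
            if not raw:
                continue
            # canonical text, optionally followed by (alias|alias|...)
            m = re.fullmatch(r"([^()|]+?)\s*(?:\(([^()]*)\))?", raw)
            if not m:
                continue  # malformed entry; a real validator would report it
            canonical = m.group(1).strip()
            # Duplicate canonical tags collapse; the first definition wins.
            if canonical.lower() in seen:
                continue
            seen.add(canonical.lower())
            aliases = {a.strip().lower() for a in (m.group(2) or "").split("|") if a.strip()}
            entries.append(Entry(canonical, aliases))
        config[model_type.strip()] = entries
    return config

cfg = parse_priority_config("lora: celebrity(celeb|celebrity), stylized, character(char)")
# first entry: canonical 'celebrity' with the alias set {'celeb', 'celebrity'}
```

Storing aliases lowercased while keeping the canonical string as typed matches the case-insensitive matching rule above.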
docs/priority_tags_help.md — new file, 71 lines
@@ -0,0 +1,71 @@
# Priority Tags Configuration Guide

This guide explains how to tailor the tag priority order that powers folder naming and tag suggestions in the LoRA Manager. You only need to edit the comma-separated list of entries shown in the **Priority Tags** field for each model type.

## 1. Pick the Model Type

In the **Priority Tags** dialog you will find one tab per model type (LoRA, Checkpoint, Embedding). Select the tab you want to update; changes on one tab do not affect the others.

## 2. Edit the Entry List

Inside the textarea you will see a line similar to:

```
character, concept, style(toon|toon_style)
```

This entire line is the **entry list**. Replace it with your own ordered list.

### Entry Rules

Each entry is separated by a comma, in order from highest to lowest priority:

- **Canonical tag only:** `realistic`
- **Canonical tag with aliases:** `character(char|chars)`

Aliases live inside `()` and are separated with `|`. The canonical name is what appears in folder names and UI suggestions when any of the aliases are detected. Matching is case-insensitive.

## Use `{first_tag}` in Path Templates

When your path template contains `{first_tag}`, the app picks a folder name based on your priority list and the model’s own tags:

- It checks the priority list from top to bottom. If a canonical tag or any of its aliases appear in the model tags, that canonical name becomes the folder name.
- If no priority tags are found but the model has tags, the very first model tag is used.
- If the model has no tags at all, the folder falls back to `no tags`.

### Example

With a template like `/{model_type}/{first_tag}` and the priority entry list `character(char|chars), style(anime|toon)`:

| Model Tags | Folder Name | Why |
| --- | --- | --- |
| `["chars", "female"]` | `character` | `chars` matches the `character` alias, so the canonical wins. |
| `["anime", "portrait"]` | `style` | `anime` hits the `style` entry, so its canonical label is used. |
| `["portrait", "bw"]` | `portrait` | No priority match, so the first model tag is used. |
| `[]` | `no tags` | Nothing to match, so the fallback is applied. |

## 3. Save the Settings

After editing the entry list, press **Enter** to save. Use **Shift+Enter** whenever you need a new line. Clicking outside the field also saves automatically. A success toast confirms the update.

## Examples

| Goal | Entry List |
| --- | --- |
| Prefer people over styles | `character, portraits, style(anime\|toon)` |
| Group sci-fi variants | `sci-fi(scifi\|science_fiction), cyberpunk(cyber\|punk)` |
| Alias shorthand tags | `realistic(real\|realisim), photorealistic(photo_real)` |

## Tips

- Keep canonical names short and meaningful—they become folder names.
- Place the most important categories first; the first match wins.
- Avoid duplicate canonical names within the same list; only the first instance is used.

## Troubleshooting

- **Unexpected folder name?** Check that the canonical name you want is placed before other matches.
- **Alias not working?** Ensure the alias is inside parentheses and separated with `|`, e.g. `character(char|chars)`.
- **Validation error?** Look for missing parentheses or stray commas. Each entry must follow the `canonical(alias|alias)` pattern or just `canonical`.

With these basics you can quickly adapt Priority Tags to match your library’s organization style.
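The `{first_tag}` resolution rules described above can be sketched as follows. This is a minimal stand-in, not the manager's actual code; the `resolve_first_tag` helper and the `(canonical, aliases)` pair representation are assumptions for illustration:

```python
def resolve_first_tag(model_tags, priority_list):
    """Pick a folder name: first priority match wins, else first model tag, else 'no tags'.

    priority_list is an ordered list of (canonical, aliases) pairs, e.g. the
    parsed form of `character(char|chars), style(anime|toon)`.
    """
    lowered = [t.lower() for t in model_tags]
    for canonical, aliases in priority_list:
        names = {canonical.lower()} | {a.lower() for a in aliases}
        if any(t in names for t in lowered):
            return canonical      # the canonical name becomes the folder name
    if model_tags:
        return model_tags[0]      # no priority match: first model tag
    return "no tags"              # no tags at all: documented fallback

priority = [("character", {"char", "chars"}), ("style", {"anime", "toon"})]
```

Run against the table in the example section, this stand-in reproduces the same four outcomes.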
docs/testing/coverage_analysis.md — new file, 26 lines
@@ -0,0 +1,26 @@
# Backend Test Coverage Notes

## Pytest Execution
- Command: `python -m pytest`
- Result: All 283 collected tests passed in the current environment.
- Coverage tooling (`pytest-cov`/`coverage`) is unavailable in the offline sandbox, so line-level metrics could not be generated. The earlier attempt to install `pytest-cov` failed because the package index cannot be reached from the container.

## High-Priority Gaps to Address

### 1. Standalone server bootstrapping
* **Source:** [`standalone.py`](../../standalone.py)
* **Why it matters:** The standalone entry point wires together the aiohttp application, static asset routes, model-route registration, and configuration validation. None of these behaviours are covered by automated tests, leaving regressions in bootstrapping logic undetected.
* **Suggested coverage:** Add integration-style tests that instantiate `StandaloneServer`/`StandaloneLoraManager` with temporary settings and assert that routes (HTTP + websocket) are registered, configuration warnings fire for missing paths, and the mock ComfyUI shims behave as expected.

### 2. Model service registration factory
* **Source:** [`py/services/model_service_factory.py`](../../py/services/model_service_factory.py)
* **Why it matters:** The factory coordinates which model services and routes the API exposes, including error handling when unknown model types are requested. No current tests verify registration, memoization of route instances, or the logging path on failures.
* **Suggested coverage:** Unit tests that exercise `register_model_type`, `get_route_instance`, error branches in `get_service_class`/`get_route_class`, and `setup_all_routes` when a route setup raises. Use lightweight fakes to confirm the logger is called and state is cleared via `clear_registrations`.

### 3. Server-side i18n helper
* **Source:** [`py/services/server_i18n.py`](../../py/services/server_i18n.py)
* **Why it matters:** Template rendering relies on the `ServerI18nManager` to load locale JSON, perform key lookups, and format parameters. The fallback logic (dot-notation lookup, English fallbacks, placeholder substitution) is untested, so malformed locale files or regressions in placeholder handling would slip through.
* **Suggested coverage:** Tests that load fixture locale dictionaries, assert `set_locale` fallbacks, verify nested key resolution and placeholder substitution, and ensure missing keys return the original identifier.

## Next Steps
Prioritize creating focused unit tests around these modules, then re-run pytest once coverage tooling is available to confirm the new tests close the identified gaps.
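For gap 3, a fixture-driven test could take the shape below. The `lookup` function is a simplified stand-in that mimics the dot-notation lookup, English fallback, and placeholder substitution described above; it is not the real `ServerI18nManager` API:

```python
def lookup(translations, key, fallback, params=None):
    """Stand-in: dot-notation lookup with English fallback and placeholder substitution."""
    def resolve(table):
        node = table
        for part in key.split("."):
            if not isinstance(node, dict) or part not in node:
                return None
            node = node[part]
        return node if isinstance(node, str) else None

    text = resolve(translations)
    if text is None:
        text = resolve(fallback)   # fall back to the English table
    if text is None:
        return key                 # missing keys return the original identifier
    for name, value in (params or {}).items():
        text = text.replace("{" + name + "}", str(value))
    return text


def test_nested_lookup_with_fallback_and_params():
    de = {"settings": {"priorityTags": {"saveSuccess": "Prioritäts-Tags aktualisiert."}}}
    en = {"settings": {"priorityTags": {"saveError": "Failed: {error}"}}}
    # nested key found in the active locale
    assert lookup(de, "settings.priorityTags.saveSuccess", en) == "Prioritäts-Tags aktualisiert."
    # key missing in the active locale falls back to English, with substitution
    assert lookup(de, "settings.priorityTags.saveError", en, {"error": "x"}) == "Failed: x"
    # key missing everywhere returns the identifier
    assert lookup(de, "settings.missing.key", en) == "settings.missing.key"
```

The same three assertions, pointed at the real manager with fixture locale files, would cover the fallback paths listed above.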
Locale diff (German):

@@ -32,7 +32,7 @@
 "korean": "한국어",
 "french": "Français",
 "spanish": "Español",
-"Hebrew": "עברית"
+"Hebrew": "עברית"
 },
 "fileSize": {
 "zero": "0 Bytes",
@@ -199,6 +199,7 @@
 "videoSettings": "Video-Einstellungen",
 "layoutSettings": "Layout-Einstellungen",
 "folderSettings": "Ordner-Einstellungen",
+"priorityTags": "Prioritäts-Tags",
 "downloadPathTemplates": "Download-Pfad-Vorlagen",
 "exampleImages": "Beispielbilder",
 "misc": "Verschiedenes",
@@ -224,9 +225,9 @@
 },
 "displayDensityHelp": "Wählen Sie, wie viele Karten pro Zeile angezeigt werden sollen:",
 "displayDensityDetails": {
-"default": "Standard: 5 (1080p), 6 (2K), 8 (4K)",
-"medium": "Mittel: 6 (1080p), 7 (2K), 9 (4K)",
-"compact": "Kompakt: 7 (1080p), 8 (2K), 10 (4K)"
+"default": "5 (1080p), 6 (2K), 8 (4K)",
+"medium": "6 (1080p), 7 (2K), 9 (4K)",
+"compact": "7 (1080p), 8 (2K), 10 (4K)"
 },
 "displayDensityWarning": "Warnung: Höhere Dichten können bei Systemen mit begrenzten Ressourcen zu Performance-Problemen führen.",
 "cardInfoDisplay": "Karten-Info-Anzeige",
@@ -236,8 +237,18 @@
 },
 "cardInfoDisplayHelp": "Wählen Sie, wann Modellinformationen und Aktionsschaltflächen angezeigt werden sollen:",
 "cardInfoDisplayDetails": {
-"always": "Immer sichtbar: Kopf- und Fußzeilen sind immer sichtbar",
-"hover": "Bei Hover anzeigen: Kopf- und Fußzeilen erscheinen nur beim Darüberfahren mit der Maus"
+"always": "Kopf- und Fußzeilen sind immer sichtbar",
+"hover": "Kopf- und Fußzeilen erscheinen nur beim Darüberfahren mit der Maus"
 },
+"modelNameDisplay": "Anzeige des Modellnamens",
+"modelNameDisplayOptions": {
+"modelName": "Modellname",
+"fileName": "Dateiname"
+},
+"modelNameDisplayHelp": "Wählen Sie aus, was in der Fußzeile der Modellkarte angezeigt werden soll:",
+"modelNameDisplayDetails": {
+"modelName": "Den beschreibenden Namen des Modells anzeigen",
+"fileName": "Den tatsächlichen Dateinamen auf der Festplatte anzeigen"
+}
 },
 "folderSettings": {
@@ -253,6 +264,26 @@
 "defaultEmbeddingRootHelp": "Legen Sie den Standard-Embedding-Stammordner für Downloads, Importe und Verschiebungen fest",
 "noDefault": "Kein Standard"
 },
+"priorityTags": {
+"title": "Prioritäts-Tags",
+"description": "Passen Sie die Tag-Prioritätsreihenfolge für jeden Modelltyp an (z. B. character, concept, style(toon|toon_style))",
+"placeholder": "character, concept, style(toon|toon_style)",
+"helpLinkLabel": "Prioritäts-Tags-Hilfe öffnen",
+"modelTypes": {
+"lora": "LoRA",
+"checkpoint": "Checkpoint",
+"embedding": "Embedding"
+},
+"saveSuccess": "Prioritäts-Tags aktualisiert.",
+"saveError": "Prioritäts-Tags konnten nicht aktualisiert werden.",
+"loadingSuggestions": "Lade Vorschläge...",
+"validation": {
+"missingClosingParen": "Eintrag {index} fehlt eine schließende Klammer.",
+"missingCanonical": "Eintrag {index} muss einen kanonischen Tag-Namen enthalten.",
+"duplicateCanonical": "Der kanonische Tag \"{tag}\" kommt mehrfach vor.",
+"unknown": "Ungültige Prioritäts-Tag-Konfiguration."
+}
+},
 "downloadPathTemplates": {
 "title": "Download-Pfad-Vorlagen",
 "help": "Konfigurieren Sie Ordnerstrukturen für verschiedene Modelltypen beim Herunterladen von Civitai.",
@@ -529,13 +560,19 @@
 "title": "Embedding-Modelle"
 },
 "sidebar": {
-"modelRoot": "Modell-Stammverzeichnis",
+"modelRoot": "Stammverzeichnis",
 "collapseAll": "Alle Ordner einklappen",
 "pinSidebar": "Sidebar anheften",
 "unpinSidebar": "Sidebar lösen",
 "switchToListView": "Zur Listenansicht wechseln",
 "switchToTreeView": "Zur Baumansicht wechseln",
-"collapseAllDisabled": "Im Listenmodus nicht verfügbar"
+"recursiveOn": "Unterordner durchsuchen",
+"recursiveOff": "Nur aktuellen Ordner durchsuchen",
+"recursiveUnavailable": "Rekursive Suche ist nur in der Baumansicht verfügbar",
+"collapseAllDisabled": "Im Listenmodus nicht verfügbar",
+"dragDrop": {
+"unableToResolveRoot": "Zielpfad für das Verschieben konnte nicht ermittelt werden."
+}
 },
 "statistics": {
 "title": "Statistiken",
@@ -610,6 +647,14 @@
 "downloadedPreview": "Vorschaubild heruntergeladen",
 "downloadingFile": "{type}-Datei wird heruntergeladen",
 "finalizing": "Download wird abgeschlossen..."
 },
+"progress": {
+"currentFile": "Aktuelle Datei:",
+"downloading": "Wird heruntergeladen: {name}",
+"transferred": "Heruntergeladen: {downloaded} / {total}",
+"transferredSimple": "Heruntergeladen: {downloaded}",
+"transferredUnknown": "Heruntergeladen: --",
+"speed": "Geschwindigkeit: {speed}"
+}
 },
 "move": {
@@ -1210,6 +1255,8 @@
 "pauseFailed": "Fehler beim Pausieren des Downloads: {error}",
 "downloadResumed": "Download fortgesetzt",
 "resumeFailed": "Fehler beim Fortsetzen des Downloads: {error}",
+"downloadStopped": "Download abgebrochen",
+"stopFailed": "Download konnte nicht abgebrochen werden: {error}",
 "deleted": "Beispielbild gelöscht",
 "deleteFailed": "Fehler beim Löschen des Beispielbilds",
 "setPreviewFailed": "Fehler beim Setzen des Vorschaubilds"
Locale diff (English):

@@ -199,6 +199,7 @@
 "videoSettings": "Video Settings",
 "layoutSettings": "Layout Settings",
 "folderSettings": "Folder Settings",
+"priorityTags": "Priority Tags",
 "downloadPathTemplates": "Download Path Templates",
 "exampleImages": "Example Images",
 "misc": "Misc.",
@@ -224,9 +225,9 @@
 },
 "displayDensityHelp": "Choose how many cards to display per row:",
 "displayDensityDetails": {
-"default": "Default: 5 (1080p), 6 (2K), 8 (4K)",
-"medium": "Medium: 6 (1080p), 7 (2K), 9 (4K)",
-"compact": "Compact: 7 (1080p), 8 (2K), 10 (4K)"
+"default": "5 (1080p), 6 (2K), 8 (4K)",
+"medium": "6 (1080p), 7 (2K), 9 (4K)",
+"compact": "7 (1080p), 8 (2K), 10 (4K)"
 },
 "displayDensityWarning": "Warning: Higher densities may cause performance issues on systems with limited resources.",
 "cardInfoDisplay": "Card Info Display",
@@ -236,8 +237,18 @@
 },
 "cardInfoDisplayHelp": "Choose when to display model information and action buttons:",
 "cardInfoDisplayDetails": {
-"always": "Always Visible: Headers and footers are always visible",
-"hover": "Reveal on Hover: Headers and footers only appear when hovering over a card"
+"always": "Headers and footers are always visible",
+"hover": "Headers and footers only appear when hovering over a card"
 },
+"modelNameDisplay": "Model Name Display",
+"modelNameDisplayOptions": {
+"modelName": "Model Name",
+"fileName": "File Name"
+},
+"modelNameDisplayHelp": "Choose what to display in the model card footer:",
+"modelNameDisplayDetails": {
+"modelName": "Display the model's descriptive name",
+"fileName": "Display the actual file name on disk"
+}
 },
 "folderSettings": {
@@ -253,6 +264,26 @@
 "defaultEmbeddingRootHelp": "Set the default embedding root directory for downloads, imports and moves",
 "noDefault": "No Default"
 },
+"priorityTags": {
+"title": "Priority Tags",
+"description": "Customize the tag priority order for each model type (e.g., character, concept, style(toon|toon_style))",
+"placeholder": "character, concept, style(toon|toon_style)",
+"helpLinkLabel": "Open priority tags help",
+"modelTypes": {
+"lora": "LoRA",
+"checkpoint": "Checkpoint",
+"embedding": "Embedding"
+},
+"saveSuccess": "Priority tags updated.",
+"saveError": "Failed to update priority tags.",
+"loadingSuggestions": "Loading suggestions...",
+"validation": {
+"missingClosingParen": "Entry {index} is missing a closing parenthesis.",
+"missingCanonical": "Entry {index} must include a canonical tag name.",
+"duplicateCanonical": "The canonical tag \"{tag}\" appears more than once.",
+"unknown": "Invalid priority tag configuration."
+}
+},
 "downloadPathTemplates": {
 "title": "Download Path Templates",
 "help": "Configure folder structures for different model types when downloading from Civitai.",
@@ -391,14 +422,14 @@
 "selected": "{count} selected",
 "selectedSuffix": "selected",
 "viewSelected": "View Selected",
-"addTags": "Add Tags to All",
-"setBaseModel": "Set Base Model for All",
-"setContentRating": "Set Content Rating for All",
-"copyAll": "Copy All Syntax",
-"refreshAll": "Refresh All Metadata",
-"moveAll": "Move All to Folder",
+"addTags": "Add Tags to Selected",
+"setBaseModel": "Set Base Model for Selected",
+"setContentRating": "Set Content Rating for Selected",
+"copyAll": "Copy Selected Syntax",
+"refreshAll": "Refresh Selected Metadata",
+"moveAll": "Move Selected to Folder",
 "autoOrganize": "Auto-Organize Selected",
-"deleteAll": "Delete All Models",
+"deleteAll": "Delete Selected Models",
 "clear": "Clear Selection",
 "autoOrganizeProgress": {
 "initializing": "Initializing auto-organize...",
@@ -529,13 +560,19 @@
 "title": "Embedding Models"
 },
 "sidebar": {
-"modelRoot": "Model Root",
+"modelRoot": "Root",
 "collapseAll": "Collapse All Folders",
 "pinSidebar": "Pin Sidebar",
 "unpinSidebar": "Unpin Sidebar",
 "switchToListView": "Switch to List View",
 "switchToTreeView": "Switch to Tree View",
-"collapseAllDisabled": "Not available in list view"
+"recursiveOn": "Search subfolders",
+"recursiveOff": "Search current folder only",
+"recursiveUnavailable": "Recursive search is available in tree view only",
+"collapseAllDisabled": "Not available in list view",
+"dragDrop": {
+"unableToResolveRoot": "Unable to determine destination path for move."
+}
 },
 "statistics": {
 "title": "Statistics",
@@ -610,6 +647,14 @@
 "downloadedPreview": "Downloaded preview image",
 "downloadingFile": "Downloading {type} file",
 "finalizing": "Finalizing download..."
 },
+"progress": {
+"currentFile": "Current file:",
+"downloading": "Downloading: {name}",
+"transferred": "Transferred: {downloaded} / {total}",
+"transferredSimple": "Transferred: {downloaded}",
+"transferredUnknown": "Transferred: --",
+"speed": "Speed: {speed}"
+}
 },
 "move": {
@@ -1210,6 +1255,8 @@
 "pauseFailed": "Failed to pause download: {error}",
 "downloadResumed": "Download resumed",
 "resumeFailed": "Failed to resume download: {error}",
+"downloadStopped": "Download cancelled",
+"stopFailed": "Failed to cancel download: {error}",
 "deleted": "Example image deleted",
 "deleteFailed": "Failed to delete example image",
 "setPreviewFailed": "Failed to set preview image"
@@ -32,7 +32,7 @@
"korean": "한국어",
"french": "Français",
"spanish": "Español",
"Hebrew": "עברית"
"Hebrew": "עברית"
},
"fileSize": {
"zero": "0 Bytes",
@@ -199,6 +199,7 @@
"videoSettings": "Configuración de video",
"layoutSettings": "Configuración de diseño",
"folderSettings": "Configuración de carpetas",
"priorityTags": "Etiquetas prioritarias",
"downloadPathTemplates": "Plantillas de rutas de descarga",
"exampleImages": "Imágenes de ejemplo",
"misc": "Varios",
@@ -224,9 +225,9 @@
},
"displayDensityHelp": "Elige cuántas tarjetas mostrar por fila:",
"displayDensityDetails": {
"default": "Predeterminado: 5 (1080p), 6 (2K), 8 (4K)",
"medium": "Medio: 6 (1080p), 7 (2K), 9 (4K)",
"compact": "Compacto: 7 (1080p), 8 (2K), 10 (4K)"
"default": "5 (1080p), 6 (2K), 8 (4K)",
"medium": "6 (1080p), 7 (2K), 9 (4K)",
"compact": "7 (1080p), 8 (2K), 10 (4K)"
},
"displayDensityWarning": "Advertencia: Densidades más altas pueden causar problemas de rendimiento en sistemas con recursos limitados.",
"cardInfoDisplay": "Visualización de información de tarjeta",
@@ -236,8 +237,18 @@
},
"cardInfoDisplayHelp": "Elige cuándo mostrar información del modelo y botones de acción:",
"cardInfoDisplayDetails": {
"always": "Siempre visible: Los encabezados y pies de página siempre son visibles",
"hover": "Mostrar al pasar el ratón: Los encabezados y pies de página solo aparecen al pasar el ratón sobre una tarjeta"
"always": "Los encabezados y pies de página siempre son visibles",
"hover": "Los encabezados y pies de página solo aparecen al pasar el ratón sobre una tarjeta"
},
"modelNameDisplay": "Visualización del nombre del modelo",
"modelNameDisplayOptions": {
"modelName": "Nombre del modelo",
"fileName": "Nombre del archivo"
},
"modelNameDisplayHelp": "Elige qué mostrar en el pie de la tarjeta del modelo:",
"modelNameDisplayDetails": {
"modelName": "Mostrar el nombre descriptivo del modelo",
"fileName": "Mostrar el nombre real del archivo en el disco"
}
},
"folderSettings": {
@@ -253,6 +264,26 @@
"defaultEmbeddingRootHelp": "Establecer el directorio raíz predeterminado de embedding para descargas, importaciones y movimientos",
"noDefault": "Sin predeterminado"
},
"priorityTags": {
"title": "Etiquetas prioritarias",
"description": "Personaliza el orden de prioridad de etiquetas para cada tipo de modelo (p. ej., character, concept, style(toon|toon_style))",
"placeholder": "character, concept, style(toon|toon_style)",
"helpLinkLabel": "Abrir ayuda de etiquetas prioritarias",
"modelTypes": {
"lora": "LoRA",
"checkpoint": "Checkpoint",
"embedding": "Embedding"
},
"saveSuccess": "Etiquetas prioritarias actualizadas.",
"saveError": "Error al actualizar las etiquetas prioritarias.",
"loadingSuggestions": "Cargando sugerencias...",
"validation": {
"missingClosingParen": "A la entrada {index} le falta un paréntesis de cierre.",
"missingCanonical": "La entrada {index} debe incluir un nombre de etiqueta canónica.",
"duplicateCanonical": "La etiqueta canónica \"{tag}\" aparece más de una vez.",
"unknown": "Configuración de etiquetas prioritarias no válida."
}
},
"downloadPathTemplates": {
"title": "Plantillas de rutas de descarga",
"help": "Configurar estructuras de carpetas para diferentes tipos de modelos al descargar de Civitai.",
@@ -529,13 +560,19 @@
"title": "Modelos embedding"
},
"sidebar": {
"modelRoot": "Raíz del modelo",
"modelRoot": "Raíz",
"collapseAll": "Colapsar todas las carpetas",
"pinSidebar": "Fijar barra lateral",
"unpinSidebar": "Desfijar barra lateral",
"switchToListView": "Cambiar a vista de lista",
"switchToTreeView": "Cambiar a vista de árbol",
"collapseAllDisabled": "No disponible en vista de lista"
"recursiveOn": "Buscar en subcarpetas",
"recursiveOff": "Buscar solo en la carpeta actual",
"recursiveUnavailable": "La búsqueda recursiva solo está disponible en la vista en árbol",
"collapseAllDisabled": "No disponible en vista de lista",
"dragDrop": {
"unableToResolveRoot": "No se puede determinar la ruta de destino para el movimiento."
}
},
"statistics": {
"title": "Estadísticas",
@@ -610,6 +647,14 @@
"downloadedPreview": "Imagen de vista previa descargada",
"downloadingFile": "Descargando archivo de {type}",
"finalizing": "Finalizando descarga..."
},
"progress": {
"currentFile": "Archivo actual:",
"downloading": "Descargando: {name}",
"transferred": "Descargado: {downloaded} / {total}",
"transferredSimple": "Descargado: {downloaded}",
"transferredUnknown": "Descargado: --",
"speed": "Velocidad: {speed}"
}
},
"move": {
@@ -1210,6 +1255,8 @@
"pauseFailed": "Error al pausar descarga: {error}",
"downloadResumed": "Descarga reanudada",
"resumeFailed": "Error al reanudar descarga: {error}",
"downloadStopped": "Descarga cancelada",
"stopFailed": "Error al cancelar descarga: {error}",
"deleted": "Imagen de ejemplo eliminada",
"deleteFailed": "Error al eliminar imagen de ejemplo",
"setPreviewFailed": "Error al establecer imagen de vista previa"

@@ -32,7 +32,7 @@
"korean": "한국어",
"french": "Français",
"spanish": "Español",
"Hebrew": "עברית"
"Hebrew": "עברית"
},
"fileSize": {
"zero": "0 Octets",
@@ -203,7 +203,8 @@
"exampleImages": "Images d'exemple",
"misc": "Divers",
"metadataArchive": "Base de données d'archive des métadonnées",
"proxySettings": "Paramètres du proxy"
"proxySettings": "Paramètres du proxy",
"priorityTags": "Étiquettes prioritaires"
},
"contentFiltering": {
"blurNsfwContent": "Flouter le contenu NSFW",
@@ -224,9 +225,9 @@
},
"displayDensityHelp": "Choisissez combien de cartes afficher par ligne :",
"displayDensityDetails": {
"default": "Par défaut : 5 (1080p), 6 (2K), 8 (4K)",
"medium": "Moyen : 6 (1080p), 7 (2K), 9 (4K)",
"compact": "Compact : 7 (1080p), 8 (2K), 10 (4K)"
"default": "5 (1080p), 6 (2K), 8 (4K)",
"medium": "6 (1080p), 7 (2K), 9 (4K)",
"compact": "7 (1080p), 8 (2K), 10 (4K)"
},
"displayDensityWarning": "Attention : Des densités plus élevées peuvent causer des problèmes de performance sur les systèmes avec des ressources limitées.",
"cardInfoDisplay": "Affichage des informations de carte",
@@ -236,8 +237,18 @@
},
"cardInfoDisplayHelp": "Choisissez quand afficher les informations du modèle et les boutons d'action :",
"cardInfoDisplayDetails": {
"always": "Toujours visible : Les en-têtes et pieds de page sont toujours visibles",
"hover": "Révéler au survol : Les en-têtes et pieds de page n'apparaissent qu'au survol d'une carte"
"always": "Les en-têtes et pieds de page sont toujours visibles",
"hover": "Les en-têtes et pieds de page n'apparaissent qu'au survol d'une carte"
},
"modelNameDisplay": "Affichage du nom du modèle",
"modelNameDisplayOptions": {
"modelName": "Nom du modèle",
"fileName": "Nom du fichier"
},
"modelNameDisplayHelp": "Choisissez ce qui doit être affiché dans le pied de page de la carte du modèle :",
"modelNameDisplayDetails": {
"modelName": "Afficher le nom descriptif du modèle",
"fileName": "Afficher le nom réel du fichier sur le disque"
}
},
"folderSettings": {
@@ -345,6 +356,26 @@
"proxyPassword": "Mot de passe (optionnel)",
"proxyPasswordPlaceholder": "mot_de_passe",
"proxyPasswordHelp": "Mot de passe pour l'authentification proxy (si nécessaire)"
},
"priorityTags": {
"title": "Étiquettes prioritaires",
"description": "Personnalisez l'ordre de priorité des étiquettes pour chaque type de modèle (par ex. : character, concept, style(toon|toon_style))",
"placeholder": "character, concept, style(toon|toon_style)",
"helpLinkLabel": "Ouvrir l'aide sur les étiquettes prioritaires",
"modelTypes": {
"lora": "LoRA",
"checkpoint": "Checkpoint",
"embedding": "Embedding"
},
"saveSuccess": "Étiquettes prioritaires mises à jour.",
"saveError": "Échec de la mise à jour des étiquettes prioritaires.",
"loadingSuggestions": "Chargement des suggestions...",
"validation": {
"missingClosingParen": "L'entrée {index} n'a pas de parenthèse fermante.",
"missingCanonical": "L'entrée {index} doit inclure un nom d'étiquette canonique.",
"duplicateCanonical": "L'étiquette canonique \"{tag}\" apparaît plusieurs fois.",
"unknown": "Configuration d'étiquettes prioritaires invalide."
}
}
},
"loras": {
@@ -529,13 +560,19 @@
"title": "Modèles Embedding"
},
"sidebar": {
"modelRoot": "Racine du modèle",
"modelRoot": "Racine",
"collapseAll": "Réduire tous les dossiers",
"pinSidebar": "Épingler la barre latérale",
"unpinSidebar": "Désépingler la barre latérale",
"switchToListView": "Passer en vue liste",
"switchToTreeView": "Passer en vue arborescence",
"collapseAllDisabled": "Non disponible en vue liste"
"recursiveOn": "Rechercher dans les sous-dossiers",
"recursiveOff": "Rechercher uniquement dans le dossier actuel",
"recursiveUnavailable": "La recherche récursive n'est disponible qu'en vue arborescente",
"collapseAllDisabled": "Non disponible en vue liste",
"dragDrop": {
"unableToResolveRoot": "Impossible de déterminer le chemin de destination pour le déplacement."
}
},
"statistics": {
"title": "Statistiques",
@@ -610,6 +647,14 @@
"downloadedPreview": "Image d'aperçu téléchargée",
"downloadingFile": "Téléchargement du fichier {type}",
"finalizing": "Finalisation du téléchargement..."
},
"progress": {
"currentFile": "Fichier actuel :",
"downloading": "Téléchargement : {name}",
"transferred": "Téléchargé : {downloaded} / {total}",
"transferredSimple": "Téléchargé : {downloaded}",
"transferredUnknown": "Téléchargé : --",
"speed": "Vitesse : {speed}"
}
},
"move": {
@@ -1210,6 +1255,8 @@
"pauseFailed": "Échec de la mise en pause du téléchargement : {error}",
"downloadResumed": "Téléchargement repris",
"resumeFailed": "Échec de la reprise du téléchargement : {error}",
"downloadStopped": "Téléchargement annulé",
"stopFailed": "Échec de l'annulation du téléchargement : {error}",
"deleted": "Image d'exemple supprimée",
"deleteFailed": "Échec de la suppression de l'image d'exemple",
"setPreviewFailed": "Échec de la définition de l'image d'aperçu"

@@ -32,7 +32,7 @@
"korean": "한국어",
"french": "Français",
"spanish": "Español",
"Hebrew": "עברית"
"Hebrew": "עברית"
},
"fileSize": {
"zero": "0 בתים",
@@ -203,7 +203,8 @@
"exampleImages": "תמונות דוגמה",
"misc": "שונות",
"metadataArchive": "מסד נתונים של ארכיון מטא-דאטה",
"proxySettings": "הגדרות פרוקסי"
"proxySettings": "הגדרות פרוקסי",
"priorityTags": "תגיות עדיפות"
},
"contentFiltering": {
"blurNsfwContent": "טשטש תוכן NSFW",
@@ -224,9 +225,9 @@
},
"displayDensityHelp": "בחר כמה כרטיסים להציג בכל שורה:",
"displayDensityDetails": {
"default": "ברירת מחדל: 5 (1080p), 6 (2K), 8 (4K)",
"medium": "בינוני: 6 (1080p), 7 (2K), 9 (4K)",
"compact": "קומפקטי: 7 (1080p), 8 (2K), 10 (4K)"
"default": "5 (1080p), 6 (2K), 8 (4K)",
"medium": "6 (1080p), 7 (2K), 9 (4K)",
"compact": "7 (1080p), 8 (2K), 10 (4K)"
},
"displayDensityWarning": "אזהרה: צפיפויות גבוהות יותר עלולות לגרום לבעיות ביצועים במערכות עם משאבים מוגבלים.",
"cardInfoDisplay": "תצוגת מידע בכרטיס",
@@ -236,8 +237,18 @@
},
"cardInfoDisplayHelp": "בחר מתי להציג מידע על המודל וכפתורי פעולה:",
"cardInfoDisplayDetails": {
"always": "תמיד גלוי: כותרות עליונות ותחתונות תמיד גלויות",
"hover": "חשוף בריחוף: כותרות עליונות ותחתונות מופיעות רק בעת ריחוף מעל כרטיס"
"always": "כותרות עליונות ותחתונות תמיד גלויות",
"hover": "כותרות עליונות ותחתונות מופיעות רק בעת ריחוף מעל כרטיס"
},
"modelNameDisplay": "תצוגת שם מודל",
"modelNameDisplayOptions": {
"modelName": "שם מודל",
"fileName": "שם קובץ"
},
"modelNameDisplayHelp": "בחר מה להציג בכותרת התחתונה של כרטיס המודל:",
"modelNameDisplayDetails": {
"modelName": "הצג את השם התיאורי של המודל",
"fileName": "הצג את שם הקובץ בפועל בדיסק"
}
},
"folderSettings": {
@@ -345,6 +356,26 @@
"proxyPassword": "סיסמה (אופציונלי)",
"proxyPasswordPlaceholder": "password",
"proxyPasswordHelp": "סיסמה לאימות מול הפרוקסי (אם נדרש)"
},
"priorityTags": {
"title": "תגיות עדיפות",
"description": "התאם את סדר העדיפות של התגיות עבור כל סוג מודל (לדוגמה: character, concept, style(toon|toon_style))",
"placeholder": "character, concept, style(toon|toon_style)",
"helpLinkLabel": "פתח עזרה בנושא תגיות עדיפות",
"modelTypes": {
"lora": "LoRA",
"checkpoint": "Checkpoint",
"embedding": "Embedding"
},
"saveSuccess": "תגיות העדיפות עודכנו.",
"saveError": "עדכון תגיות העדיפות נכשל.",
"loadingSuggestions": "טוען הצעות...",
"validation": {
"missingClosingParen": "לרשומה {index} חסר סוגר סוגריים.",
"missingCanonical": "על הרשומה {index} לכלול שם תגית קנונית.",
"duplicateCanonical": "התגית הקנונית \"{tag}\" מופיעה יותר מפעם אחת.",
"unknown": "תצורת תגיות העדיפות שגויה."
}
}
},
"loras": {
@@ -529,13 +560,19 @@
"title": "מודלי Embedding"
},
"sidebar": {
"modelRoot": "שורש המודלים",
"modelRoot": "שורש",
"collapseAll": "כווץ את כל התיקיות",
"pinSidebar": "נעל סרגל צד",
"unpinSidebar": "שחרר סרגל צד",
"switchToListView": "עבור לתצוגת רשימה",
"switchToTreeView": "עבור לתצוגת עץ",
"collapseAllDisabled": "לא זמין בתצוגת רשימה"
"switchToTreeView": "תצוגת עץ",
"recursiveOn": "חיפוש בתיקיות משנה",
"recursiveOff": "חיפוש רק בתיקייה הנוכחית",
"recursiveUnavailable": "חיפוש רקורסיבי זמין רק בתצוגת עץ",
"collapseAllDisabled": "לא זמין בתצוגת רשימה",
"dragDrop": {
"unableToResolveRoot": "לא ניתן לקבוע את נתיב היעד להעברה."
}
},
"statistics": {
"title": "סטטיסטיקה",
@@ -610,6 +647,14 @@
"downloadedPreview": "תמונת תצוגה מקדימה הורדה",
"downloadingFile": "מוריד קובץ {type}",
"finalizing": "מסיים הורדה..."
},
"progress": {
"currentFile": "הקובץ הנוכחי:",
"downloading": "מוריד: {name}",
"transferred": "הורד: {downloaded} / {total}",
"transferredSimple": "הורד: {downloaded}",
"transferredUnknown": "הורד: --",
"speed": "מהירות: {speed}"
}
},
"move": {
@@ -1210,6 +1255,8 @@
"pauseFailed": "השהיית ההורדה נכשלה: {error}",
"downloadResumed": "ההורדה חודשה",
"resumeFailed": "חידוש ההורדה נכשל: {error}",
"downloadStopped": "ההורדה בוטלה",
"stopFailed": "נכשל בביטול ההורדה: {error}",
"deleted": "תמונת הדוגמה נמחקה",
"deleteFailed": "מחיקת תמונת הדוגמה נכשלה",
"setPreviewFailed": "הגדרת תמונת התצוגה המקדימה נכשלה"

@@ -32,7 +32,7 @@
"korean": "한국어",
"french": "Français",
"spanish": "Español",
"Hebrew": "עברית"
"Hebrew": "עברית"
},
"fileSize": {
"zero": "0バイト",
@@ -203,7 +203,8 @@
"exampleImages": "例画像",
"misc": "その他",
"metadataArchive": "メタデータアーカイブデータベース",
"proxySettings": "プロキシ設定"
"proxySettings": "プロキシ設定",
"priorityTags": "優先タグ"
},
"contentFiltering": {
"blurNsfwContent": "NSFWコンテンツをぼかす",
@@ -224,9 +225,9 @@
},
"displayDensityHelp": "1行に表示するカード数を選択:",
"displayDensityDetails": {
"default": "デフォルト:5(1080p)、6(2K)、8(4K)",
"medium": "中:6(1080p)、7(2K)、9(4K)",
"compact": "コンパクト:7(1080p)、8(2K)、10(4K)"
"default": "5(1080p)、6(2K)、8(4K)",
"medium": "6(1080p)、7(2K)、9(4K)",
"compact": "7(1080p)、8(2K)、10(4K)"
},
"displayDensityWarning": "警告:高密度設定は、リソースが限られたシステムでパフォーマンスの問題を引き起こす可能性があります。",
"cardInfoDisplay": "カード情報表示",
@@ -236,8 +237,18 @@
},
"cardInfoDisplayHelp": "モデル情報とアクションボタンの表示タイミングを選択:",
"cardInfoDisplayDetails": {
"always": "常に表示:ヘッダーとフッターが常に表示されます",
"hover": "ホバー時に表示:カードにホバーしたときのみヘッダーとフッターが表示されます"
"always": "ヘッダーとフッターが常に表示されます",
"hover": "カードにホバーしたときのみヘッダーとフッターが表示されます"
},
"modelNameDisplay": "モデル名表示",
"modelNameDisplayOptions": {
"modelName": "モデル名",
"fileName": "ファイル名"
},
"modelNameDisplayHelp": "モデルカードのフッターに表示する内容を選択:",
"modelNameDisplayDetails": {
"modelName": "モデルの説明的な名前を表示",
"fileName": "ディスク上の実際のファイル名を表示"
}
},
"folderSettings": {
@@ -345,6 +356,26 @@
"proxyPassword": "パスワード(任意)",
"proxyPasswordPlaceholder": "パスワード",
"proxyPasswordHelp": "プロキシ認証用のパスワード(必要な場合)"
},
"priorityTags": {
"title": "優先タグ",
"description": "各モデルタイプのタグ優先順位をカスタマイズします (例: character, concept, style(toon|toon_style))",
"placeholder": "character, concept, style(toon|toon_style)",
"helpLinkLabel": "優先タグのヘルプを開く",
"modelTypes": {
"lora": "LoRA",
"checkpoint": "チェックポイント",
"embedding": "埋め込み"
},
"saveSuccess": "優先タグを更新しました。",
"saveError": "優先タグの更新に失敗しました。",
"loadingSuggestions": "候補を読み込み中...",
"validation": {
"missingClosingParen": "エントリ {index} に閉じ括弧がありません。",
"missingCanonical": "エントリ {index} には正規タグ名を含める必要があります。",
"duplicateCanonical": "正規タグ \"{tag}\" が複数回登場しています。",
"unknown": "無効な優先タグ設定です。"
}
}
},
"loras": {
@@ -529,13 +560,19 @@
"title": "Embeddingモデル"
},
"sidebar": {
"modelRoot": "モデルルート",
"modelRoot": "ルート",
"collapseAll": "すべてのフォルダを折りたたむ",
"pinSidebar": "サイドバーを固定",
"unpinSidebar": "サイドバーの固定を解除",
"switchToListView": "リストビューに切り替え",
"switchToTreeView": "ツリービューに切り替え",
"collapseAllDisabled": "リストビューでは利用できません"
"switchToTreeView": "ツリー表示に切り替え",
"recursiveOn": "サブフォルダーを検索",
"recursiveOff": "現在のフォルダーのみを検索",
"recursiveUnavailable": "再帰検索はツリービューでのみ利用できます",
"collapseAllDisabled": "リストビューでは利用できません",
"dragDrop": {
"unableToResolveRoot": "移動先のパスを特定できません。"
}
},
"statistics": {
"title": "統計",
@@ -610,6 +647,14 @@
"downloadedPreview": "プレビュー画像をダウンロードしました",
"downloadingFile": "{type}ファイルをダウンロード中",
"finalizing": "ダウンロードを完了中..."
},
"progress": {
"currentFile": "現在のファイル:",
"downloading": "ダウンロード中: {name}",
"transferred": "ダウンロード済み: {downloaded} / {total}",
"transferredSimple": "ダウンロード済み: {downloaded}",
"transferredUnknown": "ダウンロード済み: --",
"speed": "速度: {speed}"
}
},
"move": {
@@ -1210,6 +1255,8 @@
"pauseFailed": "ダウンロードの一時停止に失敗しました:{error}",
"downloadResumed": "ダウンロードが再開されました",
"resumeFailed": "ダウンロードの再開に失敗しました:{error}",
"downloadStopped": "ダウンロードをキャンセルしました",
"stopFailed": "ダウンロードのキャンセルに失敗しました:{error}",
"deleted": "例画像が削除されました",
"deleteFailed": "例画像の削除に失敗しました",
"setPreviewFailed": "プレビュー画像の設定に失敗しました"

@@ -32,7 +32,7 @@
"korean": "한국어",
"french": "Français",
"spanish": "Español",
"Hebrew": "עברית"
"Hebrew": "עברית"
},
"fileSize": {
"zero": "0 바이트",
@@ -203,7 +203,8 @@
"exampleImages": "예시 이미지",
"misc": "기타",
"metadataArchive": "메타데이터 아카이브 데이터베이스",
"proxySettings": "프록시 설정"
"proxySettings": "프록시 설정",
"priorityTags": "우선순위 태그"
},
"contentFiltering": {
"blurNsfwContent": "NSFW 콘텐츠 블러 처리",
@@ -224,9 +225,9 @@
},
"displayDensityHelp": "한 줄에 표시할 카드 수를 선택하세요:",
"displayDensityDetails": {
"default": "기본: 5개 (1080p), 6개 (2K), 8개 (4K)",
"medium": "중간: 6개 (1080p), 7개 (2K), 9개 (4K)",
"compact": "조밀: 7개 (1080p), 8개 (2K), 10개 (4K)"
"default": "5개 (1080p), 6개 (2K), 8개 (4K)",
"medium": "6개 (1080p), 7개 (2K), 9개 (4K)",
"compact": "7개 (1080p), 8개 (2K), 10개 (4K)"
},
"displayDensityWarning": "경고: 높은 밀도는 리소스가 제한된 시스템에서 성능 문제를 일으킬 수 있습니다.",
"cardInfoDisplay": "카드 정보 표시",
@@ -236,8 +237,18 @@
},
"cardInfoDisplayHelp": "모델 정보 및 액션 버튼을 언제 표시할지 선택하세요:",
"cardInfoDisplayDetails": {
"always": "항상 표시: 헤더와 푸터가 항상 보입니다",
"hover": "호버 시 표시: 카드에 마우스를 올렸을 때만 헤더와 푸터가 나타납니다"
"always": "헤더와 푸터가 항상 보입니다",
"hover": "카드에 마우스를 올렸을 때만 헤더와 푸터가 나타납니다"
},
"modelNameDisplay": "모델명 표시",
"modelNameDisplayOptions": {
"modelName": "모델명",
"fileName": "파일명"
},
"modelNameDisplayHelp": "모델 카드 하단에 표시할 내용을 선택하세요:",
"modelNameDisplayDetails": {
"modelName": "모델의 설명적 이름 표시",
"fileName": "디스크의 실제 파일명 표시"
}
},
"folderSettings": {
@@ -345,6 +356,26 @@
"proxyPassword": "비밀번호 (선택사항)",
"proxyPasswordPlaceholder": "password",
"proxyPasswordHelp": "프록시 인증에 필요한 비밀번호 (필요한 경우)"
},
"priorityTags": {
"title": "우선순위 태그",
"description": "모델 유형별 태그 우선순위를 사용자 지정합니다(예: character, concept, style(toon|toon_style)).",
"placeholder": "character, concept, style(toon|toon_style)",
"helpLinkLabel": "우선순위 태그 도움말 열기",
"modelTypes": {
"lora": "LoRA",
"checkpoint": "체크포인트",
"embedding": "임베딩"
},
"saveSuccess": "우선순위 태그가 업데이트되었습니다.",
"saveError": "우선순위 태그를 업데이트하지 못했습니다.",
"loadingSuggestions": "추천을 불러오는 중...",
"validation": {
"missingClosingParen": "{index}번째 항목에 닫는 괄호가 없습니다.",
"missingCanonical": "{index}번째 항목에는 정식 태그 이름이 포함되어야 합니다.",
"duplicateCanonical": "정식 태그 \"{tag}\"가 여러 번 나타납니다.",
"unknown": "잘못된 우선순위 태그 구성입니다."
}
}
},
"loras": {
@@ -529,13 +560,19 @@
"title": "Embedding 모델"
},
"sidebar": {
"modelRoot": "모델 루트",
"modelRoot": "루트",
"collapseAll": "모든 폴더 접기",
"pinSidebar": "사이드바 고정",
"unpinSidebar": "사이드바 고정 해제",
"switchToListView": "목록 보기로 전환",
"switchToTreeView": "트리 보기로 전환",
"collapseAllDisabled": "목록 보기에서는 사용할 수 없습니다"
"recursiveOn": "하위 폴더 검색",
"recursiveOff": "현재 폴더만 검색",
"recursiveUnavailable": "재귀 검색은 트리 보기에서만 사용할 수 있습니다",
"collapseAllDisabled": "목록 보기에서는 사용할 수 없습니다",
"dragDrop": {
"unableToResolveRoot": "이동할 대상 경로를 확인할 수 없습니다."
}
},
"statistics": {
"title": "통계",
@@ -610,6 +647,14 @@
"downloadedPreview": "미리보기 이미지 다운로드됨",
"downloadingFile": "{type} 파일 다운로드 중",
"finalizing": "다운로드 완료 중..."
},
"progress": {
"currentFile": "현재 파일:",
"downloading": "다운로드 중: {name}",
"transferred": "다운로드됨: {downloaded} / {total}",
"transferredSimple": "다운로드됨: {downloaded}",
"transferredUnknown": "다운로드됨: --",
"speed": "속도: {speed}"
}
},
"move": {
@@ -1210,6 +1255,8 @@
"pauseFailed": "다운로드 일시정지 실패: {error}",
"downloadResumed": "다운로드가 재개되었습니다",
"resumeFailed": "다운로드 재개 실패: {error}",
"downloadStopped": "다운로드가 취소되었습니다",
"stopFailed": "다운로드 취소 실패: {error}",
"deleted": "예시 이미지가 삭제되었습니다",
"deleteFailed": "예시 이미지 삭제 실패",
"setPreviewFailed": "미리보기 이미지 설정 실패"

@@ -32,7 +32,7 @@
"korean": "한국어",
"french": "Français",
"spanish": "Español",
"Hebrew": "עברית"
"Hebrew": "עברית"
},
"fileSize": {
"zero": "0 Байт",
@@ -203,7 +203,8 @@
"exampleImages": "Примеры изображений",
"misc": "Разное",
"metadataArchive": "Архив метаданных",
"proxySettings": "Настройки прокси"
"proxySettings": "Настройки прокси",
"priorityTags": "Приоритетные теги"
},
"contentFiltering": {
"blurNsfwContent": "Размывать NSFW контент",
@@ -224,9 +225,9 @@
},
"displayDensityHelp": "Выберите количество карточек для отображения в ряду:",
"displayDensityDetails": {
"default": "По умолчанию: 5 (1080p), 6 (2K), 8 (4K)",
"medium": "Средняя: 6 (1080p), 7 (2K), 9 (4K)",
"compact": "Компактная: 7 (1080p), 8 (2K), 10 (4K)"
"default": "5 (1080p), 6 (2K), 8 (4K)",
"medium": "6 (1080p), 7 (2K), 9 (4K)",
"compact": "7 (1080p), 8 (2K), 10 (4K)"
},
"displayDensityWarning": "Предупреждение: Высокая плотность может вызвать проблемы с производительностью на системах с ограниченными ресурсами.",
"cardInfoDisplay": "Отображение информации карточки",
@@ -236,8 +237,18 @@
},
"cardInfoDisplayHelp": "Выберите когда отображать информацию о модели и кнопки действий:",
"cardInfoDisplayDetails": {
"always": "Всегда видимо: Заголовки и подписи всегда видны",
"hover": "Показать при наведении: Заголовки и подписи появляются только при наведении на карточку"
"always": "Заголовки и подписи всегда видны",
"hover": "Заголовки и подписи появляются только при наведении на карточку"
},
"modelNameDisplay": "Отображение названия модели",
"modelNameDisplayOptions": {
"modelName": "Название модели",
"fileName": "Имя файла"
},
"modelNameDisplayHelp": "Выберите, что отображать в нижней части карточки модели:",
"modelNameDisplayDetails": {
"modelName": "Отображать описательное название модели",
"fileName": "Отображать фактическое имя файла на диске"
}
},
"folderSettings": {
@@ -345,6 +356,26 @@
"proxyPassword": "Пароль (необязательно)",
"proxyPasswordPlaceholder": "пароль",
"proxyPasswordHelp": "Пароль для аутентификации на прокси (если требуется)"
},
"priorityTags": {
"title": "Приоритетные теги",
"description": "Настройте порядок приоритетов тегов для каждого типа моделей (например, character, concept, style(toon|toon_style)).",
"placeholder": "character, concept, style(toon|toon_style)",
"helpLinkLabel": "Открыть справку по приоритетным тегам",
"modelTypes": {
"lora": "LoRA",
"checkpoint": "Чекпойнт",
"embedding": "Эмбеддинг"
},
"saveSuccess": "Приоритетные теги обновлены.",
"saveError": "Не удалось обновить приоритетные теги.",
"loadingSuggestions": "Загрузка подсказок...",
"validation": {
"missingClosingParen": "В записи {index} отсутствует закрывающая скобка.",
"missingCanonical": "Запись {index} должна содержать каноническое имя тега.",
"duplicateCanonical": "Канонический тег \"{tag}\" встречается более одного раза.",
"unknown": "Недопустимая конфигурация приоритетных тегов."
}
}
},
"loras": {
@@ -529,13 +560,19 @@
"title": "Модели Embedding"
},
"sidebar": {
"modelRoot": "Корень моделей",
"modelRoot": "Корень",
"collapseAll": "Свернуть все папки",
"pinSidebar": "Закрепить боковую панель",
"unpinSidebar": "Открепить боковую панель",
"switchToListView": "Переключить на вид списка",
"switchToTreeView": "Переключить на древовидный вид",
"collapseAllDisabled": "Недоступно в виде списка"
"recursiveOn": "Искать во вложенных папках",
"recursiveOff": "Искать только в текущей папке",
"recursiveUnavailable": "Рекурсивный поиск доступен только в режиме дерева",
"collapseAllDisabled": "Недоступно в виде списка",
"dragDrop": {
"unableToResolveRoot": "Не удалось определить путь назначения для перемещения."
}
},
"statistics": {
"title": "Статистика",
@@ -610,6 +647,14 @@
"downloadedPreview": "Превью изображение загружено",
"downloadingFile": "Загрузка файла {type}",
"finalizing": "Завершение загрузки..."
},
"progress": {
"currentFile": "Текущий файл:",
"downloading": "Скачивается: {name}",
"transferred": "Скачано: {downloaded} / {total}",
"transferredSimple": "Скачано: {downloaded}",
"transferredUnknown": "Скачано: --",
"speed": "Скорость: {speed}"
}
},
"move": {
@@ -1210,6 +1255,8 @@
"pauseFailed": "Не удалось приостановить загрузку: {error}",
"downloadResumed": "Загрузка возобновлена",
"resumeFailed": "Не удалось возобновить загрузку: {error}",
"downloadStopped": "Загрузка отменена",
"stopFailed": "Не удалось отменить загрузку: {error}",
"deleted": "Пример изображения удален",
"deleteFailed": "Не удалось удалить пример изображения",
"setPreviewFailed": "Не удалось установить превью изображение"

@@ -26,19 +26,13 @@
"english": "English",
"chinese_simplified": "中文(简体)",
"chinese_traditional": "中文(繁体)",
"russian": "俄语",
"german": "德语",
"japanese": "日语",
"korean": "韩语",
"french": "法语",
"spanish": "西班牙语",
"Hebrew": "עברית",
"russian": "Русский",
"german": "Deutsch",
"japanese": "日本語",
"korean": "한국어",
"french": "Français",
"spanish": "Español"
"spanish": "Español",
"Hebrew": "עברית"
},
"fileSize": {
"zero": "0 字节",
@@ -209,7 +203,8 @@
"exampleImages": "示例图片",
"misc": "其他",
"metadataArchive": "元数据归档数据库",
"proxySettings": "代理设置"
"proxySettings": "代理设置",
"priorityTags": "优先标签"
},
"contentFiltering": {
"blurNsfwContent": "模糊 NSFW 内容",
@@ -230,9 +225,9 @@
},
"displayDensityHelp": "选择每行显示卡片数量:",
"displayDensityDetails": {
"default": "默认:5(1080p),6(2K),8(4K)",
"medium": "中等:6(1080p),7(2K),9(4K)",
"compact": "紧凑:7(1080p),8(2K),10(4K)"
"default": "5(1080p),6(2K),8(4K)",
"medium": "6(1080p),7(2K),9(4K)",
"compact": "7(1080p),8(2K),10(4K)"
},
"displayDensityWarning": "警告:高密度可能导致资源有限的系统性能下降。",
"cardInfoDisplay": "卡片信息显示",
@@ -242,8 +237,18 @@
},
"cardInfoDisplayHelp": "选择何时显示模型信息和操作按钮:",
"cardInfoDisplayDetails": {
"always": "始终可见:标题和底部始终显示",
"hover": "悬停时显示:仅在悬停卡片时显示标题和底部"
"always": "标题和底部始终显示",
"hover": "仅在悬停卡片时显示标题和底部"
},
"modelNameDisplay": "模型名称显示",
"modelNameDisplayOptions": {
"modelName": "模型名称",
"fileName": "文件名"
},
"modelNameDisplayHelp": "选择在模型卡片底部显示的内容:",
"modelNameDisplayDetails": {
"modelName": "显示模型的描述性名称",
"fileName": "显示磁盘上的实际文件名"
}
},
"folderSettings": {
@@ -351,6 +356,26 @@
"proxyPassword": "密码 (可选)",
"proxyPasswordPlaceholder": "密码",
"proxyPasswordHelp": "代理认证的密码 (如果需要)"
},
"priorityTags": {
"title": "优先标签",
"description": "为每种模型类型自定义标签优先级顺序 (例如: character, concept, style(toon|toon_style))",
"placeholder": "character, concept, style(toon|toon_style)",
"helpLinkLabel": "打开优先标签帮助",
"modelTypes": {
"lora": "LoRA",
"checkpoint": "Checkpoint",
"embedding": "Embedding"
},
"saveSuccess": "优先标签已更新。",
"saveError": "优先标签更新失败。",
"loadingSuggestions": "正在加载建议...",
"validation": {
"missingClosingParen": "条目 {index} 缺少右括号。",
"missingCanonical": "条目 {index} 必须包含规范标签名称。",
"duplicateCanonical": "规范标签 \"{tag}\" 出现多次。",
"unknown": "优先标签配置无效。"
}
}
},
"loras": {
@@ -397,14 +422,14 @@
"selected": "已选中 {count} 项",
"selectedSuffix": "已选中",
"viewSelected": "查看已选中",
"addTags": "为所有添加标签",
"setBaseModel": "为所有设置基础模型",
"setContentRating": "为全部设置内容评级",
"copyAll": "复制全部语法",
"refreshAll": "刷新全部元数据",
"moveAll": "全部移动到文件夹",
"addTags": "为所选中添加标签",
"setBaseModel": "为所选中设置基础模型",
"setContentRating": "为所选中设置内容评级",
"copyAll": "复制所选中语法",
"refreshAll": "刷新所选中元数据",
"moveAll": "移动所选中到文件夹",
"autoOrganize": "自动整理所选模型",
"deleteAll": "删除所有模型",
"deleteAll": "删除选中模型",
"clear": "清除选择",
"autoOrganizeProgress": {
"initializing": "正在初始化自动整理...",
@@ -535,13 +560,19 @@
"title": "Embedding 模型"
},
"sidebar": {
"modelRoot": "模型根目录",
"modelRoot": "根目录",
"collapseAll": "折叠所有文件夹",
"pinSidebar": "固定侧边栏",
"unpinSidebar": "取消固定侧边栏",
"switchToListView": "切换到列表视图",
"switchToTreeView": "切换到树状视图",
"collapseAllDisabled": "列表视图下不可用"
"recursiveOn": "搜索子文件夹",
"recursiveOff": "仅搜索当前文件夹",
"recursiveUnavailable": "仅在树形视图中可使用递归搜索",
"collapseAllDisabled": "列表视图下不可用",
"dragDrop": {
"unableToResolveRoot": "无法确定移动的目标路径。"
}
},
"statistics": {
"title": "统计",
@@ -616,6 +647,14 @@
"downloadedPreview": "预览图片已下载",
"downloadingFile": "正在下载 {type} 文件",
"finalizing": "正在完成下载..."
},
"progress": {
"currentFile": "当前文件:",
"downloading": "下载中:{name}",
"transferred": "已下载:{downloaded} / {total}",
"transferredSimple": "已下载:{downloaded}",
"transferredUnknown": "已下载:--",
"speed": "速度:{speed}"
}
},
"move": {
@@ -1216,6 +1255,8 @@
"pauseFailed": "暂停下载失败:{error}",
"downloadResumed": "下载已恢复",
"resumeFailed": "恢复下载失败:{error}",
"downloadStopped": "下载已取消",
"stopFailed": "取消下载失败:{error}",
"deleted": "示例图片已删除",
"deleteFailed": "删除示例图片失败",
|
||||
"setPreviewFailed": "设置预览图片失败"
|
||||
|
||||
@@ -32,7 +32,7 @@
"korean": "한국어",
"french": "Français",
"spanish": "Español",
- "Hebrew": "עברית"
+ "Hebrew": "עברית"
},
"fileSize": {
"zero": "0 位元組",
@@ -203,7 +203,8 @@
"exampleImages": "範例圖片",
"misc": "其他",
"metadataArchive": "中繼資料封存資料庫",
- "proxySettings": "代理設定"
+ "proxySettings": "代理設定",
+ "priorityTags": "優先標籤"
},
"contentFiltering": {
"blurNsfwContent": "模糊 NSFW 內容",
@@ -224,9 +225,9 @@
},
"displayDensityHelp": "選擇每行顯示卡片數量:",
"displayDensityDetails": {
- "default": "預設:5(1080p)、6(2K)、8(4K)",
- "medium": "中等:6(1080p)、7(2K)、9(4K)",
- "compact": "緊湊:7(1080p)、8(2K)、10(4K)"
+ "default": "5(1080p)、6(2K)、8(4K)",
+ "medium": "6(1080p)、7(2K)、9(4K)",
+ "compact": "7(1080p)、8(2K)、10(4K)"
},
"displayDensityWarning": "警告:較高密度可能導致資源有限的系統效能下降。",
"cardInfoDisplay": "卡片資訊顯示",
@@ -236,8 +237,18 @@
},
"cardInfoDisplayHelp": "選擇何時顯示模型資訊與操作按鈕:",
"cardInfoDisplayDetails": {
- "always": "永遠顯示:標題與頁腳始終可見",
- "hover": "滑鼠懸停顯示:標題與頁腳僅在滑鼠懸停時顯示"
+ "always": "標題與頁腳始終可見",
+ "hover": "標題與頁腳僅在滑鼠懸停時顯示"
},
+ "modelNameDisplay": "模型名稱顯示",
+ "modelNameDisplayOptions": {
+ "modelName": "模型名稱",
+ "fileName": "檔案名稱"
+ },
+ "modelNameDisplayHelp": "選擇在模型卡片底部顯示的內容:",
+ "modelNameDisplayDetails": {
+ "modelName": "顯示模型的描述性名稱",
+ "fileName": "顯示磁碟上的實際檔案名稱"
+ }
},
"folderSettings": {
@@ -345,6 +356,26 @@
"proxyPassword": "密碼(選填)",
"proxyPasswordPlaceholder": "password",
"proxyPasswordHelp": "代理驗證所需的密碼(如有需要)"
},
+ "priorityTags": {
+ "title": "優先標籤",
+ "description": "為每種模型類型自訂標籤的優先順序 (例如: character, concept, style(toon|toon_style))",
+ "placeholder": "character, concept, style(toon|toon_style)",
+ "helpLinkLabel": "開啟優先標籤說明",
+ "modelTypes": {
+ "lora": "LoRA",
+ "checkpoint": "Checkpoint",
+ "embedding": "Embedding"
+ },
+ "saveSuccess": "優先標籤已更新。",
+ "saveError": "更新優先標籤失敗。",
+ "loadingSuggestions": "正在載入建議...",
+ "validation": {
+ "missingClosingParen": "項目 {index} 缺少右括號。",
+ "missingCanonical": "項目 {index} 必須包含正規標籤名稱。",
+ "duplicateCanonical": "正規標籤 \"{tag}\" 出現多於一次。",
+ "unknown": "優先標籤設定無效。"
+ }
+ }
},
"loras": {
@@ -529,13 +560,19 @@
"title": "Embedding 模型"
},
"sidebar": {
- "modelRoot": "模型根目錄",
+ "modelRoot": "根目錄",
"collapseAll": "全部摺疊資料夾",
"pinSidebar": "固定側邊欄",
"unpinSidebar": "取消固定側邊欄",
"switchToListView": "切換至列表檢視",
- "switchToTreeView": "切換至樹狀檢視",
- "collapseAllDisabled": "列表檢視下不可用"
+ "switchToTreeView": "切換到樹狀檢視",
+ "recursiveOn": "搜尋子資料夾",
+ "recursiveOff": "僅搜尋目前資料夾",
+ "recursiveUnavailable": "遞迴搜尋僅能在樹狀檢視中使用",
+ "collapseAllDisabled": "列表檢視下不可用",
+ "dragDrop": {
+ "unableToResolveRoot": "無法確定移動的目標路徑。"
+ }
},
"statistics": {
"title": "統計",
@@ -610,6 +647,14 @@
"downloadedPreview": "已下載預覽圖片",
"downloadingFile": "正在下載 {type} 檔案",
"finalizing": "完成下載中..."
},
+ "progress": {
+ "currentFile": "目前檔案:",
+ "downloading": "下載中:{name}",
+ "transferred": "已下載:{downloaded} / {total}",
+ "transferredSimple": "已下載:{downloaded}",
+ "transferredUnknown": "已下載:--",
+ "speed": "速度:{speed}"
+ }
},
"move": {
@@ -1210,6 +1255,8 @@
"pauseFailed": "暫停下載失敗:{error}",
"downloadResumed": "下載已恢復",
"resumeFailed": "恢復下載失敗:{error}",
+ "downloadStopped": "下載已取消",
+ "stopFailed": "取消下載失敗:{error}",
"deleted": "範例圖片已刪除",
"deleteFailed": "刪除範例圖片失敗",
"setPreviewFailed": "設定預覽圖片失敗"
@@ -74,8 +74,9 @@ class Config:
"""Persist ComfyUI-derived folder paths to the multi-library settings."""
try:
ensure_settings_file(logger)
- from .services.settings_manager import settings as settings_service
+ from .services.settings_manager import get_settings_manager

+ settings_service = get_settings_manager()
libraries = settings_service.get_libraries()
comfy_library = libraries.get("comfyui", {})
default_library = libraries.get("default", {})
@@ -442,8 +443,9 @@ class Config:
"""Return the current library registry and active library name."""

try:
- from .services.settings_manager import settings as settings_service
+ from .services.settings_manager import get_settings_manager

+ settings_service = get_settings_manager()
libraries = settings_service.get_libraries()
active_library = settings_service.get_active_library_name()
return {
@@ -13,7 +13,7 @@ from .routes.misc_routes import MiscRoutes
from .routes.preview_routes import PreviewRoutes
from .routes.example_images_routes import ExampleImagesRoutes
from .services.service_registry import ServiceRegistry
- from .services.settings_manager import settings
+ from .services.settings_manager import get_settings_manager
from .utils.example_images_migration import ExampleImagesMigration
from .services.websocket_manager import ws_manager
from .services.example_images_cleanup_service import ExampleImagesCleanupService
@@ -23,6 +23,25 @@ logger = logging.getLogger(__name__)
# Check if we're in standalone mode
STANDALONE_MODE = 'nodes' not in sys.modules

+ class _SettingsProxy:
+ def __init__(self):
+ self._manager = None
+
+ def _resolve(self):
+ if self._manager is None:
+ self._manager = get_settings_manager()
+ return self._manager
+
+ def get(self, *args, **kwargs):
+ return self._resolve().get(*args, **kwargs)
+
+ def __getattr__(self, item):
+ return getattr(self._resolve(), item)
+
+ settings = _SettingsProxy()

class LoraManager:
"""Main entry point for LoRA Manager plugin"""
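The `_SettingsProxy` added above keeps the module-level `settings` name importable while deferring manager construction until first use. A minimal standalone sketch of the same lazy-proxy idiom (the factory and its contents here are illustrative stand-ins, not the plugin's real settings manager):

```python
_manager_instances = {}

def get_settings_manager():
    # Stand-in for the real factory: construct the manager once, then reuse it.
    if "settings" not in _manager_instances:
        _manager_instances["settings"] = {"civitai_api_key": "", "language": "en"}
    return _manager_instances["settings"]

class SettingsProxy:
    """Delegates access to a manager that is only created on first use."""

    def __init__(self):
        self._manager = None

    def _resolve(self):
        if self._manager is None:
            self._manager = get_settings_manager()
        return self._manager

    def get(self, key, default=None):
        return self._resolve().get(key, default)

settings = SettingsProxy()
```

The payoff is import-time safety: modules can do `from lora_manager import settings` before the settings file exists, and construction cost is paid only when a value is actually read.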
@@ -666,6 +666,7 @@ NODE_EXTRACTORS = {
"LoraManagerLoader": LoraLoaderManagerExtractor,
# Conditioning
"CLIPTextEncode": CLIPTextEncodeExtractor,
+ "PromptLoraManager": CLIPTextEncodeExtractor,
"CLIPTextEncodeFlux": CLIPTextEncodeFluxExtractor, # Add CLIPTextEncodeFlux
"WAS_Text_to_Conditioning": CLIPTextEncodeExtractor,
"AdvancedCLIPTextEncode": CLIPTextEncodeExtractor, # From https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb
@@ -1,7 +1,6 @@
import logging
import re
from nodes import LoraLoader
- from comfy.comfy_types import IO # type: ignore
from ..utils.utils import get_lora_info
from .utils import FlexibleOptionalInputType, any_type, extract_lora_name, get_loras_list, nunchaku_load_lora

@@ -17,7 +16,7 @@ class LoraManagerLoader:
"required": {
"model": ("MODEL",),
# "clip": ("CLIP",),
- "text": (IO.STRING, {
+ "text": ("STRING", {
"multiline": True,
"pysssss.autocomplete": False,
"dynamicPrompts": True,
@@ -28,7 +27,7 @@ class LoraManagerLoader:
"optional": FlexibleOptionalInputType(any_type),
}

- RETURN_TYPES = ("MODEL", "CLIP", IO.STRING, IO.STRING)
+ RETURN_TYPES = ("MODEL", "CLIP", "STRING", "STRING")
RETURN_NAMES = ("MODEL", "CLIP", "trigger_words", "loaded_loras")
FUNCTION = "load_loras"

@@ -141,7 +140,7 @@ class LoraManagerTextLoader:
return {
"required": {
"model": ("MODEL",),
- "lora_syntax": (IO.STRING, {
+ "lora_syntax": ("STRING", {
"defaultInput": True,
"forceInput": True,
"tooltip": "Format: <lora:lora_name:strength> separated by spaces or punctuation"
@@ -153,7 +152,7 @@ class LoraManagerTextLoader:
}
}

- RETURN_TYPES = ("MODEL", "CLIP", IO.STRING, IO.STRING)
+ RETURN_TYPES = ("MODEL", "CLIP", "STRING", "STRING")
RETURN_NAMES = ("MODEL", "CLIP", "trigger_words", "loaded_loras")
FUNCTION = "load_loras_from_text"
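The `<lora:lora_name:strength>` format named in the tooltip above can be parsed with a small regex. This is an illustrative sketch, not the node's actual parser (`parse_lora_syntax` and `LORA_PATTERN` are hypothetical names):

```python
import re

# Matches tokens such as <lora:detail_tweaker:0.8>; group 1 is the LoRA name,
# group 2 the strength (integer or decimal, optionally signed).
LORA_PATTERN = re.compile(r"<lora:([^:>]+):([-+]?\d*\.?\d+)>")

def parse_lora_syntax(text: str) -> list[tuple[str, float]]:
    """Extract (name, strength) pairs from free-form prompt text."""
    return [(name, float(strength)) for name, strength in LORA_PATTERN.findall(text)]

pairs = parse_lora_syntax("masterpiece <lora:detail_tweaker:0.8>, <lora:style_anime:1>")
# → [("detail_tweaker", 0.8), ("style_anime", 1.0)]
```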
@@ -1,4 +1,3 @@
- from comfy.comfy_types import IO # type: ignore
import os
from ..utils.utils import get_lora_info
from .utils import FlexibleOptionalInputType, any_type, extract_lora_name, get_loras_list
@@ -15,7 +14,7 @@ class LoraStacker:
def INPUT_TYPES(cls):
return {
"required": {
- "text": (IO.STRING, {
+ "text": ("STRING", {
"multiline": True,
"pysssss.autocomplete": False,
"dynamicPrompts": True,
@@ -26,7 +25,7 @@ class LoraStacker:
"optional": FlexibleOptionalInputType(any_type),
}

- RETURN_TYPES = ("LORA_STACK", IO.STRING, IO.STRING)
+ RETURN_TYPES = ("LORA_STACK", "STRING", "STRING")
RETURN_NAMES = ("LORA_STACK", "trigger_words", "active_loras")
FUNCTION = "stack_loras"
py/nodes/prompt.py (new file, 59 lines)
@@ -0,0 +1,59 @@
+ from typing import Any, Optional
+
+ class PromptLoraManager:
+ """Encodes text (and optional trigger words) into CLIP conditioning."""
+
+ NAME = "Prompt (LoraManager)"
+ CATEGORY = "Lora Manager/conditioning"
+ DESCRIPTION = (
+ "Encodes a text prompt using a CLIP model into an embedding that can be used "
+ "to guide the diffusion model towards generating specific images."
+ )
+
+ @classmethod
+ def INPUT_TYPES(cls):
+ return {
+ "required": {
+ "text": (
+ 'STRING',
+ {
+ "multiline": True,
+ "pysssss.autocomplete": False,
+ "dynamicPrompts": True,
+ "tooltip": "The text to be encoded.",
+ },
+ ),
+ "clip": (
+ 'CLIP',
+ {"tooltip": "The CLIP model used for encoding the text."},
+ ),
+ },
+ "optional": {
+ "trigger_words": (
+ 'STRING',
+ {
+ "forceInput": True,
+ "tooltip": (
+ "Optional trigger words to prepend to the text before "
+ "encoding."
+ )
+ },
+ )
+ },
+ }
+
+ RETURN_TYPES = ('CONDITIONING', 'STRING',)
+ RETURN_NAMES = ('CONDITIONING', 'PROMPT',)
+ OUTPUT_TOOLTIPS = (
+ "A conditioning containing the embedded text used to guide the diffusion model.",
+ )
+ FUNCTION = "encode"
+
+ def encode(self, text: str, clip: Any, trigger_words: Optional[str] = None):
+ prompt = text
+ if trigger_words:
+ prompt = ", ".join([trigger_words, text])
+
+ from nodes import CLIPTextEncode # type: ignore
+ conditioning = CLIPTextEncode().encode(clip, prompt)[0]
+ return (conditioning, prompt,)
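The trigger-word handling in `encode` above reduces to a small pure function. This sketch (with a hypothetical `build_prompt` name) isolates that behavior so the empty-string and `None` cases are easy to check:

```python
from typing import Optional

def build_prompt(text: str, trigger_words: Optional[str] = None) -> str:
    # Mirrors the prompt assembly in PromptLoraManager.encode: prepend trigger
    # words, comma-separated, only when a non-empty string is supplied.
    if trigger_words:
        return ", ".join([trigger_words, text])
    return text
```

Because the check is truthiness rather than `is not None`, an empty trigger-word string leaves the prompt untouched instead of producing a leading comma.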
@@ -1,6 +1,5 @@
import json
import re
from server import PromptServer # type: ignore
from .utils import FlexibleOptionalInputType, any_type
import logging

@@ -1,4 +1,3 @@
- from comfy.comfy_types import IO # type: ignore
import folder_paths # type: ignore
from ..utils.utils import get_lora_info
from .utils import FlexibleOptionalInputType, any_type, get_loras_list
@@ -16,7 +15,7 @@ class WanVideoLoraSelect:
"required": {
"low_mem_load": ("BOOLEAN", {"default": False, "tooltip": "Load LORA models with less VRAM usage, slower loading. This affects ALL LoRAs, not just the current ones. No effect if merge_loras is False"}),
"merge_loras": ("BOOLEAN", {"default": True, "tooltip": "Merge LoRAs into the model, otherwise they are loaded on the fly. Always disabled for GGUF and scaled fp8 models. This affects ALL LoRAs, not just the current one"}),
- "text": (IO.STRING, {
+ "text": ("STRING", {
"multiline": True,
"pysssss.autocomplete": False,
"dynamicPrompts": True,
@@ -27,7 +26,7 @@ class WanVideoLoraSelect:
"optional": FlexibleOptionalInputType(any_type),
}

- RETURN_TYPES = ("WANVIDLORA", IO.STRING, IO.STRING)
+ RETURN_TYPES = ("WANVIDLORA", "STRING", "STRING")
RETURN_NAMES = ("lora", "trigger_words", "active_loras")
FUNCTION = "process_loras"
@@ -1,5 +1,4 @@
- from comfy.comfy_types import IO
- import folder_paths
+ import folder_paths # type: ignore
from ..utils.utils import get_lora_info
from .utils import any_type
import logging
@@ -20,7 +19,7 @@ class WanVideoLoraSelectFromText:
"required": {
"low_mem_load": ("BOOLEAN", {"default": False, "tooltip": "Load LORA models with less VRAM usage, slower loading. This affects ALL LoRAs, not just the current ones. No effect if merge_loras is False"}),
"merge_lora": ("BOOLEAN", {"default": True, "tooltip": "Merge LoRAs into the model, otherwise they are loaded on the fly. Always disabled for GGUF and scaled fp8 models. This affects ALL LoRAs, not just the current one"}),
- "lora_syntax": (IO.STRING, {
+ "lora_syntax": ("STRING", {
"multiline": True,
"defaultInput": True,
"forceInput": True,
@@ -34,7 +33,7 @@ class WanVideoLoraSelectFromText:
}
}

- RETURN_TYPES = ("WANVIDLORA", IO.STRING, IO.STRING)
+ RETURN_TYPES = ("WANVIDLORA", "STRING", "STRING")
RETURN_NAMES = ("lora", "trigger_words", "active_loras")

FUNCTION = "process_loras_from_syntax"
@@ -2,7 +2,7 @@ from __future__ import annotations

import logging
from abc import ABC, abstractmethod
- from typing import Callable, Dict, Mapping
+ from typing import TYPE_CHECKING, Callable, Dict, Mapping

import jinja2
from aiohttp import web
@@ -17,7 +17,7 @@ from ..services.model_lifecycle_service import ModelLifecycleService
from ..services.preview_asset_service import PreviewAssetService
from ..services.server_i18n import server_i18n as default_server_i18n
from ..services.service_registry import ServiceRegistry
- from ..services.settings_manager import settings as default_settings
+ from ..services.settings_manager import get_settings_manager
from ..services.tag_update_service import TagUpdateService
from ..services.websocket_manager import ws_manager as default_ws_manager
from ..services.use_cases import (
@@ -42,8 +42,12 @@ from .handlers.model_handlers import (
ModelMoveHandler,
ModelPageView,
ModelQueryHandler,
+ ModelUpdateHandler,
)

+ if TYPE_CHECKING:
+ from ..services.model_update_service import ModelUpdateService

logger = logging.getLogger(__name__)


@@ -56,14 +60,14 @@ class BaseModelRoutes(ABC):
self,
service=None,
*,
- settings_service=default_settings,
+ settings_service=None,
ws_manager=default_ws_manager,
server_i18n=default_server_i18n,
metadata_provider_factory=get_default_metadata_provider,
) -> None:
self.service = None
self.model_type = ""
- self._settings = settings_service
+ self._settings = settings_service or get_settings_manager()
self._ws_manager = ws_manager
self._server_i18n = server_i18n
self._metadata_provider_factory = metadata_provider_factory
@@ -90,7 +94,7 @@ class BaseModelRoutes(ABC):
self._metadata_sync_service = MetadataSyncService(
metadata_manager=MetadataManager,
preview_service=self._preview_service,
- settings=settings_service,
+ settings=self._settings,
default_metadata_provider_factory=metadata_provider_factory,
metadata_provider_selector=get_metadata_provider,
)
@@ -99,10 +103,18 @@ class BaseModelRoutes(ABC):
ws_manager=self._ws_manager,
download_manager_factory=ServiceRegistry.get_download_manager,
)
+ self._model_update_service: ModelUpdateService | None = None

if service is not None:
self.attach_service(service)

+ def set_model_update_service(self, service: "ModelUpdateService") -> None:
+ """Attach the model update tracking service."""
+
+ self._model_update_service = service
+ self._handler_set = None
+ self._handler_mapping = None

def attach_service(self, service) -> None:
"""Attach a model service and rebuild handler dependencies."""
self.service = service
@@ -127,6 +139,7 @@ class BaseModelRoutes(ABC):

def _create_handler_set(self) -> ModelHandlerSet:
service = self._ensure_service()
+ update_service = self._ensure_model_update_service()
page_view = ModelPageView(
template_env=self.template_env,
template_name=self.template_name or "",
@@ -186,6 +199,12 @@ class BaseModelRoutes(ABC):
ws_manager=self._ws_manager,
logger=logger,
)
+ updates = ModelUpdateHandler(
+ service=service,
+ update_service=update_service,
+ metadata_provider_selector=get_metadata_provider,
+ logger=logger,
+ )
return ModelHandlerSet(
page_view=page_view,
listing=listing,
@@ -195,6 +214,7 @@ class BaseModelRoutes(ABC):
civitai=civitai,
move=move,
auto_organize=auto_organize,
+ updates=updates,
)

@property
@@ -273,3 +293,8 @@ class BaseModelRoutes(ABC):

return proxy

+ def _ensure_model_update_service(self) -> "ModelUpdateService":
+ if self._model_update_service is None:
+ raise RuntimeError("Model update service has not been attached")
+ return self._model_update_service
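The diff above imports `ModelUpdateService` only under `TYPE_CHECKING` and quotes it in annotations, which gives type checkers the name without paying an import (or circular-import) cost at runtime. A minimal sketch of the idiom, with a hypothetical `heavy_module` standing in for the real dependency:

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only by static type checkers; never executed at runtime.
    from heavy_module import HeavyService  # hypothetical module

class Routes:
    def __init__(self) -> None:
        # With `from __future__ import annotations`, this annotation is never
        # evaluated, so the missing runtime import is harmless.
        self._service: HeavyService | None = None

    def attach(self, service: "HeavyService") -> None:
        self._service = service
```

The same pattern is why `set_model_update_service` can annotate its parameter as `"ModelUpdateService"` in a string even though the module is never imported at runtime.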
@@ -18,7 +18,7 @@ from ..services.recipes import (
)
from ..services.server_i18n import server_i18n
from ..services.service_registry import ServiceRegistry
- from ..services.settings_manager import settings
+ from ..services.settings_manager import get_settings_manager
from ..utils.constants import CARD_PREVIEW_WIDTH
from ..utils.exif_utils import ExifUtils
from .handlers.recipe_handlers import (
@@ -48,7 +48,7 @@ class BaseRecipeRoutes:
self.recipe_scanner = None
self.lora_scanner = None
self.civitai_client = None
- self.settings = settings
+ self.settings = get_settings_manager()
self.server_i18n = server_i18n
self.template_env = jinja2.Environment(
loader=jinja2.FileSystemLoader(config.templates_path),
@@ -20,8 +20,10 @@ class CheckpointRoutes(BaseModelRoutes):
async def initialize_services(self):
"""Initialize services from ServiceRegistry"""
checkpoint_scanner = await ServiceRegistry.get_checkpoint_scanner()
- self.service = CheckpointService(checkpoint_scanner)
+
+ update_service = await ServiceRegistry.get_model_update_service()
+ self.service = CheckpointService(checkpoint_scanner, update_service=update_service)
+ self.set_model_update_service(update_service)

# Attach service dependencies
self.attach_service(self.service)

@@ -93,4 +95,4 @@ class CheckpointRoutes(BaseModelRoutes):
return web.json_response({
"success": False,
"error": str(e)
- }, status=500)
+ }, status=500)

@@ -19,8 +19,10 @@ class EmbeddingRoutes(BaseModelRoutes):
async def initialize_services(self):
"""Initialize services from ServiceRegistry"""
embedding_scanner = await ServiceRegistry.get_embedding_scanner()
- self.service = EmbeddingService(embedding_scanner)
+
+ update_service = await ServiceRegistry.get_model_update_service()
+ self.service = EmbeddingService(embedding_scanner, update_service=update_service)
+ self.set_model_update_service(update_service)

# Attach service dependencies
self.attach_service(self.service)
@@ -22,6 +22,7 @@ ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
|
||||
RouteDefinition("GET", "/api/lm/example-images-status", "get_example_images_status"),
|
||||
RouteDefinition("POST", "/api/lm/pause-example-images", "pause_example_images"),
|
||||
RouteDefinition("POST", "/api/lm/resume-example-images", "resume_example_images"),
|
||||
RouteDefinition("POST", "/api/lm/stop-example-images", "stop_example_images"),
|
||||
RouteDefinition("POST", "/api/lm/open-example-images-folder", "open_example_images_folder"),
|
||||
RouteDefinition("GET", "/api/lm/example-image-files", "get_example_image_files"),
|
||||
RouteDefinition("GET", "/api/lm/has-example-images", "has_example_images"),
|
||||
|
||||
@@ -68,6 +68,13 @@ class ExampleImagesDownloadHandler:
|
||||
except DownloadNotRunningError as exc:
|
||||
return web.json_response({'success': False, 'error': str(exc)}, status=400)
|
||||
|
||||
async def stop_example_images(self, request: web.Request) -> web.StreamResponse:
|
||||
try:
|
||||
result = await self._download_manager.stop_download(request)
|
||||
return web.json_response(result)
|
||||
except DownloadNotRunningError as exc:
|
||||
return web.json_response({'success': False, 'error': str(exc)}, status=400)
|
||||
|
||||
async def force_download_example_images(self, request: web.Request) -> web.StreamResponse:
|
||||
try:
|
||||
payload = await request.json()
|
||||
@@ -149,6 +156,7 @@ class ExampleImagesHandlerSet:
|
||||
"get_example_images_status": self.download.get_example_images_status,
|
||||
"pause_example_images": self.download.pause_example_images,
|
||||
"resume_example_images": self.download.resume_example_images,
|
||||
"stop_example_images": self.download.stop_example_images,
|
||||
"force_download_example_images": self.download.force_download_example_images,
|
||||
"import_example_images": self.management.import_example_images,
|
||||
"delete_example_image": self.management.delete_example_image,
|
||||
|
||||
@@ -24,10 +24,17 @@ from ...services.metadata_service import (
update_metadata_providers,
)
from ...services.service_registry import ServiceRegistry
- from ...services.settings_manager import settings as default_settings
+ from ...services.settings_manager import get_settings_manager
from ...services.websocket_manager import ws_manager
from ...services.downloader import get_downloader
- from ...utils.constants import DEFAULT_NODE_COLOR, NODE_TYPES, SUPPORTED_MEDIA_EXTENSIONS
+ from ...utils.constants import (
+ CIVITAI_USER_MODEL_TYPES,
+ DEFAULT_NODE_COLOR,
+ NODE_TYPES,
+ SUPPORTED_MEDIA_EXTENSIONS,
+ VALID_LORA_TYPES,
+ )
from ...utils.civitai_utils import rewrite_preview_url
from ...utils.example_images_paths import is_valid_example_images_root
from ...utils.lora_metadata import extract_trained_words
from ...utils.usage_stats import UsageStats
@@ -80,7 +87,7 @@ class NodeRegistry:

def __init__(self) -> None:
self._lock = asyncio.Lock()
- self._nodes: Dict[int, dict] = {}
+ self._nodes: Dict[str, dict] = {}
self._registry_updated = asyncio.Event()

async def register_nodes(self, nodes: list[dict]) -> None:
@@ -88,11 +95,16 @@ class NodeRegistry:
self._nodes.clear()
for node in nodes:
node_id = node["node_id"]
+ graph_id = str(node["graph_id"])
+ unique_id = f"{graph_id}:{node_id}"
node_type = node.get("type", "")
type_id = NODE_TYPES.get(node_type, 0)
bgcolor = node.get("bgcolor") or DEFAULT_NODE_COLOR
- self._nodes[node_id] = {
+ self._nodes[unique_id] = {
"id": node_id,
+ "graph_id": graph_id,
+ "graph_name": node.get("graph_name"),
+ "unique_id": unique_id,
"bgcolor": bgcolor,
"title": node.get("title"),
"type": type_id,
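The registry change above keys nodes by `f"{graph_id}:{node_id}"` instead of the bare numeric id, so nodes that share an id across different (sub)graphs no longer overwrite each other. A minimal sketch of why the composite key matters (the `register` helper and its field set are illustrative, not the real `NodeRegistry`):

```python
def register(nodes: list[dict]) -> dict[str, dict]:
    """Index nodes by 'graph_id:node_id' so equal ids in different graphs coexist."""
    registry: dict[str, dict] = {}
    for node in nodes:
        unique_id = f"{node['graph_id']}:{node['node_id']}"
        registry[unique_id] = {"id": node["node_id"], "graph_id": str(node["graph_id"])}
    return registry

# Two subgraphs can each contain a node with id 3; plain int keys would collide
# and the second registration would silently replace the first.
reg = register([
    {"graph_id": "root", "node_id": 3},
    {"graph_id": "subgraph_1", "node_id": 3},
])
```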
@@ -150,6 +162,8 @@ class SettingsHandler:
"include_trigger_words",
"show_only_sfw",
"compact_mode",
+ "priority_tags",
+ "model_name_display",
)

_PROXY_KEYS = {"proxy_enabled", "proxy_host", "proxy_port", "proxy_username", "proxy_password", "proxy_type"}
@@ -157,11 +171,11 @@ class SettingsHandler:
def __init__(
self,
*,
- settings_service=default_settings,
+ settings_service=None,
metadata_provider_updater: Callable[[], Awaitable[None]] = update_metadata_providers,
downloader_factory: Callable[[], Awaitable[DownloaderProtocol]] = get_downloader,
) -> None:
- self._settings = settings_service
+ self._settings = settings_service or get_settings_manager()
self._metadata_provider_updater = metadata_provider_updater
self._downloader_factory = downloader_factory

@@ -195,6 +209,14 @@ class SettingsHandler:
logger.error("Error getting settings: %s", exc, exc_info=True)
return web.json_response({"success": False, "error": str(exc)}, status=500)

+ async def get_priority_tags(self, request: web.Request) -> web.Response:
+ try:
+ suggestions = self._settings.get_priority_tag_suggestions()
+ return web.json_response({"success": True, "tags": suggestions})
+ except Exception as exc: # pragma: no cover - defensive logging
+ logger.error("Error getting priority tags: %s", exc, exc_info=True)
+ return web.json_response({"success": False, "error": str(exc)}, status=500)

async def activate_library(self, request: web.Request) -> web.Response:
"""Activate the selected library."""
@@ -330,16 +352,65 @@ class LoraCodeHandler:
logger.error("Error broadcasting lora code: %s", exc)
results.append({"node_id": "broadcast", "success": False, "error": str(exc)})
else:
- for node_id in node_ids:
+ for entry in node_ids:
+ node_identifier = entry
+ graph_identifier = None
+ if isinstance(entry, dict):
+ node_identifier = entry.get("node_id")
+ graph_identifier = entry.get("graph_id")
+
+ if node_identifier is None:
+ results.append(
+ {
+ "node_id": node_identifier,
+ "graph_id": graph_identifier,
+ "success": False,
+ "error": "Missing node_id parameter",
+ }
+ )
+ continue
+
+ try:
+ parsed_node_id = int(node_identifier)
+ except (TypeError, ValueError):
+ parsed_node_id = node_identifier
+
+ payload = {
+ "id": parsed_node_id,
+ "lora_code": lora_code,
+ "mode": mode,
+ }
+
+ if graph_identifier is not None:
+ payload["graph_id"] = str(graph_identifier)
+
try:
self._prompt_server.instance.send_sync(
"lora_code_update",
- {"id": node_id, "lora_code": lora_code, "mode": mode},
+ payload,
)
+ results.append(
+ {
+ "node_id": parsed_node_id,
+ "graph_id": payload.get("graph_id"),
+ "success": True,
+ }
+ )
- results.append({"node_id": node_id, "success": True})
except Exception as exc: # pragma: no cover - defensive logging
- logger.error("Error sending lora code to node %s: %s", node_id, exc)
- results.append({"node_id": node_id, "success": False, "error": str(exc)})
+ logger.error(
+ "Error sending lora code to node %s (graph %s): %s",
+ parsed_node_id,
+ graph_identifier,
+ exc,
+ )
+ results.append(
+ {
+ "node_id": parsed_node_id,
+ "graph_id": payload.get("graph_id"),
+ "success": False,
+ "error": str(exc),
+ }
+ )

return web.json_response({"success": True, "results": results})
except Exception as exc: # pragma: no cover - defensive logging
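The rewritten loop above accepts both bare node ids and `{node_id, graph_id}` dicts, coercing numeric identifiers to `int` where possible. That normalization is easiest to see in isolation; this standalone sketch uses a hypothetical `normalize_entry` helper with the same rules:

```python
def normalize_entry(entry):
    """Accept a bare id or a {'node_id': ..., 'graph_id': ...} dict and return
    (parsed_node_id, graph_id_or_None), coercing numeric strings to int."""
    node_identifier, graph_identifier = entry, None
    if isinstance(entry, dict):
        node_identifier = entry.get("node_id")
        graph_identifier = entry.get("graph_id")
    try:
        parsed = int(node_identifier)
    except (TypeError, ValueError):
        # Non-numeric ids (or None) pass through unchanged; the caller decides
        # whether a missing id is an error.
        parsed = node_identifier
    return parsed, graph_identifier
```

Catching `TypeError` as well as `ValueError` matters: `int(None)` raises `TypeError`, so a dict without a `node_id` key degrades gracefully instead of crashing the whole batch.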
@@ -557,17 +628,118 @@ class ModelLibraryHandler:
logger.error("Failed to get model versions status: %s", exc, exc_info=True)
return web.json_response({"success": False, "error": str(exc)}, status=500)

+ async def get_civitai_user_models(self, request: web.Request) -> web.Response:
+ try:
+ username = request.query.get("username")
+ if not username:
+ return web.json_response({"success": False, "error": "Missing required parameter: username"}, status=400)
+
+ metadata_provider = await self._metadata_provider_factory()
+ if not metadata_provider:
+ return web.json_response({"success": False, "error": "Metadata provider not available"}, status=503)
+
+ try:
+ models = await metadata_provider.get_user_models(username)
+ except NotImplementedError:
+ return web.json_response({"success": False, "error": "Metadata provider does not support user model queries"}, status=501)
+
+ if models is None:
+ return web.json_response({"success": False, "error": "Failed to fetch user models"}, status=502)
+
+ if not isinstance(models, list):
+ models = []
+
+ lora_scanner = await self._service_registry.get_lora_scanner()
+ checkpoint_scanner = await self._service_registry.get_checkpoint_scanner()
+ embedding_scanner = await self._service_registry.get_embedding_scanner()
+
+ normalized_allowed_types = {model_type.lower() for model_type in CIVITAI_USER_MODEL_TYPES}
+ lora_type_aliases = {model_type.lower() for model_type in VALID_LORA_TYPES}
+
+ type_scanner_map: Dict[str, object | None] = {
+ **{alias: lora_scanner for alias in lora_type_aliases},
+ "checkpoint": checkpoint_scanner,
+ "textualinversion": embedding_scanner,
+ }
+
+ versions: list[dict] = []
+ for model in models:
+ if not isinstance(model, dict):
+ continue
+
+ model_type = str(model.get("type", "")).lower()
+ if model_type not in normalized_allowed_types:
+ continue
+
+ scanner = type_scanner_map.get(model_type)
+ if scanner is None:
+ return web.json_response({"success": False, "error": f'Scanner for type "{model_type}" is not available'}, status=503)
+
+ tags_value = model.get("tags")
+ tags = tags_value if isinstance(tags_value, list) else []
+ model_id = model.get("id")
+ try:
+ model_id_int = int(model_id)
+ except (TypeError, ValueError):
+ continue
+ model_name = model.get("name", "")
+
+ versions_data = model.get("modelVersions")
+ if not isinstance(versions_data, list):
+ continue
+
+ for version in versions_data:
+ if not isinstance(version, dict):
+ continue
+
+ version_id = version.get("id")
+ try:
+ version_id_int = int(version_id)
+ except (TypeError, ValueError):
+ continue
+
+ images = version.get("images") or []
+ thumbnail_url = None
+ if images and isinstance(images, list):
+ first_image = images[0]
+ if isinstance(first_image, dict):
+ raw_url = first_image.get("url")
+ media_type = first_image.get("type")
+ rewritten_url, _ = rewrite_preview_url(raw_url, media_type)
+ thumbnail_url = rewritten_url
+
+ in_library = await scanner.check_model_version_exists(version_id_int)
+
+ versions.append(
+ {
+ "modelId": model_id_int,
+ "versionId": version_id_int,
+ "modelName": model_name,
+ "versionName": version.get("name", ""),
+ "type": model.get("type"),
+ "tags": tags,
+ "baseModel": version.get("baseModel"),
+ "thumbnailUrl": thumbnail_url,
+ "inLibrary": in_library,
+ }
+ )
+
+ return web.json_response({"success": True, "username": username, "versions": versions})
|
||||
except Exception as exc: # pragma: no cover - defensive logging
|
||||
logger.error("Failed to get Civitai user models: %s", exc, exc_info=True)
|
||||
return web.json_response({"success": False, "error": str(exc)}, status=500)
|
||||
|
||||
|
||||
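The handler above flattens a Civitai user-models payload into per-version rows. A minimal standalone sketch of that flattening step follows; the sample payload shape and the `flatten_user_models` helper are invented for illustration (the real handler also consults the scanners to compute `inLibrary` and rewrites preview URLs):

```python
# Hedged sketch of the per-version flattening done by get_civitai_user_models.
# ALLOWED_TYPES is a hypothetical stand-in for CIVITAI_USER_MODEL_TYPES.
ALLOWED_TYPES = {"lora", "checkpoint", "textualinversion"}

def flatten_user_models(models):
    rows = []
    for model in models:
        if not isinstance(model, dict):
            continue
        if str(model.get("type", "")).lower() not in ALLOWED_TYPES:
            continue
        try:
            model_id = int(model.get("id"))  # skip models with non-numeric ids
        except (TypeError, ValueError):
            continue
        for version in model.get("modelVersions") or []:
            if not isinstance(version, dict):
                continue
            try:
                version_id = int(version.get("id"))
            except (TypeError, ValueError):
                continue
            rows.append({
                "modelId": model_id,
                "versionId": version_id,
                "modelName": model.get("name", ""),
                "versionName": version.get("name", ""),
                "baseModel": version.get("baseModel"),
            })
    return rows

sample = [
    {"id": "10", "type": "LORA", "name": "Example",
     "modelVersions": [{"id": 101, "name": "v1", "baseModel": "SDXL"}]},
    {"id": "bad", "type": "LORA", "modelVersions": []},  # dropped: bad id
]
rows = flatten_user_models(sample)
```

Malformed entries (non-dict models, non-numeric ids) are silently skipped rather than failing the whole request, matching the handler's defensive style.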
class MetadataArchiveHandler:
    def __init__(
        self,
        *,
        metadata_archive_manager_factory: Callable[[], Awaitable[MetadataArchiveManagerProtocol]] = get_metadata_archive_manager,
-        settings_service=default_settings,
+        settings_service=None,
        metadata_provider_updater: Callable[[], Awaitable[None]] = update_metadata_providers,
    ) -> None:
        self._metadata_archive_manager_factory = metadata_archive_manager_factory
-        self._settings = settings_service
+        self._settings = settings_service or get_settings_manager()
        self._metadata_provider_updater = metadata_provider_updater

    async def download_metadata_archive(self, request: web.Request) -> web.Response:
@@ -679,10 +851,21 @@ class NodeRegistryHandler:
            node_id = node.get("node_id")
            if node_id is None:
                return web.json_response({"success": False, "error": f"Node {index} missing node_id parameter"}, status=400)
            graph_id = node.get("graph_id")
            if graph_id is None:
                return web.json_response({"success": False, "error": f"Node {index} missing graph_id parameter"}, status=400)
            graph_name = node.get("graph_name")
            try:
                node["node_id"] = int(node_id)
            except (TypeError, ValueError):
                return web.json_response({"success": False, "error": f"Node {index} node_id must be an integer"}, status=400)
            node["graph_id"] = str(graph_id)
            if graph_name is None:
                node["graph_name"] = None
            elif isinstance(graph_name, str):
                node["graph_name"] = graph_name
            else:
                node["graph_name"] = str(graph_name)

        await self._node_registry.register_nodes(nodes)
        return web.json_response({"success": True, "message": f"{len(nodes)} nodes registered successfully"})

@@ -769,6 +952,7 @@ class MiscHandlerSet:
            "health_check": self.health.health_check,
            "get_settings": self.settings.get_settings,
            "update_settings": self.settings.update_settings,
            "get_priority_tags": self.settings.get_priority_tags,
            "get_settings_libraries": self.settings.get_libraries,
            "activate_library": self.settings.activate_library,
            "update_usage_stats": self.usage_stats.update_usage_stats,
@@ -779,6 +963,7 @@ class MiscHandlerSet:
            "register_nodes": self.node_registry.register_nodes,
            "get_registry": self.node_registry.get_registry,
            "check_model_exists": self.model_library.check_model_exists,
            "get_civitai_user_models": self.model_library.get_civitai_user_models,
            "download_metadata_archive": self.metadata_archive.download_metadata_archive,
            "remove_metadata_archive": self.metadata_archive.remove_metadata_archive,
            "get_metadata_archive_status": self.metadata_archive.get_metadata_archive_status,

@@ -6,7 +6,7 @@ import json
import logging
import os
from dataclasses import dataclass
-from typing import Awaitable, Callable, Dict, Iterable, Mapping, Optional
+from typing import Awaitable, Callable, Dict, Iterable, List, Mapping, Optional

from aiohttp import web
import jinja2
@@ -29,7 +29,9 @@ from ...services.use_cases import (
)
from ...services.websocket_manager import WebSocketManager
from ...services.websocket_progress_callback import WebSocketProgressCallback
from ...services.errors import RateLimitError
from ...utils.file_utils import calculate_sha256
from ...utils.metadata_manager import MetadataManager


class ModelPageView:
@@ -164,6 +166,11 @@ class ModelListingHandler:
        except (json.JSONDecodeError, TypeError):
            pass

        has_update = request.query.get("has_update", "false")
        has_update_filter = (
            has_update.lower() in {"1", "true", "yes"} if isinstance(has_update, str) else False
        )

        return {
            "page": page,
            "page_size": page_size,
@@ -176,6 +183,7 @@ class ModelListingHandler:
            "search_options": search_options,
            "hash_filters": hash_filters,
            "favorites_only": favorites_only,
            "has_update": has_update_filter,
            **self._parse_specific_params(request),
        }

@@ -244,6 +252,8 @@ class ModelManagementHandler:
        if not model_data.get("sha256"):
            return web.json_response({"success": False, "error": "No SHA256 hash found"}, status=400)

        await MetadataManager.hydrate_model_data(model_data)

        success, error = await self._metadata_sync.fetch_and_update_model(
            sha256=model_data["sha256"],
            file_path=file_path,
@@ -755,6 +765,30 @@ class ModelDownloadHandler:
            self._logger.error("Error cancelling download via GET: %s", exc, exc_info=True)
            return web.json_response({"success": False, "error": str(exc)}, status=500)

    async def pause_download_get(self, request: web.Request) -> web.Response:
        try:
            download_id = request.query.get("download_id")
            if not download_id:
                return web.json_response({"success": False, "error": "Download ID is required"}, status=400)
            result = await self._download_coordinator.pause_download(download_id)
            status = 200 if result.get("success") else 400
            return web.json_response(result, status=status)
        except Exception as exc:
            self._logger.error("Error pausing download via GET: %s", exc, exc_info=True)
            return web.json_response({"success": False, "error": str(exc)}, status=500)

    async def resume_download_get(self, request: web.Request) -> web.Response:
        try:
            download_id = request.query.get("download_id")
            if not download_id:
                return web.json_response({"success": False, "error": "Download ID is required"}, status=400)
            result = await self._download_coordinator.resume_download(download_id)
            status = 200 if result.get("success") else 400
            return web.json_response(result, status=status)
        except Exception as exc:
            self._logger.error("Error resuming download via GET: %s", exc, exc_info=True)
            return web.json_response({"success": False, "error": str(exc)}, status=500)

    async def get_download_progress(self, request: web.Request) -> web.Response:
        try:
            download_id = request.match_info.get("download_id")
@@ -763,7 +797,23 @@
            progress_data = self._ws_manager.get_download_progress(download_id)
            if progress_data is None:
                return web.json_response({"success": False, "error": "Download ID not found"}, status=404)
-            return web.json_response({"success": True, "progress": progress_data.get("progress", 0)})
+            response_payload = {
+                "success": True,
+                "progress": progress_data.get("progress", 0),
+                "bytes_downloaded": progress_data.get("bytes_downloaded"),
+                "total_bytes": progress_data.get("total_bytes"),
+                "bytes_per_second": progress_data.get("bytes_per_second", 0.0),
+            }
+
+            status = progress_data.get("status")
+            if status and status != "progress":
+                response_payload["status"] = status
+                if "message" in progress_data:
+                    response_payload["message"] = progress_data["message"]
+            elif status is None and "message" in progress_data:
+                response_payload["message"] = progress_data["message"]
+
+            return web.json_response(response_payload)
        except Exception as exc:
            self._logger.error("Error getting download progress: %s", exc, exc_info=True)
            return web.json_response({"success": False, "error": str(exc)}, status=500)
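The richer progress response assembled above can be sketched in isolation. `build_progress_payload` and the sample `progress_data` dict are hypothetical helpers for illustration; the field names mirror the handler's payload:

```python
# Hedged sketch of the download-progress payload logic: always report
# progress and throughput, and surface a non-default status plus any
# message the download coordinator attached (e.g. after a pause).
def build_progress_payload(progress_data):
    payload = {
        "success": True,
        "progress": progress_data.get("progress", 0),
        "bytes_downloaded": progress_data.get("bytes_downloaded"),
        "total_bytes": progress_data.get("total_bytes"),
        "bytes_per_second": progress_data.get("bytes_per_second", 0.0),
    }
    status = progress_data.get("status")
    if status and status != "progress":
        payload["status"] = status
        if "message" in progress_data:
            payload["message"] = progress_data["message"]
    elif status is None and "message" in progress_data:
        payload["message"] = progress_data["message"]
    return payload

sample = {"progress": 42, "status": "paused", "message": "Paused by user"}
payload = build_progress_payload(sample)
```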
@@ -825,18 +875,30 @@ class ModelCivitaiHandler:
                    status=400,
                )

            cache = await self._service.scanner.get_cached_data()
            version_index = cache.version_index

            for version in versions:
-                model_file = self._find_model_file(version.get("files", [])) if isinstance(version.get("files"), Iterable) else None
-                if model_file:
-                    hashes = model_file.get("hashes", {}) if isinstance(model_file, Mapping) else {}
-                    sha256 = hashes.get("SHA256") if isinstance(hashes, Mapping) else None
-                    if sha256:
-                        version["existsLocally"] = self._service.has_hash(sha256)
-                        if version["existsLocally"]:
-                            version["localPath"] = self._service.get_path_by_hash(sha256)
-                    version["modelSizeKB"] = model_file.get("sizeKB") if isinstance(model_file, Mapping) else None
+                version_id = None
+                version_id_raw = version.get("id")
+                if version_id_raw is not None:
+                    try:
+                        version_id = int(str(version_id_raw))
+                    except (TypeError, ValueError):
+                        version_id = None
+
+                cache_entry = version_index.get(version_id) if (version_id is not None and version_index) else None
+                version["existsLocally"] = cache_entry is not None
+                if cache_entry and isinstance(cache_entry, Mapping):
+                    local_path = cache_entry.get("file_path")
+                    if local_path:
+                        version["localPath"] = local_path
+                else:
+                    version["existsLocally"] = False
+                    version.pop("localPath", None)
+
+                model_file = self._find_model_file(version.get("files", [])) if isinstance(version.get("files"), Iterable) else None
+                if model_file and isinstance(model_file, Mapping):
+                    version["modelSizeKB"] = model_file.get("sizeKB")
            return web.json_response(versions)
        except Exception as exc:
            self._logger.error("Error fetching %s model versions: %s", self._service.model_type, exc)
@@ -962,6 +1024,156 @@ class ModelAutoOrganizeHandler:
        return web.json_response({"success": False, "error": str(exc)}, status=500)


class ModelUpdateHandler:
    """Handle update tracking requests."""

    def __init__(
        self,
        *,
        service,
        update_service,
        metadata_provider_selector,
        logger: logging.Logger,
    ) -> None:
        self._service = service
        self._update_service = update_service
        self._metadata_provider_selector = metadata_provider_selector
        self._logger = logger

    async def refresh_model_updates(self, request: web.Request) -> web.Response:
        payload = await self._read_json(request)
        force_refresh = self._parse_bool(request.query.get("force")) or self._parse_bool(
            payload.get("force")
        )
        provider = await self._get_civitai_provider()
        if provider is None:
            return web.json_response(
                {"success": False, "error": "Civitai provider not available"}, status=503
            )

        try:
            records = await self._update_service.refresh_for_model_type(
                self._service.model_type,
                self._service.scanner,
                provider,
                force_refresh=force_refresh,
            )
        except RateLimitError as exc:
            return web.json_response(
                {"success": False, "error": str(exc) or "Rate limited"}, status=429
            )
        except Exception as exc:  # pragma: no cover - defensive logging
            self._logger.error("Failed to refresh model updates: %s", exc, exc_info=True)
            return web.json_response({"success": False, "error": str(exc)}, status=500)

        return web.json_response(
            {
                "success": True,
                "records": [self._serialize_record(record) for record in records.values()],
            }
        )

    async def set_model_update_ignore(self, request: web.Request) -> web.Response:
        payload = await self._read_json(request)
        model_id = self._normalize_model_id(payload.get("modelId"))
        if model_id is None:
            return web.json_response({"success": False, "error": "modelId is required"}, status=400)

        should_ignore = self._parse_bool(payload.get("shouldIgnore"))
        record = await self._update_service.set_should_ignore(
            self._service.model_type, model_id, should_ignore
        )
        return web.json_response({"success": True, "record": self._serialize_record(record)})

    async def get_model_update_status(self, request: web.Request) -> web.Response:
        model_id = self._normalize_model_id(request.match_info.get("model_id"))
        if model_id is None:
            return web.json_response({"success": False, "error": "model_id must be an integer"}, status=400)

        refresh = self._parse_bool(request.query.get("refresh"))
        force = self._parse_bool(request.query.get("force"))

        try:
            record = await self._get_or_refresh_record(model_id, refresh=refresh, force=force)
        except RateLimitError as exc:
            return web.json_response(
                {"success": False, "error": str(exc) or "Rate limited"}, status=429
            )

        if record is None:
            return web.json_response(
                {"success": False, "error": "Model not tracked"}, status=404
            )

        return web.json_response({"success": True, "record": self._serialize_record(record)})

    async def _get_or_refresh_record(
        self, model_id: int, *, refresh: bool, force: bool
    ) -> Optional[object]:
        record = await self._update_service.get_record(self._service.model_type, model_id)
        if record and not refresh and not force:
            return record

        provider = await self._get_civitai_provider()
        if provider is None:
            return record

        return await self._update_service.refresh_single_model(
            self._service.model_type,
            model_id,
            self._service.scanner,
            provider,
            force_refresh=force or refresh,
        )

    async def _get_civitai_provider(self):
        try:
            return await self._metadata_provider_selector("civitai_api")
        except Exception as exc:  # pragma: no cover - defensive log
            self._logger.error("Failed to acquire civitai provider: %s", exc, exc_info=True)
            return None

    async def _read_json(self, request: web.Request) -> Dict:
        if not request.can_read_body:
            return {}
        try:
            return await request.json()
        except Exception:
            return {}

    @staticmethod
    def _parse_bool(value) -> bool:
        if isinstance(value, bool):
            return value
        if isinstance(value, str):
            return value.lower() in {"1", "true", "yes"}
        if isinstance(value, (int, float)):
            return bool(value)
        return False

    @staticmethod
    def _normalize_model_id(value) -> Optional[int]:
        try:
            if value is None:
                return None
            return int(value)
        except (TypeError, ValueError):
            return None

    @staticmethod
    def _serialize_record(record) -> Dict:
        return {
            "modelType": record.model_type,
            "modelId": record.model_id,
            "largestVersionId": record.largest_version_id,
            "versionIds": record.version_ids,
            "inLibraryVersionIds": record.in_library_version_ids,
            "lastCheckedAt": record.last_checked_at,
            "shouldIgnore": record.should_ignore,
            "hasUpdate": record.has_update(),
        }


@dataclass
class ModelHandlerSet:
    """Aggregate concrete handlers into a flat mapping."""
@@ -974,6 +1186,7 @@ class ModelHandlerSet:
    civitai: ModelCivitaiHandler
    move: ModelMoveHandler
    auto_organize: ModelAutoOrganizeHandler
    updates: ModelUpdateHandler

    def to_route_mapping(self) -> Dict[str, Callable[[web.Request], Awaitable[web.Response]]]:
        return {
@@ -1002,6 +1215,8 @@ class ModelHandlerSet:
            "download_model": self.download.download_model,
            "download_model_get": self.download.download_model_get,
            "cancel_download_get": self.download.cancel_download_get,
            "pause_download_get": self.download.pause_download_get,
            "resume_download_get": self.download.resume_download_get,
            "get_download_progress": self.download.get_download_progress,
            "get_civitai_versions": self.civitai.get_civitai_versions,
            "get_civitai_model_by_version": self.civitai.get_civitai_model_by_version,
@@ -1016,5 +1231,8 @@ class ModelHandlerSet:
            "get_model_metadata": self.query.get_model_metadata,
            "get_model_description": self.query.get_model_description,
            "get_relative_paths": self.query.get_relative_paths,
            "refresh_model_updates": self.updates.refresh_model_updates,
            "set_model_update_ignore": self.updates.set_model_update_ignore,
            "get_model_update_status": self.updates.get_model_update_status,
        }

@@ -23,8 +23,10 @@ class LoraRoutes(BaseModelRoutes):
    async def initialize_services(self):
        """Initialize services from ServiceRegistry"""
        lora_scanner = await ServiceRegistry.get_lora_scanner()
-        self.service = LoraService(lora_scanner)
+
+        update_service = await ServiceRegistry.get_model_update_service()
+        self.service = LoraService(lora_scanner, update_service=update_service)
+        self.set_model_update_service(update_service)

        # Attach service dependencies
        self.attach_service(self.service)

@@ -229,11 +231,27 @@ class LoraRoutes(BaseModelRoutes):
        trigger_words_text = ",, ".join(all_trigger_words) if all_trigger_words else ""

        # Send update to all connected trigger word toggle nodes
-        for node_id in node_ids:
-            PromptServer.instance.send_sync("trigger_word_update", {
-                "id": node_id,
+        for entry in node_ids:
+            node_identifier = entry
+            graph_identifier = None
+            if isinstance(entry, dict):
+                node_identifier = entry.get("node_id")
+                graph_identifier = entry.get("graph_id")
+
+            try:
+                parsed_node_id = int(node_identifier)
+            except (TypeError, ValueError):
+                parsed_node_id = node_identifier
+
+            payload = {
+                "id": parsed_node_id,
                "message": trigger_words_text
-            })
+            }
+
+            if graph_identifier is not None:
+                payload["graph_id"] = str(graph_identifier)
+
+            PromptServer.instance.send_sync("trigger_word_update", payload)

        return web.json_response({"success": True})

@@ -22,6 +22,7 @@ class RouteDefinition:
MISC_ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
    RouteDefinition("GET", "/api/lm/settings", "get_settings"),
    RouteDefinition("POST", "/api/lm/settings", "update_settings"),
    RouteDefinition("GET", "/api/lm/priority-tags", "get_priority_tags"),
    RouteDefinition("GET", "/api/lm/settings/libraries", "get_settings_libraries"),
    RouteDefinition("POST", "/api/lm/settings/libraries/activate", "activate_library"),
    RouteDefinition("GET", "/api/lm/health-check", "health_check"),
@@ -34,6 +35,7 @@ MISC_ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
    RouteDefinition("POST", "/api/lm/register-nodes", "register_nodes"),
    RouteDefinition("GET", "/api/lm/get-registry", "get_registry"),
    RouteDefinition("GET", "/api/lm/check-model-exists", "check_model_exists"),
    RouteDefinition("GET", "/api/lm/civitai/user-models", "get_civitai_user_models"),
    RouteDefinition("POST", "/api/lm/download-metadata-archive", "download_metadata_archive"),
    RouteDefinition("POST", "/api/lm/remove-metadata-archive", "remove_metadata_archive"),
    RouteDefinition("GET", "/api/lm/metadata-archive-status", "get_metadata_archive_status"),

@@ -14,7 +14,7 @@ from ..services.metadata_service import (
    get_metadata_provider,
    update_metadata_providers,
)
-from ..services.settings_manager import settings
+from ..services.settings_manager import get_settings_manager
from ..services.downloader import get_downloader
from ..utils.usage_stats import UsageStats
from .handlers.misc_handlers import (
@@ -47,7 +47,7 @@ class MiscRoutes:
    def __init__(
        self,
        *,
-        settings_service=settings,
+        settings_service=None,
        usage_stats_factory: Callable[[], UsageStats] = UsageStats,
        prompt_server: type[PromptServer] = PromptServer,
        service_registry_adapter=build_service_registry_adapter(),
@@ -60,7 +60,7 @@ class MiscRoutes:
        node_registry: NodeRegistry | None = None,
        standalone_mode_flag: bool = standalone_mode,
    ) -> None:
-        self._settings = settings_service
+        self._settings = settings_service or get_settings_manager()
        self._usage_stats_factory = usage_stats_factory
        self._prompt_server = prompt_server
        self._service_registry_adapter = service_registry_adapter

@@ -55,9 +55,14 @@ COMMON_ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
    RouteDefinition("GET", "/api/lm/{prefix}/civitai/versions/{model_id}", "get_civitai_versions"),
    RouteDefinition("GET", "/api/lm/{prefix}/civitai/model/version/{modelVersionId}", "get_civitai_model_by_version"),
    RouteDefinition("GET", "/api/lm/{prefix}/civitai/model/hash/{hash}", "get_civitai_model_by_hash"),
    RouteDefinition("POST", "/api/lm/{prefix}/updates/refresh", "refresh_model_updates"),
    RouteDefinition("POST", "/api/lm/{prefix}/updates/ignore", "set_model_update_ignore"),
    RouteDefinition("GET", "/api/lm/{prefix}/updates/status/{model_id}", "get_model_update_status"),
    RouteDefinition("POST", "/api/lm/download-model", "download_model"),
    RouteDefinition("GET", "/api/lm/download-model-get", "download_model_get"),
    RouteDefinition("GET", "/api/lm/cancel-download-get", "cancel_download_get"),
    RouteDefinition("GET", "/api/lm/pause-download", "pause_download_get"),
    RouteDefinition("GET", "/api/lm/resume-download", "resume_download_get"),
    RouteDefinition("GET", "/api/lm/download-progress/{download_id}", "get_download_progress"),
    RouteDefinition("GET", "/{prefix}", "handle_models_page"),
)
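The `{prefix}` in the templated routes above is filled in per model type at registration time. A small sketch of expanding the new update-tracking routes into concrete paths; the prefix names (`loras`, `checkpoints`) are assumptions for illustration:

```python
# Hedged sketch: expand the templated update routes into concrete paths
# for each model-type prefix.
NEW_UPDATE_ROUTES = [
    ("POST", "/api/lm/{prefix}/updates/refresh"),
    ("POST", "/api/lm/{prefix}/updates/ignore"),
    ("GET", "/api/lm/{prefix}/updates/status/{model_id}"),
]

def expand(prefixes):
    # One (method, path) pair per prefix per templated route.
    return [
        (method, path.replace("{prefix}", prefix))
        for prefix in prefixes
        for method, path in NEW_UPDATE_ROUTES
    ]

routes = expand(["loras", "checkpoints"])
```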
@@ -8,13 +8,32 @@ from collections import defaultdict, Counter
from typing import Dict, List, Any

from ..config import config
-from ..services.settings_manager import settings
+from ..services.settings_manager import get_settings_manager
from ..services.server_i18n import server_i18n
from ..services.service_registry import ServiceRegistry
from ..utils.usage_stats import UsageStats

logger = logging.getLogger(__name__)


class _SettingsProxy:
    def __init__(self):
        self._manager = None

    def _resolve(self):
        if self._manager is None:
            self._manager = get_settings_manager()
        return self._manager

    def get(self, *args, **kwargs):
        return self._resolve().get(*args, **kwargs)

    def __getattr__(self, item):
        return getattr(self._resolve(), item)


settings = _SettingsProxy()

class StatsRoutes:
    """Route handlers for Statistics page and API endpoints"""

@@ -66,7 +85,9 @@ class StatsRoutes:
        is_initializing = lora_initializing or checkpoint_initializing or embedding_initializing

        # Get the user's language setting
-        user_language = settings.get('language', 'en')
+        settings_object = settings
+        user_language = settings_object.get('language', 'en')
+        settings_manager = settings_object if not isinstance(settings_object, _SettingsProxy) else settings_object._resolve()

        # Set the server-side i18n locale
        server_i18n.set_locale(user_language)
@@ -79,7 +100,7 @@ class StatsRoutes:
        template = self.template_env.get_template('statistics.html')
        rendered = template.render(
            is_initializing=is_initializing,
-            settings=settings,
+            settings=settings_manager,
            request=request,
            t=server_i18n.get_translation,
        )

@@ -1,15 +1,19 @@
from abc import ABC, abstractmethod
-from typing import Dict, List, Optional, Type
+import asyncio
+from typing import Dict, List, Optional, Type, TYPE_CHECKING
import logging
import os

from ..utils.models import BaseModelMetadata
from ..utils.metadata_manager import MetadataManager
from .model_query import FilterCriteria, ModelCacheRepository, ModelFilterSet, SearchStrategy, SettingsProvider
-from .settings_manager import settings as default_settings
+from .settings_manager import get_settings_manager

logger = logging.getLogger(__name__)

if TYPE_CHECKING:
    from .model_update_service import ModelUpdateService

class BaseModelService(ABC):
    """Base service class for all model types"""

@@ -23,6 +27,7 @@ class BaseModelService(ABC):
        filter_set: Optional[ModelFilterSet] = None,
        search_strategy: Optional[SearchStrategy] = None,
        settings_provider: Optional[SettingsProvider] = None,
        update_service: Optional["ModelUpdateService"] = None,
    ):
        """Initialize the service.

@@ -34,14 +39,16 @@ class BaseModelService(ABC):
            filter_set: Filter component controlling folder/tag/favorites logic.
            search_strategy: Search component for fuzzy/text matching.
            settings_provider: Settings object; defaults to the global settings manager.
            update_service: Service used to determine whether models have remote updates available.
        """
        self.model_type = model_type
        self.scanner = scanner
        self.metadata_class = metadata_class
-        self.settings = settings_provider or default_settings
+        self.settings = settings_provider or get_settings_manager()
        self.cache_repository = cache_repository or ModelCacheRepository(scanner)
        self.filter_set = filter_set or ModelFilterSet(self.settings)
        self.search_strategy = search_strategy or SearchStrategy()
        self.update_service = update_service

    async def get_paginated_data(
        self,
@@ -56,6 +63,7 @@ class BaseModelService(ABC):
        search_options: dict = None,
        hash_filters: dict = None,
        favorites_only: bool = False,
        has_update: bool = False,
        **kwargs,
    ) -> Dict:
        """Get paginated and filtered model data"""
@@ -85,6 +93,9 @@ class BaseModelService(ABC):

        filtered_data = await self._apply_specific_filters(filtered_data, **kwargs)

        if has_update:
            filtered_data = await self._apply_update_filter(filtered_data)

        return self._paginate(filtered_data, page, page_size)


@@ -144,6 +155,59 @@ class BaseModelService(ABC):
    async def _apply_specific_filters(self, data: List[Dict], **kwargs) -> List[Dict]:
        """Apply model-specific filters - to be overridden by subclasses if needed"""
        return data

    async def _apply_update_filter(self, data: List[Dict]) -> List[Dict]:
        """Filter models to those with remote updates available when requested."""
        if not data:
            return []
        if self.update_service is None:
            logger.warning(
                "Requested has_update filter for %s models but update service is unavailable",
                self.model_type,
            )
            return []

        candidates: List[tuple[Dict, int]] = []
        for item in data:
            model_id = self._extract_model_id(item)
            if model_id is not None:
                candidates.append((item, model_id))

        if not candidates:
            return []

        tasks = [
            self.update_service.has_update(self.model_type, model_id)
            for _, model_id in candidates
        ]
        results = await asyncio.gather(*tasks, return_exceptions=True)

        filtered: List[Dict] = []
        for (item, model_id), result in zip(candidates, results):
            if isinstance(result, Exception):
                logger.error(
                    "Failed to resolve update status for model %s (%s): %s",
                    model_id,
                    self.model_type,
                    result,
                )
                continue
            if result:
                filtered.append(item)
        return filtered

    @staticmethod
    def _extract_model_id(item: Dict) -> Optional[int]:
        civitai = item.get('civitai') if isinstance(item, dict) else None
        if not isinstance(civitai, dict):
            return None
        try:
            value = civitai.get('modelId')
            if value is None:
                return None
            return int(value)
        except (TypeError, ValueError):
            return None

    def _paginate(self, data: List[Dict], page: int, page_size: int) -> Dict:
        """Apply pagination to filtered data"""
@@ -373,4 +437,4 @@ class BaseModelService(ABC):
            x.lower() # Then alphabetically
        ))

-        return matching_paths[:limit]
+        return matching_paths[:limit]
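`_apply_update_filter` checks all candidates concurrently with `asyncio.gather(..., return_exceptions=True)`, so one failed lookup drops only that model instead of aborting the whole filter. A standalone sketch of that pattern (the `has_update` callback and sample ids are invented):

```python
import asyncio

# Hedged sketch of the gather-based filtering used by _apply_update_filter:
# run all update checks concurrently, keep items whose check returned True,
# and silently skip items whose check raised.
async def filter_with_updates(items, has_update):
    results = await asyncio.gather(
        *(has_update(item) for item in items), return_exceptions=True
    )
    return [
        item for item, ok in zip(items, results)
        if not isinstance(ok, Exception) and ok
    ]

async def fake_has_update(model_id):
    if model_id == 3:
        raise RuntimeError("lookup failed")  # failures are skipped, not fatal
    return model_id % 2 == 1  # pretend odd ids have updates

result = asyncio.run(filter_with_updates([1, 2, 3, 4, 5], fake_has_update))
```

Because `zip` preserves input order, surviving items come back in their original order even though the checks ran concurrently.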
@@ -11,13 +11,14 @@ logger = logging.getLogger(__name__)
class CheckpointService(BaseModelService):
    """Checkpoint-specific service implementation"""

-    def __init__(self, scanner):
+    def __init__(self, scanner, update_service=None):
        """Initialize Checkpoint service

        Args:
            scanner: Checkpoint scanner instance
            update_service: Optional service for remote update tracking.
        """
-        super().__init__("checkpoint", scanner, CheckpointMetadata)
+        super().__init__("checkpoint", scanner, CheckpointMetadata, update_service=update_service)

    async def format_response(self, checkpoint_data: Dict) -> Dict:
        """Format Checkpoint data for API response"""
@@ -46,4 +47,4 @@ class CheckpointService(BaseModelService):

    def find_duplicate_filenames(self) -> Dict:
        """Find Checkpoints with conflicting filenames"""
-        return self.scanner._hash_index.get_duplicate_filenames()
+        return self.scanner._hash_index.get_duplicate_filenames()

py/services/civarchive_client.py (new file, 554 lines)
@@ -0,0 +1,554 @@
import os
import json
import logging
import asyncio
from copy import deepcopy
from typing import Optional, Dict, Tuple, List
from .model_metadata_provider import CivArchiveModelMetadataProvider, ModelMetadataProviderManager
from .downloader import get_downloader
from .errors import RateLimitError

try:
    from bs4 import BeautifulSoup
except ImportError as exc:
    BeautifulSoup = None  # type: ignore[assignment]
    _BS4_IMPORT_ERROR = exc
else:
    _BS4_IMPORT_ERROR = None


def _require_beautifulsoup():
    if BeautifulSoup is None:
        raise RuntimeError(
            "BeautifulSoup (bs4) is required for CivArchive client. "
            "Install it with 'pip install beautifulsoup4'."
        ) from _BS4_IMPORT_ERROR
    return BeautifulSoup


logger = logging.getLogger(__name__)


class CivArchiveClient:
    _instance = None
    _lock = asyncio.Lock()

    @classmethod
    async def get_instance(cls):
        """Get singleton instance of CivArchiveClient"""
        async with cls._lock:
            if cls._instance is None:
                cls._instance = cls()

                # Register this client as a metadata provider
                provider_manager = await ModelMetadataProviderManager.get_instance()
                provider_manager.register_provider('civarchive', CivArchiveModelMetadataProvider(cls._instance), False)

            return cls._instance

    def __init__(self):
        # Check if already initialized for singleton pattern
        if hasattr(self, '_initialized'):
            return
        self._initialized = True

        self.base_url = "https://civarchive.com/api"
    async def _request_json(
        self,
        path: str,
        params: Optional[Dict[str, str]] = None
    ) -> Tuple[Optional[Dict], Optional[str]]:
        """Call CivArchive API and return JSON payload"""
        success, payload = await self._make_request(path, params=params)
        if not success:
            error = payload if isinstance(payload, str) else "Request failed"
            return None, error
        if not isinstance(payload, dict):
            return None, "Invalid response structure"
        return payload, None

    async def _make_request(
        self,
        path: str,
        *,
        params: Optional[Dict[str, str]] = None,
    ) -> Tuple[bool, Dict | str]:
        """Wrapper around downloader.make_request that surfaces rate limits."""

        downloader = await get_downloader()
        kwargs: Dict[str, Dict[str, str]] = {}
        if params:
            safe_params = {str(key): str(value) for key, value in params.items() if value is not None}
            if safe_params:
                kwargs["params"] = safe_params

        success, payload = await downloader.make_request(
            "GET",
            f"{self.base_url}{path}",
            use_auth=False,
            **kwargs,
        )
        if not success and isinstance(payload, RateLimitError):
            if payload.provider is None:
                payload.provider = "civarchive_api"
            raise payload
        return success, payload
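The query-parameter handling in `_make_request` (stringify keys and values, drop `None` entries before passing them to the HTTP layer) can be sketched as a standalone helper. `sanitize_params` is an illustrative name, not a function from this codebase:

```python
def sanitize_params(params: dict) -> dict:
    """Coerce keys and values to strings and drop None values,
    mirroring the kwargs preparation in _make_request above."""
    return {str(key): str(value) for key, value in params.items() if value is not None}
```

Dropping `None` values (rather than sending the literal string `"None"`) is what lets callers pass optional query parameters like `modelVersionId` unconditionally.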
    @staticmethod
    def _normalize_payload(payload: Dict) -> Dict:
        """Unwrap CivArchive responses that wrap content under a data key"""
        if not isinstance(payload, dict):
            return {}
        data = payload.get("data")
        if isinstance(data, dict):
            return data
        return payload

    @staticmethod
    def _split_context(payload: Dict) -> Tuple[Dict, Dict, List[Dict]]:
        """Separate version payload from surrounding model context"""
        data = CivArchiveClient._normalize_payload(payload)
        context: Dict = {}
        fallback_files: List[Dict] = []
        version: Dict = {}

        for key, value in data.items():
            if key in {"version", "model"}:
                continue
            context[key] = value

        if isinstance(data.get("version"), dict):
            version = data["version"]

        model_block = data.get("model")
        if isinstance(model_block, dict):
            for key, value in model_block.items():
                if key == "version":
                    if not version and isinstance(value, dict):
                        version = value
                    continue
                context.setdefault(key, value)
            fallback_files = fallback_files or model_block.get("files") or []

        fallback_files = fallback_files or data.get("files") or []
        return context, version, fallback_files
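The context-splitting logic above can be exercised without the rest of the client. The sketch below is a simplified, dependency-free re-implementation for illustration (same field names as the method, but a plain function rather than the class's static method):

```python
def split_context(payload):
    """Split a CivArchive payload into (model context, version dict, fallback files),
    unwrapping an optional {"data": {...}} envelope as _split_context does."""
    data = payload.get("data") if isinstance(payload.get("data"), dict) else payload
    context, version, fallback_files = {}, {}, []
    for key, value in data.items():
        if key in {"version", "model"}:
            continue
        context[key] = value
    if isinstance(data.get("version"), dict):
        version = data["version"]
    model_block = data.get("model")
    if isinstance(model_block, dict):
        for key, value in model_block.items():
            if key == "version":
                if not version and isinstance(value, dict):
                    version = value
                continue
            # Top-level keys win over model-block keys
            context.setdefault(key, value)
        fallback_files = fallback_files or model_block.get("files") or []
    fallback_files = fallback_files or data.get("files") or []
    return context, version, fallback_files
```

The `setdefault` call is the key design choice: fields already present at the top level of the payload take precedence over the same fields nested inside the `model` block.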
    @staticmethod
    def _ensure_list(value) -> List:
        if isinstance(value, list):
            return value
        if value is None:
            return []
        return [value]

    @staticmethod
    def _build_model_info(context: Dict) -> Dict:
        tags = context.get("tags")
        if not isinstance(tags, list):
            tags = list(tags) if isinstance(tags, (set, tuple)) else ([] if tags is None else [tags])
        return {
            "name": context.get("name"),
            "type": context.get("type"),
            "nsfw": bool(context.get("is_nsfw", context.get("nsfw", False))),
            "description": context.get("description"),
            "tags": tags,
        }

    @staticmethod
    def _build_creator_info(context: Dict) -> Dict:
        username = context.get("creator_username") or context.get("username") or ""
        image = context.get("creator_image") or context.get("creator_avatar") or ""
        creator: Dict[str, Optional[str]] = {
            "username": username,
            "image": image,
        }
        if context.get("creator_name"):
            creator["name"] = context["creator_name"]
        if context.get("creator_url"):
            creator["url"] = context["creator_url"]
        return creator
    @staticmethod
    def _transform_file_entry(file_data: Dict) -> Dict:
        mirrors = file_data.get("mirrors") or []
        if not isinstance(mirrors, list):
            mirrors = [mirrors]
        available_mirror = next(
            (mirror for mirror in mirrors if isinstance(mirror, dict) and mirror.get("deletedAt") is None),
            None
        )
        download_url = file_data.get("downloadUrl")
        if not download_url and available_mirror:
            download_url = available_mirror.get("url")
        name = file_data.get("name")
        if not name and available_mirror:
            name = available_mirror.get("filename")

        transformed: Dict = {
            "id": file_data.get("id"),
            "sizeKB": file_data.get("sizeKB"),
            "name": name,
            "type": file_data.get("type"),
            "downloadUrl": download_url,
            "primary": True,
            # TODO: for some reason is_primary is false in CivArchive responses; needs investigation.
            # "primary": bool(file_data.get("is_primary", file_data.get("primary", False))),
            "mirrors": mirrors,
        }

        sha256 = file_data.get("sha256")
        if sha256:
            transformed["hashes"] = {"SHA256": str(sha256).upper()}
        elif isinstance(file_data.get("hashes"), dict):
            transformed["hashes"] = file_data["hashes"]

        if "metadata" in file_data:
            transformed["metadata"] = file_data["metadata"]

        if file_data.get("modelVersionId") is not None:
            transformed["modelVersionId"] = file_data.get("modelVersionId")
        elif file_data.get("model_version_id") is not None:
            transformed["modelVersionId"] = file_data.get("model_version_id")

        if file_data.get("modelId") is not None:
            transformed["modelId"] = file_data.get("modelId")
        elif file_data.get("model_id") is not None:
            transformed["modelId"] = file_data.get("model_id")

        return transformed

    def _transform_files(
        self,
        files: Optional[List[Dict]],
        fallback_files: Optional[List[Dict]] = None
    ) -> List[Dict]:
        candidates: List[Dict] = []
        if isinstance(files, list) and files:
            candidates = files
        elif isinstance(fallback_files, list):
            candidates = fallback_files

        transformed_files: List[Dict] = []
        for file_data in candidates:
            if isinstance(file_data, dict):
                transformed_files.append(self._transform_file_entry(file_data))
        return transformed_files
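The mirror-selection rule in `_transform_file_entry` (first mirror whose `deletedAt` is null wins) is small enough to test in isolation. `pick_mirror` is a hypothetical standalone name for that rule:

```python
def pick_mirror(file_data: dict):
    """Return the first non-deleted mirror of a file entry, or None,
    replicating the selection in _transform_file_entry above."""
    mirrors = file_data.get("mirrors") or []
    if not isinstance(mirrors, list):
        mirrors = [mirrors]
    return next(
        (m for m in mirrors if isinstance(m, dict) and m.get("deletedAt") is None),
        None,
    )
```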
    def _transform_version(
        self,
        context: Dict,
        version: Dict,
        fallback_files: Optional[List[Dict]] = None
    ) -> Optional[Dict]:
        if not version:
            return None

        version_copy = deepcopy(version)
        version_copy.pop("model", None)
        version_copy.pop("creator", None)

        if "trigger" in version_copy:
            triggers = version_copy.pop("trigger")
            if isinstance(triggers, list):
                version_copy["trainedWords"] = triggers
            elif triggers is None:
                version_copy["trainedWords"] = []
            else:
                version_copy["trainedWords"] = [triggers]

        if "trainedWords" in version_copy and isinstance(version_copy["trainedWords"], str):
            version_copy["trainedWords"] = [version_copy["trainedWords"]]

        if "nsfw_level" in version_copy:
            version_copy["nsfwLevel"] = version_copy.pop("nsfw_level")
        elif "nsfwLevel" not in version_copy and context.get("nsfw_level") is not None:
            version_copy["nsfwLevel"] = context.get("nsfw_level")

        stats_keys = ["downloadCount", "ratingCount", "rating"]
        stats = {key: version_copy.pop(key) for key in stats_keys if key in version_copy}
        if stats:
            version_copy["stats"] = stats

        version_copy["files"] = self._transform_files(version_copy.get("files"), fallback_files)
        version_copy["images"] = self._ensure_list(version_copy.get("images"))

        version_copy["model"] = self._build_model_info(context)
        version_copy["creator"] = self._build_creator_info(context)

        version_copy["source"] = "civarchive"
        version_copy["is_deleted"] = bool(context.get("deletedAt")) or bool(version.get("deletedAt"))

        return version_copy
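The `trigger` → `trainedWords` normalization above (CivArchive's field renamed to the Civitai-style key, always ending up as a list) can be sketched as a pure function. `normalize_triggers` is an illustrative name, not part of the codebase:

```python
def normalize_triggers(version: dict) -> dict:
    """Rename 'trigger' to 'trainedWords' and coerce it to a list,
    as _transform_version does above."""
    v = dict(version)
    if "trigger" in v:
        triggers = v.pop("trigger")
        if isinstance(triggers, list):
            v["trainedWords"] = triggers
        elif triggers is None:
            v["trainedWords"] = []
        else:
            v["trainedWords"] = [triggers]
    # A bare string under the Civitai-style key also gets wrapped
    if isinstance(v.get("trainedWords"), str):
        v["trainedWords"] = [v["trainedWords"]]
    return v
```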
    async def _resolve_version_from_files(self, payload: Dict) -> Optional[Dict]:
        """Fallback to fetch version data when only file metadata is available"""
        data = self._normalize_payload(payload)
        files = data.get("files") or payload.get("files") or []
        if not isinstance(files, list):
            files = [files]
        for file_data in files:
            if not isinstance(file_data, dict):
                continue
            model_id = file_data.get("model_id") or file_data.get("modelId")
            version_id = file_data.get("model_version_id") or file_data.get("modelVersionId")
            if model_id is None or version_id is None:
                continue
            resolved = await self.get_model_version(model_id, version_id)
            if resolved:
                return resolved
        return None
    async def get_model_by_hash(self, model_hash: str) -> Tuple[Optional[Dict], Optional[str]]:
        """Find model by SHA256 hash value using CivArchive API"""
        try:
            payload, error = await self._request_json(f"/sha256/{model_hash.lower()}")
            if error:
                if "not found" in error.lower():
                    return None, "Model not found"
                return None, error

            context, version_data, fallback_files = self._split_context(payload)
            transformed = self._transform_version(context, version_data, fallback_files)
            if transformed:
                return transformed, None

            resolved = await self._resolve_version_from_files(payload)
            if resolved:
                return resolved, None

            logger.error("Error fetching version of CivArchive model by hash %s", model_hash[:10])
            return None, "No version data found"

        except RateLimitError:
            raise
        except Exception as e:
            logger.error(f"Error fetching CivArchive model by hash {model_hash[:10]}: {e}")
            return None, str(e)
    async def get_model_versions(self, model_id: str) -> Optional[Dict]:
        """Get all versions of a model using CivArchive API"""
        try:
            payload, error = await self._request_json(f"/models/{model_id}")
            if error or payload is None:
                if error and "not found" in error.lower():
                    return None
                logger.error(f"Error fetching CivArchive model versions for {model_id}: {error}")
                return None

            data = self._normalize_payload(payload)
            context, version_data, fallback_files = self._split_context(payload)

            versions_meta = data.get("versions") or []
            transformed_versions: List[Dict] = []
            for meta in versions_meta:
                if not isinstance(meta, dict):
                    continue
                version_id = meta.get("id")
                if version_id is None:
                    continue
                target_model_id = meta.get("modelId") or model_id
                version = await self.get_model_version(target_model_id, version_id)
                if version:
                    transformed_versions.append(version)

            # Ensure the primary version is included even if the versions list was empty
            primary_version = self._transform_version(context, version_data, fallback_files)
            if primary_version:
                transformed_versions.insert(0, primary_version)

            ordered_versions: List[Dict] = []
            seen_ids = set()
            for version in transformed_versions:
                version_id = version.get("id")
                if version_id in seen_ids:
                    continue
                seen_ids.add(version_id)
                ordered_versions.append(version)

            return {
                "modelVersions": ordered_versions,
                "type": context.get("type", ""),
                "name": context.get("name", ""),
            }

        except RateLimitError:
            raise
        except Exception as e:
            logger.error(f"Error fetching CivArchive model versions for {model_id}: {e}")
            return None
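The order-preserving deduplication at the end of `get_model_versions` (the primary version is inserted first, so it wins over any duplicate later in the list) is a common idiom worth isolating. `dedupe_versions` is an illustrative standalone name:

```python
def dedupe_versions(versions: list) -> list:
    """Keep only the first occurrence of each version id, preserving order,
    as get_model_versions above does after inserting the primary version."""
    seen, ordered = set(), []
    for version in versions:
        version_id = version.get("id")
        if version_id in seen:
            continue
        seen.add(version_id)
        ordered.append(version)
    return ordered
```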
    async def get_model_version(self, model_id: int = None, version_id: int = None) -> Optional[Dict]:
        """Get specific model version using CivArchive API

        Args:
            model_id: The model ID (required)
            version_id: Optional specific version ID to filter to

        Returns:
            Optional[Dict]: The model version data or None if not found
        """
        if model_id is None:
            return None

        try:
            params = {"modelVersionId": version_id} if version_id is not None else None
            payload, error = await self._request_json(f"/models/{model_id}", params=params)
            if error or payload is None:
                if error and "not found" in error.lower():
                    return None
                logger.error(f"Error fetching CivArchive model version via API {model_id}/{version_id}: {error}")
                return None

            context, version_data, fallback_files = self._split_context(payload)

            if not version_data:
                return await self._resolve_version_from_files(payload)

            if version_id is not None:
                raw_id = version_data.get("id")
                if raw_id != version_id:
                    logger.warning(
                        "Requested version %s doesn't match default version %s for model %s",
                        version_id,
                        raw_id,
                        model_id,
                    )
                    return None
            actual_model_id = version_data.get("modelId")
            context_model_id = context.get("id")
            # CivArchive can respond with data for a different model id while already
            # returning the fully resolved model context. Only follow the redirect when
            # the context itself still points to the original (wrong) model.
            if (
                actual_model_id is not None
                and str(actual_model_id) != str(model_id)
                and (context_model_id is None or str(context_model_id) != str(actual_model_id))
            ):
                return await self.get_model_version(actual_model_id, version_id)

            return self._transform_version(context, version_data, fallback_files)

        except RateLimitError:
            raise
        except Exception as e:
            logger.error(f"Error fetching CivArchive model version via API {model_id}/{version_id}: {e}")
            return None
    async def get_model_version_info(self, version_id: str) -> Tuple[Optional[Dict], Optional[str]]:
        """Fetch model version metadata when only the version ID is known.

        CivArchive has no direct version-lookup endpoint, so this relies on the
        model-redirect workaround handled in get_model_version: a placeholder
        model ID of 1 is passed, and the redirect logic resolves the real model.

        Args:
            version_id: The model version ID

        Returns:
            Tuple[Optional[Dict], Optional[str]]: (version_data, error_message)
        """
        version = await self.get_model_version(1, version_id)
        if version is None:
            return None, "Model not found"
        return version, None
    async def get_model_by_url(self, url) -> Optional[Dict]:
        """Get specific model version by parsing the CivArchive HTML page (legacy method)

        This is the original HTML-scraping implementation, kept for reference and
        for entries that are not yet available through the API. The primary
        get_model_version() now uses the API instead.
        """

        try:
            # Construct CivArchive URL
            url = f"https://civarchive.com/{url}"
            downloader = await get_downloader()
            session = await downloader.session
            async with session.get(url) as response:
                if response.status != 200:
                    return None

                html_content = await response.text()

                # Parse HTML to extract JSON data
                soup_parser = _require_beautifulsoup()
                soup = soup_parser(html_content, 'html.parser')
                script_tag = soup.find('script', {'id': '__NEXT_DATA__', 'type': 'application/json'})

                if not script_tag:
                    return None

                # Parse JSON content
                json_data = json.loads(script_tag.string)
                model_data = json_data.get('props', {}).get('pageProps', {}).get('model')

                if not model_data or 'version' not in model_data:
                    return None

                # Extract version data as base
                version = model_data['version'].copy()

                # Restructure stats
                if 'downloadCount' in version and 'ratingCount' in version and 'rating' in version:
                    version['stats'] = {
                        'downloadCount': version.pop('downloadCount'),
                        'ratingCount': version.pop('ratingCount'),
                        'rating': version.pop('rating')
                    }

                # Rename trigger to trainedWords
                if 'trigger' in version:
                    version['trainedWords'] = version.pop('trigger')

                # Transform files data to expected format
                if 'files' in version:
                    transformed_files = []
                    for file_data in version['files']:
                        # Find first available mirror (deletedAt is null)
                        available_mirror = None
                        for mirror in file_data.get('mirrors', []):
                            if mirror.get('deletedAt') is None:
                                available_mirror = mirror
                                break

                        # Create transformed file entry
                        transformed_file = {
                            'id': file_data.get('id'),
                            'sizeKB': file_data.get('sizeKB'),
                            'name': available_mirror.get('filename', file_data.get('name')) if available_mirror else file_data.get('name'),
                            'type': file_data.get('type'),
                            'downloadUrl': available_mirror.get('url') if available_mirror else None,
                            'primary': file_data.get('is_primary', False),
                            'mirrors': file_data.get('mirrors', [])
                        }

                        # Transform hash format
                        if 'sha256' in file_data:
                            transformed_file['hashes'] = {
                                'SHA256': file_data['sha256'].upper()
                            }

                        transformed_files.append(transformed_file)

                    version['files'] = transformed_files

                # Add model information
                version['model'] = {
                    'name': model_data.get('name'),
                    'type': model_data.get('type'),
                    'nsfw': model_data.get('is_nsfw', False),
                    'description': model_data.get('description'),
                    'tags': model_data.get('tags', [])
                }

                version['creator'] = {
                    'username': model_data.get('username'),
                    'image': ''
                }

                # Add source identifier
                version['source'] = 'civarchive'
                version['is_deleted'] = json_data.get('query', {}).get('is_deleted', False)

                return version

        except RateLimitError:
            raise
        except Exception as e:
            logger.error(f"Error fetching CivArchive model version (scraping) {url}: {e}")
            return None
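The scraping path pulls the page's embedded JSON out of the `<script id="__NEXT_DATA__">` tag with BeautifulSoup. The same extraction can be done with only the standard library, which is a useful fallback when bs4 is unavailable; this is a sketch, not code from the repository:

```python
import json
from html.parser import HTMLParser


class NextDataExtractor(HTMLParser):
    """Collect the JSON payload of <script id="__NEXT_DATA__"> without bs4."""

    def __init__(self):
        super().__init__()
        self._in_target = False
        self.payload = None  # parsed JSON dict once found

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("id") == "__NEXT_DATA__":
            self._in_target = True

    def handle_data(self, data):
        if self._in_target and self.payload is None:
            self.payload = json.loads(data)

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_target = False
```

Feed the page HTML to an instance and read `.payload`; the client's version lives under `props.pageProps.model` in the parsed dict, per the method above.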
@@ -1,10 +1,11 @@
-import os
-import asyncio
+import copy
 import logging
+import asyncio
+import os
 from typing import Optional, Dict, Tuple, List
 from .model_metadata_provider import CivitaiModelMetadataProvider, ModelMetadataProviderManager
 from .downloader import get_downloader
 from .errors import RateLimitError
 
 logger = logging.getLogger(__name__)
@@ -33,6 +34,29 @@ class CivitaiClient:
 
         self.base_url = "https://civitai.com/api/v1"
 
+    async def _make_request(
+        self,
+        method: str,
+        url: str,
+        *,
+        use_auth: bool = False,
+        **kwargs,
+    ) -> Tuple[bool, Dict | str]:
+        """Wrapper around downloader.make_request that surfaces rate limits."""
+
+        downloader = await get_downloader()
+        success, result = await downloader.make_request(
+            method,
+            url,
+            use_auth=use_auth,
+            **kwargs,
+        )
+        if not success and isinstance(result, RateLimitError):
+            if result.provider is None:
+                result.provider = "civitai_api"
+            raise result
+        return success, result
+
     @staticmethod
     def _remove_comfy_metadata(model_version: Optional[Dict]) -> None:
         """Remove Comfy-specific metadata from model version images."""
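Both clients now share the same pattern: a thin wrapper that tags rate-limit failures with a provider name and re-raises them, while passing every other result through unchanged. A minimal standalone sketch (the `RateLimitError` class here is a stand-in for the project's `py/services/errors.py` version, and `surface_rate_limit` is a hypothetical name):

```python
class RateLimitError(Exception):
    """Stand-in for the project's RateLimitError, carrying a provider tag."""

    def __init__(self, provider=None):
        super().__init__("rate limited")
        self.provider = provider


def surface_rate_limit(success, result, default_provider):
    """Raise rate-limit results (tagging the provider if unset);
    pass all other (success, result) pairs through untouched."""
    if not success and isinstance(result, RateLimitError):
        if result.provider is None:
            result.provider = default_provider
        raise result
    return success, result
```

Raising instead of returning lets the top-level API handlers catch one exception type regardless of which metadata provider hit its limit.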
@@ -79,8 +103,7 @@ class CivitaiClient:
 
     async def get_model_by_hash(self, model_hash: str) -> Tuple[Optional[Dict], Optional[str]]:
         try:
-            downloader = await get_downloader()
-            success, result = await downloader.make_request(
+            success, result = await self._make_request(
                 'GET',
                 f"{self.base_url}/model-versions/by-hash/{model_hash}",
                 use_auth=True
@@ -90,7 +113,7 @@ class CivitaiClient:
             model_id = result.get('modelId')
             if model_id:
                 # Fetch additional model metadata
-                success_model, data = await downloader.make_request(
+                success_model, data = await self._make_request(
                     'GET',
                     f"{self.base_url}/models/{model_id}",
                     use_auth=True
@@ -113,6 +136,8 @@ class CivitaiClient:
             # Other error cases
             logger.error(f"Failed to fetch model info for {model_hash[:10]}: {result}")
             return None, str(result)
+        except RateLimitError:
+            raise
         except Exception as e:
             logger.error(f"API Error: {str(e)}")
             return None, str(e)
@@ -138,8 +163,7 @@ class CivitaiClient:
     async def get_model_versions(self, model_id: str) -> List[Dict]:
         """Get all versions of a model with local availability info"""
         try:
-            downloader = await get_downloader()
-            success, result = await downloader.make_request(
+            success, result = await self._make_request(
                 'GET',
                 f"{self.base_url}/models/{model_id}",
                 use_auth=True
@@ -152,146 +176,167 @@ class CivitaiClient:
                 'name': result.get('name', '')
             }
             return None
+        except RateLimitError:
+            raise
         except Exception as e:
             logger.error(f"Error fetching model versions: {e}")
             return None
 
     async def get_model_version(self, model_id: int = None, version_id: int = None) -> Optional[Dict]:
-        """Get specific model version with additional metadata
-
-        Args:
-            model_id: The Civitai model ID (optional if version_id is provided)
-            version_id: Optional specific version ID to retrieve
-
-        Returns:
-            Optional[Dict]: The model version data with additional fields or None if not found
-        """
+        """Get specific model version with additional metadata."""
         try:
-            downloader = await get_downloader()
-
-            # Case 1: Only version_id is provided
             if model_id is None and version_id is not None:
-                # First get the version info to extract model_id
-                success, version = await downloader.make_request(
-                    'GET',
-                    f"{self.base_url}/model-versions/{version_id}",
-                    use_auth=True
-                )
-                if not success:
-                    return None
-
-                model_id = version.get('modelId')
-                if not model_id:
-                    logger.error(f"No modelId found in version {version_id}")
-                    return None
-
-                # Now get the model data for additional metadata
-                success, model_data = await downloader.make_request(
-                    'GET',
-                    f"{self.base_url}/models/{model_id}",
-                    use_auth=True
-                )
-                if success:
-                    # Enrich version with model data
-                    version['model']['description'] = model_data.get("description")
-                    version['model']['tags'] = model_data.get("tags", [])
-                    version['creator'] = model_data.get("creator")
-
-                self._remove_comfy_metadata(version)
-                return version
-
-            # Case 2: model_id is provided (with or without version_id)
-            elif model_id is not None:
-                # Step 1: Get model data to find version_id if not provided and get additional metadata
-                success, data = await downloader.make_request(
-                    'GET',
-                    f"{self.base_url}/models/{model_id}",
-                    use_auth=True
-                )
-                if not success:
-                    return None
+                return await self._get_version_by_id_only(version_id)
 
-                model_versions = data.get('modelVersions', [])
-                if not model_versions:
-                    logger.warning(f"No model versions found for model {model_id}")
-                    return None
+            if model_id is not None:
+                return await self._get_version_with_model_id(model_id, version_id)
 
-                # Step 2: Determine the target version entry to use
-                target_version = None
-                if version_id is not None:
-                    target_version = next(
-                        (item for item in model_versions if item.get('id') == version_id),
-                        None
-                    )
-                    if target_version is None:
-                        logger.warning(
-                            f"Version {version_id} not found for model {model_id}, defaulting to first version"
-                        )
-                if target_version is None:
-                    target_version = model_versions[0]
-
-                target_version_id = target_version.get('id')
-
-                # Step 3: Get detailed version info using the SHA256 hash
-                model_hash = None
-                for file_info in target_version.get('files', []):
-                    if file_info.get('type') == 'Model' and file_info.get('primary'):
-                        model_hash = file_info.get('hashes', {}).get('SHA256')
-                        if model_hash:
-                            break
-
-                version = None
-                if model_hash:
-                    success, version = await downloader.make_request(
-                        'GET',
-                        f"{self.base_url}/model-versions/by-hash/{model_hash}",
-                        use_auth=True
-                    )
-                    if not success:
-                        logger.warning(
-                            f"Failed to fetch version by hash for model {model_id} version {target_version_id}: {version}"
-                        )
-                        version = None
-                else:
-                    logger.warning(
-                        f"No primary model hash found for model {model_id} version {target_version_id}"
-                    )
-
-                if version is None:
-                    version = copy.deepcopy(target_version)
-                    version.pop('index', None)
-                    version['modelId'] = model_id
-                    version['model'] = {
-                        'name': data.get('name'),
-                        'type': data.get('type'),
-                        'nsfw': data.get('nsfw'),
-                        'poi': data.get('poi')
-                    }
-
-                # Step 4: Enrich version_info with model data
-                # Add description and tags from model data
-                model_info = version.get('model')
-                if not isinstance(model_info, dict):
-                    model_info = {}
-                    version['model'] = model_info
-                model_info['description'] = data.get("description")
-                model_info['tags'] = data.get("tags", [])
-
-                # Add creator from model data
-                version['creator'] = data.get("creator")
-
-                self._remove_comfy_metadata(version)
-                return version
-
-            # Case 3: Neither model_id nor version_id provided
-            else:
-                logger.error("Either model_id or version_id must be provided")
-                return None
+            logger.error("Either model_id or version_id must be provided")
+            return None
 
+        except RateLimitError:
+            raise
         except Exception as e:
             logger.error(f"Error fetching model version: {e}")
             return None
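The refactor splits the version-selection fallback out of the monolithic method; the rule (use the requested version if present, otherwise fall back to the first listed version) is easy to verify standalone. `select_target_version` below is an illustrative pure-function sketch of `_select_target_version`:

```python
def select_target_version(model_versions: list, version_id=None):
    """Return the entry matching version_id, else the first version,
    else None for an empty list, as _select_target_version does."""
    if not model_versions:
        return None
    if version_id is not None:
        target = next(
            (item for item in model_versions if item.get('id') == version_id),
            None,
        )
        # Mirror the method's behavior: warn-and-default rather than fail
        return target if target is not None else model_versions[0]
    return model_versions[0]
```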
async def _get_version_by_id_only(self, version_id: int) -> Optional[Dict]:
|
||||
version = await self._fetch_version_by_id(version_id)
|
||||
if version is None:
|
||||
return None
|
||||
|
||||
model_id = version.get('modelId')
|
||||
if not model_id:
|
||||
logger.error(f"No modelId found in version {version_id}")
|
||||
return None
|
||||
|
||||
model_data = await self._fetch_model_data(model_id)
|
||||
if model_data:
|
||||
self._enrich_version_with_model_data(version, model_data)
|
||||
|
||||
self._remove_comfy_metadata(version)
|
||||
return version
|
||||
|
||||
async def _get_version_with_model_id(self, model_id: int, version_id: Optional[int]) -> Optional[Dict]:
|
||||
model_data = await self._fetch_model_data(model_id)
|
||||
if not model_data:
|
||||
return None
|
||||
|
||||
target_version = self._select_target_version(model_data, model_id, version_id)
|
||||
if target_version is None:
|
||||
return None
|
||||
|
||||
target_version_id = target_version.get('id')
|
||||
version = await self._fetch_version_by_id(target_version_id) if target_version_id else None
|
||||
|
||||
if version is None:
|
||||
model_hash = self._extract_primary_model_hash(target_version)
|
||||
if model_hash:
|
||||
version = await self._fetch_version_by_hash(model_hash)
|
||||
else:
|
||||
logger.warning(
|
||||
f"No primary model hash found for model {model_id} version {target_version_id}"
|
||||
)
|
||||
|
||||
if version is None:
|
||||
version = self._build_version_from_model_data(target_version, model_id, model_data)
|
||||
|
||||
self._enrich_version_with_model_data(version, model_data)
|
||||
self._remove_comfy_metadata(version)
|
||||
return version
|
||||
|
||||
async def _fetch_model_data(self, model_id: int) -> Optional[Dict]:
|
||||
success, data = await self._make_request(
|
||||
'GET',
|
||||
f"{self.base_url}/models/{model_id}",
|
||||
use_auth=True
|
||||
)
|
||||
if success:
|
||||
return data
|
||||
logger.warning(f"Failed to fetch model data for model {model_id}")
|
||||
return None
|
||||
|
||||
async def _fetch_version_by_id(self, version_id: Optional[int]) -> Optional[Dict]:
|
||||
if version_id is None:
|
||||
return None
|
||||
|
||||
success, version = await self._make_request(
|
||||
'GET',
|
||||
f"{self.base_url}/model-versions/{version_id}",
|
||||
use_auth=True
|
||||
)
|
||||
if success:
|
||||
return version
|
||||
|
||||
logger.warning(f"Failed to fetch version by id {version_id}")
|
||||
return None
|
||||
|
||||
async def _fetch_version_by_hash(self, model_hash: Optional[str]) -> Optional[Dict]:
|
||||
if not model_hash:
|
||||
return None
|
||||
|
||||
success, version = await self._make_request(
|
||||
'GET',
|
||||
f"{self.base_url}/model-versions/by-hash/{model_hash}",
|
||||
use_auth=True
|
||||
)
|
||||
if success:
|
||||
return version
|
||||
|
||||
logger.warning(f"Failed to fetch version by hash {model_hash}")
|
||||
return None
|
||||
|
||||
    def _select_target_version(self, model_data: Dict, model_id: int, version_id: Optional[int]) -> Optional[Dict]:
        model_versions = model_data.get('modelVersions', [])
        if not model_versions:
            logger.warning(f"No model versions found for model {model_id}")
            return None

        if version_id is not None:
            target_version = next(
                (item for item in model_versions if item.get('id') == version_id),
                None
            )
            if target_version is None:
                logger.warning(
                    f"Version {version_id} not found for model {model_id}, defaulting to first version"
                )
                return model_versions[0]
            return target_version

        return model_versions[0]

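The selection fallback above (explicit id if found, otherwise the first listed version) can be sketched in isolation; `pick_version` is a hypothetical stand-in for the method, not part of the repository:

```python
from typing import Optional

def pick_version(versions: list, version_id: Optional[int] = None) -> Optional[dict]:
    """Return the requested version entry, falling back to the first one."""
    if not versions:
        return None
    if version_id is not None:
        # Look for an exact id match; fall through to the default on a miss.
        match = next((v for v in versions if v.get("id") == version_id), None)
        if match is not None:
            return match
    return versions[0]

versions = [{"id": 10, "name": "v1"}, {"id": 20, "name": "v2"}]
```

The fallback mirrors the Civitai convention that `modelVersions[0]` is the most recent published version.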
    def _extract_primary_model_hash(self, version_entry: Dict) -> Optional[str]:
        for file_info in version_entry.get('files', []):
            if file_info.get('type') == 'Model' and file_info.get('primary'):
                hashes = file_info.get('hashes', {})
                model_hash = hashes.get('SHA256')
                if model_hash:
                    return model_hash
        return None

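The hash lookup above scans a Civitai-style `files` list for the primary model file's SHA256. A self-contained sketch (the helper name and sample data are illustrative, not from the repository):

```python
from typing import Optional

def primary_sha256(files: list) -> Optional[str]:
    # Only 'Model'-type files flagged as primary carry the hash used for lookup;
    # training data, configs, and VAEs are skipped.
    for f in files:
        if f.get("type") == "Model" and f.get("primary"):
            h = f.get("hashes", {}).get("SHA256")
            if h:
                return h
    return None

files = [
    {"type": "Training Data", "primary": False},
    {"type": "Model", "primary": True, "hashes": {"SHA256": "ABC123"}},
]
```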
    def _build_version_from_model_data(self, version_entry: Dict, model_id: int, model_data: Dict) -> Dict:
        version = copy.deepcopy(version_entry)
        version.pop('index', None)
        version['modelId'] = model_id
        version['model'] = {
            'name': model_data.get('name'),
            'type': model_data.get('type'),
            'nsfw': model_data.get('nsfw'),
            'poi': model_data.get('poi')
        }
        return version

    def _enrich_version_with_model_data(self, version: Dict, model_data: Dict) -> None:
        model_info = version.get('model')
        if not isinstance(model_info, dict):
            model_info = {}
            version['model'] = model_info

        model_info['description'] = model_data.get("description")
        model_info['tags'] = model_data.get("tags", [])
        version['creator'] = model_data.get("creator")

    async def get_model_version_info(self, version_id: str) -> Tuple[Optional[Dict], Optional[str]]:
        """Fetch model version metadata from Civitai

@@ -304,11 +349,10 @@ class CivitaiClient:
        - An error message if there was an error, or None on success
        """
        try:
-            downloader = await get_downloader()
            url = f"{self.base_url}/model-versions/{version_id}"

            logger.debug(f"Resolving DNS for model version info: {url}")
-            success, result = await downloader.make_request(
+            success, result = await self._make_request(
                'GET',
                url,
                use_auth=True
@@ -328,6 +372,8 @@ class CivitaiClient:
            # Other error cases
            logger.error(f"Failed to fetch model info for {version_id}: {result}")
            return None, str(result)
+        except RateLimitError:
+            raise
        except Exception as e:
            error_msg = f"Error fetching model version info: {e}"
            logger.error(error_msg)
@@ -335,7 +381,7 @@ class CivitaiClient:

    async def get_image_info(self, image_id: str) -> Optional[Dict]:
        """Fetch image information from Civitai API

        Args:
            image_id: The Civitai image ID

@@ -343,11 +389,10 @@ class CivitaiClient:
            Optional[Dict]: The image data or None if not found
        """
        try:
-            downloader = await get_downloader()
            url = f"{self.base_url}/images?imageId={image_id}&nsfw=X"

            logger.debug(f"Fetching image info for ID: {image_id}")
-            success, result = await downloader.make_request(
+            success, result = await self._make_request(
                'GET',
                url,
                use_auth=True
@@ -362,7 +407,44 @@ class CivitaiClient:

            logger.error(f"Failed to fetch image info for ID: {image_id}: {result}")
            return None
+        except RateLimitError:
+            raise
        except Exception as e:
            error_msg = f"Error fetching image info: {e}"
            logger.error(error_msg)
            return None

    async def get_user_models(self, username: str) -> Optional[List[Dict]]:
        """Fetch all models for a specific Civitai user."""
        if not username:
            return None

        try:
            url = f"{self.base_url}/models?username={username}"
            success, result = await self._make_request(
                'GET',
                url,
                use_auth=True
            )

            if not success:
                logger.error("Failed to fetch models for %s: %s", username, result)
                return None

            items = result.get("items") if isinstance(result, dict) else None
            if not isinstance(items, list):
                return []

            for model in items:
                versions = model.get("modelVersions")
                if not isinstance(versions, list):
                    continue
                for version in versions:
                    self._remove_comfy_metadata(version)

            return items
        except RateLimitError:
            raise
        except Exception as exc:  # pragma: no cover - defensive logging
            logger.error("Error fetching models for %s: %s", username, exc)
            return None

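The defensive unwrapping of the paginated response above can be isolated into a tiny helper; `extract_items` is a hypothetical name for illustration:

```python
def extract_items(result) -> list:
    """Defensively unwrap the paginated 'items' list from an API response.

    Returns an empty list whenever the response shape is unexpected,
    so callers never have to branch on malformed payloads.
    """
    items = result.get("items") if isinstance(result, dict) else None
    return items if isinstance(items, list) else []
```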
@@ -5,6 +5,8 @@ from __future__ import annotations

import logging
from typing import Any, Awaitable, Callable, Dict, Optional

+from .downloader import DownloadProgress
+
logger = logging.getLogger(__name__)

@@ -29,14 +31,40 @@ class DownloadCoordinator:
        download_id = payload.get("download_id") or self._ws_manager.generate_download_id()
        payload.setdefault("download_id", download_id)

-        async def progress_callback(progress: Any) -> None:
+        async def progress_callback(progress: Any, snapshot: Optional[DownloadProgress] = None) -> None:
+            percent = 0.0
+            metrics: Optional[DownloadProgress] = None
+
+            if isinstance(progress, DownloadProgress):
+                metrics = progress
+                percent = progress.percent_complete
+            elif isinstance(snapshot, DownloadProgress):
+                metrics = snapshot
+                percent = snapshot.percent_complete
+            else:
+                try:
+                    percent = float(progress)
+                except (TypeError, ValueError):
+                    percent = 0.0
+
+            payload: Dict[str, Any] = {
+                "status": "progress",
+                "progress": round(percent),
+                "download_id": download_id,
+            }
+
+            if metrics is not None:
+                payload.update(
+                    {
+                        "bytes_downloaded": metrics.bytes_downloaded,
+                        "total_bytes": metrics.total_bytes,
+                        "bytes_per_second": metrics.bytes_per_second,
+                    }
+                )
+
            await self._ws_manager.broadcast_download_progress(
                download_id,
-                {
-                    "status": "progress",
-                    "progress": progress,
-                    "download_id": download_id,
-                },
+                payload,
            )

        model_id = self._parse_optional_int(payload.get("model_id"), "model_id")
@@ -81,6 +109,56 @@ class DownloadCoordinator:

        return result

    async def pause_download(self, download_id: str) -> Dict[str, Any]:
        """Pause an active download and notify listeners."""

        download_manager = await self._download_manager_factory()
        result = await download_manager.pause_download(download_id)

        if result.get("success"):
            cached_progress = self._ws_manager.get_download_progress(download_id) or {}
            payload: Dict[str, Any] = {
                "status": "paused",
                "progress": cached_progress.get("progress", 0),
                "download_id": download_id,
                "message": "Download paused by user",
            }

            for field in ("bytes_downloaded", "total_bytes", "bytes_per_second"):
                if field in cached_progress:
                    payload[field] = cached_progress[field]

            payload["bytes_per_second"] = 0.0

            await self._ws_manager.broadcast_download_progress(download_id, payload)

        return result

    async def resume_download(self, download_id: str) -> Dict[str, Any]:
        """Resume a paused download and notify listeners."""

        download_manager = await self._download_manager_factory()
        result = await download_manager.resume_download(download_id)

        if result.get("success"):
            cached_progress = self._ws_manager.get_download_progress(download_id) or {}
            payload: Dict[str, Any] = {
                "status": "downloading",
                "progress": cached_progress.get("progress", 0),
                "download_id": download_id,
                "message": "Download resumed by user",
            }

            for field in ("bytes_downloaded", "total_bytes"):
                if field in cached_progress:
                    payload[field] = cached_progress[field]

            payload["bytes_per_second"] = cached_progress.get("bytes_per_second", 0.0)

            await self._ws_manager.broadcast_download_progress(download_id, payload)

        return result

    async def list_active_downloads(self) -> Dict[str, Any]:
        """Return the active download map from the underlying manager."""

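The coordinator's progress callback accepts either a rich metrics object or a bare number and always emits the same payload shape. A self-contained sketch of that normalization, with a minimal `Progress` dataclass standing in for the real `DownloadProgress`:

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional

@dataclass
class Progress:  # stand-in for the real DownloadProgress dataclass
    percent_complete: float
    bytes_downloaded: int
    total_bytes: Optional[int]
    bytes_per_second: float

def build_payload(download_id: str, progress: Any, snapshot: Optional[Progress] = None) -> Dict[str, Any]:
    """Build a websocket-style progress payload from either input form."""
    # Prefer a metrics object passed positionally, then the keyword snapshot.
    metrics = progress if isinstance(progress, Progress) else snapshot
    if isinstance(metrics, Progress):
        percent = metrics.percent_complete
    else:
        metrics = None
        try:
            percent = float(progress)  # legacy callers pass a bare number
        except (TypeError, ValueError):
            percent = 0.0
    payload: Dict[str, Any] = {
        "status": "progress",
        "progress": round(percent),
        "download_id": download_id,
    }
    if metrics is not None:
        payload.update({
            "bytes_downloaded": metrics.bytes_downloaded,
            "total_bytes": metrics.total_bytes,
            "bytes_per_second": metrics.bytes_per_second,
        })
    return payload
```

Keeping the numeric fallback means older callbacks that report a plain percentage keep working after the richer metrics were introduced.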
@@ -1,17 +1,20 @@
import logging
import os
import asyncio
import inspect
from collections import OrderedDict
import uuid
-from typing import Dict, List
+from typing import Dict, List, Optional, Tuple
+from urllib.parse import urlparse
from ..utils.models import LoraMetadata, CheckpointMetadata, EmbeddingMetadata
-from ..utils.constants import CARD_PREVIEW_WIDTH, VALID_LORA_TYPES, CIVITAI_MODEL_TAGS
+from ..utils.constants import CARD_PREVIEW_WIDTH, VALID_LORA_TYPES
+from ..utils.civitai_utils import rewrite_preview_url
from ..utils.exif_utils import ExifUtils
from ..utils.metadata_manager import MetadataManager
from .service_registry import ServiceRegistry
-from .settings_manager import settings
+from .settings_manager import get_settings_manager
from .metadata_service import get_default_metadata_provider
-from .downloader import get_downloader
+from .downloader import get_downloader, DownloadProgress

# Download to temporary file first
import tempfile
@@ -40,6 +43,7 @@ class DownloadManager:
        self._active_downloads = OrderedDict()  # download_id -> download_info
        self._download_semaphore = asyncio.Semaphore(5)  # Limit concurrent downloads
        self._download_tasks = {}  # download_id -> asyncio.Task
+        self._pause_events: Dict[str, asyncio.Event] = {}

    async def _get_lora_scanner(self):
        """Get the lora scanner from registry"""
@@ -80,13 +84,20 @@ class DownloadManager:
            'model_id': model_id,
            'model_version_id': model_version_id,
            'progress': 0,
-            'status': 'queued'
+            'status': 'queued',
+            'bytes_downloaded': 0,
+            'total_bytes': None,
+            'bytes_per_second': 0.0,
        }

+        pause_event = asyncio.Event()
+        pause_event.set()
+        self._pause_events[task_id] = pause_event
+
        # Create tracking task
        download_task = asyncio.create_task(
            self._download_with_semaphore(
                task_id, model_id, model_version_id, save_dir,
                relative_path, progress_callback, use_default_paths, source
            )
        )
@@ -105,9 +116,10 @@ class DownloadManager:
        # Clean up task reference
        if task_id in self._download_tasks:
            del self._download_tasks[task_id]
+        self._pause_events.pop(task_id, None)

    async def _download_with_semaphore(self, task_id: str, model_id: int, model_version_id: int,
                                       save_dir: str, relative_path: str,
                                       progress_callback=None, use_default_paths: bool = False,
                                       source: str = None):
        """Execute download with semaphore to limit concurrency"""
@@ -117,15 +129,30 @@ class DownloadManager:

        # Wrap progress callback to track progress in active_downloads
        original_callback = progress_callback
-        async def tracking_callback(progress):
+        async def tracking_callback(progress, metrics=None):
+            progress_value, snapshot = self._normalize_progress(progress, metrics)
+
            if task_id in self._active_downloads:
-                self._active_downloads[task_id]['progress'] = progress
+                info = self._active_downloads[task_id]
+                info['progress'] = round(progress_value)
+                if snapshot is not None:
+                    info['bytes_downloaded'] = snapshot.bytes_downloaded
+                    info['total_bytes'] = snapshot.total_bytes
+                    info['bytes_per_second'] = snapshot.bytes_per_second

            if original_callback:
-                await original_callback(progress)
+                await self._dispatch_progress(original_callback, snapshot, progress_value)

        # Acquire semaphore to limit concurrent downloads
        try:
            async with self._download_semaphore:
+                pause_event = self._pause_events.get(task_id)
+                if pause_event is not None and not pause_event.is_set():
+                    if task_id in self._active_downloads:
+                        self._active_downloads[task_id]['status'] = 'paused'
+                        self._active_downloads[task_id]['bytes_per_second'] = 0.0
+                    await pause_event.wait()
+
                # Update status to downloading
                if task_id in self._active_downloads:
                    self._active_downloads[task_id]['status'] = 'downloading'
@@ -147,12 +174,14 @@ class DownloadManager:
                self._active_downloads[task_id]['status'] = 'completed' if result['success'] else 'failed'
                if not result['success']:
                    self._active_downloads[task_id]['error'] = result.get('error', 'Unknown error')
+                self._active_downloads[task_id]['bytes_per_second'] = 0.0

                return result
        except asyncio.CancelledError:
            # Handle cancellation
            if task_id in self._active_downloads:
                self._active_downloads[task_id]['status'] = 'cancelled'
+                self._active_downloads[task_id]['bytes_per_second'] = 0.0
            logger.info(f"Download cancelled for task {task_id}")
            raise
        except Exception as e:
@@ -161,6 +190,7 @@ class DownloadManager:
            if task_id in self._active_downloads:
                self._active_downloads[task_id]['status'] = 'failed'
                self._active_downloads[task_id]['error'] = str(e)
+                self._active_downloads[task_id]['bytes_per_second'] = 0.0
            return {'success': False, 'error': str(e)}
        finally:
            # Schedule cleanup of download record after delay
@@ -172,9 +202,17 @@ class DownloadManager:
            if task_id in self._active_downloads:
                del self._active_downloads[task_id]

-    async def _execute_original_download(self, model_id, model_version_id, save_dir,
-                                         relative_path, progress_callback, use_default_paths,
-                                         download_id=None, source=None):
+    async def _execute_original_download(
+        self,
+        model_id,
+        model_version_id,
+        save_dir,
+        relative_path,
+        progress_callback,
+        use_default_paths,
+        download_id=None,
+        source=None,
+    ):
        """Wrapper for original download_from_civitai implementation"""
        try:
            # Check if model version already exists in library
@@ -195,13 +233,8 @@ class DownloadManager:
            # Check embedding scanner
            if await embedding_scanner.check_model_version_exists(model_version_id):
                return {'success': False, 'error': 'Model version already exists in embedding library'}

-            # Get metadata provider based on source parameter
-            if source == 'civarchive':
-                from .metadata_service import get_metadata_provider
-                metadata_provider = await get_metadata_provider('civarchive')
-            else:
-                metadata_provider = await get_default_metadata_provider()
+            metadata_provider = await get_default_metadata_provider()

            # Get version info based on the provided identifier
            version_info = await metadata_provider.get_model_version(model_id, model_version_id)
@@ -241,23 +274,24 @@ class DownloadManager:

            # Handle use_default_paths
            if use_default_paths:
+                settings_manager = get_settings_manager()
                # Set save_dir based on model type
                if model_type == 'checkpoint':
-                    default_path = settings.get('default_checkpoint_root')
+                    default_path = settings_manager.get('default_checkpoint_root')
                    if not default_path:
                        return {'success': False, 'error': 'Default checkpoint root path not set in settings'}
                    save_dir = default_path
                elif model_type == 'lora':
-                    default_path = settings.get('default_lora_root')
+                    default_path = settings_manager.get('default_lora_root')
                    if not default_path:
                        return {'success': False, 'error': 'Default lora root path not set in settings'}
                    save_dir = default_path
                elif model_type == 'embedding':
-                    default_path = settings.get('default_embedding_root')
+                    default_path = settings_manager.get('default_embedding_root')
                    if not default_path:
                        return {'success': False, 'error': 'Default embedding root path not set in settings'}
                    save_dir = default_path

                # Calculate relative path using template
                relative_path = self._calculate_relative_path(version_info, model_type)

@@ -291,7 +325,7 @@ class DownloadManager:
                await progress_callback(0)

            # 2. Get file information
-            file_info = next((f for f in version_info.get('files', []) if f.get('primary')), None)
+            file_info = next((f for f in version_info.get('files', []) if f.get('primary') and f.get('type') == 'Model'), None)
            if not file_info:
                return {'success': False, 'error': 'No primary file found in metadata'}
            mirrors = file_info.get('mirrors') or []
@@ -306,7 +340,7 @@ class DownloadManager:
                    download_urls.append(download_url)

            if not download_urls:
-                return {'success': False, 'error': 'No download URL found for primary file'}
+                return {'success': False, 'error': 'No mirror URL found'}

            # 3. Prepare download
            file_name = file_info['name']
@@ -332,7 +366,7 @@ class DownloadManager:
                relative_path=relative_path,
                progress_callback=progress_callback,
                model_type=model_type,
-                download_id=download_id
+                download_id=download_id,
            )

            # If early_access_msg exists and download failed, replace error message
@@ -360,7 +394,8 @@ class DownloadManager:
            Relative path string
        """
        # Get path template from settings for specific model type
-        path_template = settings.get_download_path_template(model_type)
+        settings_manager = get_settings_manager()
+        path_template = settings_manager.get_download_path_template(model_type)

        # If template is empty, return empty path (flat structure)
        if not path_template:
@@ -377,23 +412,14 @@ class DownloadManager:
            author = 'Anonymous'

        # Apply mapping if available
-        base_model_mappings = settings.get('base_model_path_mappings', {})
+        base_model_mappings = settings_manager.get('base_model_path_mappings', {})
        mapped_base_model = base_model_mappings.get(base_model, base_model)

        # Get model tags
        model_tags = version_info.get('model', {}).get('tags', [])

-        # Find the first Civitai model tag that exists in model_tags
-        first_tag = ''
-        for civitai_tag in CIVITAI_MODEL_TAGS:
-            if civitai_tag in model_tags:
-                first_tag = civitai_tag
-                break
-
-        # If no Civitai model tag found, fallback to first tag
-        if not first_tag and model_tags:
-            first_tag = model_tags[0]
-
+        first_tag = settings_manager.resolve_priority_tag_for_model(model_tags, model_type)

        # Format the template with available data
        formatted_path = path_template
        formatted_path = formatted_path.replace('{base_model}', mapped_base_model)
@@ -405,10 +431,17 @@ class DownloadManager:

        return formatted_path

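The template expansion above is plain placeholder substitution. A minimal sketch; only the `{base_model}` token appears in the visible diff, so the `{first_tag}` and `{author}` tokens here are assumptions, and `format_path` is a hypothetical helper:

```python
def format_path(template: str, *, base_model: str, first_tag: str, author: str) -> str:
    # Straight string replacement of the template tokens; anything not a
    # recognized token passes through unchanged.
    return (template
            .replace("{base_model}", base_model)
            .replace("{first_tag}", first_tag)
            .replace("{author}", author))
```

An empty template yields a flat layout, which is why the caller short-circuits before ever reaching this substitution.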
-    async def _execute_download(self, download_urls: List[str], save_dir: str,
-                                metadata, version_info: Dict,
-                                relative_path: str, progress_callback=None,
-                                model_type: str = "lora", download_id: str = None) -> Dict:
+    async def _execute_download(
+        self,
+        download_urls: List[str],
+        save_dir: str,
+        metadata,
+        version_info: Dict,
+        relative_path: str,
+        progress_callback=None,
+        model_type: str = "lora",
+        download_id: str = None,
+    ) -> Dict:
        """Execute the actual download process including preview images and model files"""
        try:
            # Extract original filename details
@@ -439,6 +472,8 @@ class DownloadManager:

            part_path = save_path + '.part'
            metadata_path = os.path.splitext(save_path)[0] + '.metadata.json'
+
+            pause_event = self._pause_events.get(download_id) if download_id else None

            # Store file paths in active_downloads for potential cleanup
            if download_id and download_id in self._active_downloads:
@@ -448,70 +483,103 @@ class DownloadManager:
|
||||
# Download preview image if available
|
||||
images = version_info.get('images', [])
|
||||
if images:
|
||||
# Report preview download progress
|
||||
if progress_callback:
|
||||
await progress_callback(1) # 1% progress for starting preview download
|
||||
|
||||
# Check if it's a video or an image
|
||||
is_video = images[0].get('type') == 'video'
|
||||
|
||||
if (is_video):
|
||||
# For videos, use .mp4 extension
|
||||
preview_ext = '.mp4'
|
||||
preview_path = os.path.splitext(save_path)[0] + preview_ext
|
||||
|
||||
# Download video directly using downloader
|
||||
downloader = await get_downloader()
|
||||
success, result = await downloader.download_file(
|
||||
images[0]['url'],
|
||||
preview_path,
|
||||
use_auth=False # Preview images typically don't need auth
|
||||
)
|
||||
if success:
|
||||
metadata.preview_url = preview_path.replace(os.sep, '/')
|
||||
metadata.preview_nsfw_level = images[0].get('nsfwLevel', 0)
|
||||
else:
|
||||
# For images, use WebP format for better performance
|
||||
with tempfile.NamedTemporaryFile(suffix='.png', delete=False) as temp_file:
|
||||
temp_path = temp_file.name
|
||||
|
||||
# Download the original image to temp path using downloader
|
||||
downloader = await get_downloader()
|
||||
success, content, headers = await downloader.download_to_memory(
|
||||
images[0]['url'],
|
||||
use_auth=False
|
||||
)
|
||||
if success:
|
||||
# Save to temp file
|
||||
with open(temp_path, 'wb') as f:
|
||||
f.write(content)
|
||||
# Optimize and convert to WebP
|
||||
preview_path = os.path.splitext(save_path)[0] + '.webp'
|
||||
|
||||
# Use ExifUtils to optimize and convert the image
|
||||
optimized_data, _ = ExifUtils.optimize_image(
|
||||
image_data=temp_path,
|
||||
target_width=CARD_PREVIEW_WIDTH,
|
||||
format='webp',
|
||||
quality=85,
|
||||
preserve_metadata=False
|
||||
)
|
||||
|
||||
# Save the optimized image
|
||||
with open(preview_path, 'wb') as f:
|
||||
f.write(optimized_data)
|
||||
|
||||
# Update metadata
|
||||
metadata.preview_url = preview_path.replace(os.sep, '/')
|
||||
metadata.preview_nsfw_level = images[0].get('nsfwLevel', 0)
|
||||
|
||||
# Remove temporary file
|
||||
try:
|
||||
os.unlink(temp_path)
|
||||
except Exception as e:
|
||||
logger.warning(f"Failed to delete temp file: {e}")
|
||||
first_image = images[0] if isinstance(images[0], dict) else None
|
||||
preview_url = first_image.get('url') if first_image else None
|
||||
media_type = (first_image.get('type') or '').lower() if first_image else ''
|
||||
nsfw_level = first_image.get('nsfwLevel', 0) if first_image else 0
|
||||
|
||||
def _extension_from_url(url: str, fallback: str) -> str:
|
||||
try:
|
||||
parsed = urlparse(url)
|
||||
except ValueError:
|
||||
return fallback
|
||||
ext = os.path.splitext(parsed.path)[1]
|
||||
return ext or fallback
|
||||
|
||||
preview_downloaded = False
|
||||
preview_path = None
|
||||
|
||||
if preview_url:
|
||||
downloader = await get_downloader()
|
||||
|
||||
if media_type == 'video':
|
||||
preview_ext = _extension_from_url(preview_url, '.mp4')
|
||||
preview_path = os.path.splitext(save_path)[0] + preview_ext
|
||||
rewritten_url, rewritten = rewrite_preview_url(preview_url, media_type='video')
|
||||
attempt_urls: List[str] = []
|
||||
if rewritten:
|
||||
attempt_urls.append(rewritten_url)
|
||||
attempt_urls.append(preview_url)
|
||||
|
||||
seen_attempts = set()
|
||||
for attempt in attempt_urls:
|
||||
if not attempt or attempt in seen_attempts:
|
||||
continue
|
||||
seen_attempts.add(attempt)
|
||||
success, _ = await downloader.download_file(
|
||||
attempt,
|
||||
preview_path,
|
||||
use_auth=False
|
||||
)
|
||||
if success:
|
||||
preview_downloaded = True
|
||||
break
|
||||
else:
|
||||
rewritten_url, rewritten = rewrite_preview_url(preview_url, media_type='image')
|
||||
if rewritten:
|
||||
preview_ext = _extension_from_url(preview_url, '.png')
|
||||
preview_path = os.path.splitext(save_path)[0] + preview_ext
|
||||
success, _ = await downloader.download_file(
|
||||
rewritten_url,
|
||||
preview_path,
|
||||
use_auth=False
|
||||
)
|
||||
if success:
|
||||
preview_downloaded = True
|
||||
|
||||
if not preview_downloaded:
|
||||
temp_path: str | None = None
|
||||
try:
|
||||
with tempfile.NamedTemporaryFile(suffix='.png', delete=False) as temp_file:
|
||||
temp_path = temp_file.name
|
||||
|
||||
success, content, _ = await downloader.download_to_memory(
|
||||
preview_url,
|
||||
use_auth=False
|
||||
)
|
||||
if success:
|
||||
with open(temp_path, 'wb') as temp_file_handle:
|
||||
temp_file_handle.write(content)
|
||||
preview_path = os.path.splitext(save_path)[0] + '.webp'
|
||||
|
||||
optimized_data, _ = ExifUtils.optimize_image(
|
||||
image_data=temp_path,
|
||||
target_width=CARD_PREVIEW_WIDTH,
|
||||
format='webp',
|
||||
quality=85,
|
||||
preserve_metadata=False
|
||||
)
|
||||
|
||||
with open(preview_path, 'wb') as preview_file:
|
||||
preview_file.write(optimized_data)
|
||||
|
||||
preview_downloaded = True
|
||||
finally:
|
||||
if temp_path and os.path.exists(temp_path):
|
||||
try:
|
||||
os.unlink(temp_path)
|
||||
except Exception as e:
|
||||
logger.warning(f"Failed to delete temp file: {e}")
|
||||
|
||||
if preview_downloaded and preview_path:
|
||||
metadata.preview_url = preview_path.replace(os.sep, '/')
|
||||
metadata.preview_nsfw_level = nsfw_level
|
||||
if download_id and download_id in self._active_downloads:
|
||||
self._active_downloads[download_id]['preview_path'] = preview_path
|
||||
|
||||
# Report preview download completion
|
||||
if progress_callback:
|
||||
await progress_callback(3) # 3% progress after preview download
|
||||
|
||||
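The `_extension_from_url` helper above keeps the preview's original extension while ignoring CDN query strings. It can be reproduced standalone (the sample URLs are illustrative):

```python
import os
from urllib.parse import urlparse

def extension_from_url(url: str, fallback: str) -> str:
    """Take the file extension from the URL path, ignoring any query string."""
    try:
        parsed = urlparse(url)
    except ValueError:
        return fallback
    ext = os.path.splitext(parsed.path)[1]
    return ext or fallback
```

Parsing the path first matters because `os.path.splitext` applied to the raw URL would treat everything after the last dot in the query string as the extension.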
@@ -520,11 +588,22 @@ class DownloadManager:
            last_error = None
            for download_url in download_urls:
                use_auth = download_url.startswith("https://civitai.com/api/download/")
+                download_kwargs = {
+                    "progress_callback": lambda progress, snapshot=None: self._handle_download_progress(
+                        progress,
+                        progress_callback,
+                        snapshot,
+                    ),
+                    "use_auth": use_auth,  # Only use authentication for Civitai downloads
+                }
+
+                if pause_event is not None:
+                    download_kwargs["pause_event"] = pause_event
+
                success, result = await downloader.download_file(
                    download_url,
                    save_path,  # Use full path instead of separate dir and filename
-                    progress_callback=lambda p: self._handle_download_progress(p, progress_callback),
-                    use_auth=use_auth  # Only use authentication for Civitai downloads
+                    **download_kwargs,
                )

                if success:
@@ -603,21 +682,37 @@ class DownloadManager:

            return {'success': False, 'error': str(e)}

-    async def _handle_download_progress(self, file_progress: float, progress_callback):
-        """Convert file download progress to overall progress
-
-        Args:
-            file_progress: Progress of file download (0-100)
-            progress_callback: Callback function for progress updates
-        """
-        if progress_callback:
-            # Scale file progress to 3-100 range (after preview download)
-            overall_progress = 3 + (file_progress * 0.97)  # 97% of progress for file download
-            await progress_callback(round(overall_progress))
+    async def _handle_download_progress(
+        self,
+        progress_update,
+        progress_callback,
+        snapshot=None,
+    ):
+        """Convert file download progress to overall progress."""
+
+        if not progress_callback:
+            return
+
+        file_progress, original_snapshot = self._normalize_progress(progress_update, snapshot)
+        overall_progress = 3 + (file_progress * 0.97)
+        overall_progress = max(0.0, min(overall_progress, 100.0))
+        rounded_progress = round(overall_progress)
+
+        normalized_snapshot: Optional[DownloadProgress] = None
+        if original_snapshot is not None:
+            normalized_snapshot = DownloadProgress(
+                percent_complete=overall_progress,
+                bytes_downloaded=original_snapshot.bytes_downloaded,
+                total_bytes=original_snapshot.total_bytes,
+                bytes_per_second=original_snapshot.bytes_per_second,
+                timestamp=original_snapshot.timestamp,
+            )
+
+        await self._dispatch_progress(progress_callback, normalized_snapshot, rounded_progress)

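The scaling above reserves the first 3% of the overall bar for the preview download and maps the model-file progress onto the remaining 97%, clamped to [0, 100]. A standalone sketch:

```python
def overall_progress(file_progress: float) -> int:
    # Preview download occupies the first 3%; the model file maps onto
    # the remaining 97% of the overall progress bar.
    scaled = 3 + file_progress * 0.97
    return round(max(0.0, min(scaled, 100.0)))
```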
    async def cancel_download(self, download_id: str) -> Dict:
        """Cancel an active download by download_id

        Args:
            download_id: The unique identifier of the download task

@@ -631,10 +726,15 @@ class DownloadManager:
            # Get the task and cancel it
            task = self._download_tasks[download_id]
            task.cancel()

+            pause_event = self._pause_events.get(download_id)
+            if pause_event is not None:
+                pause_event.set()
+
            # Update status in active downloads
            if download_id in self._active_downloads:
                self._active_downloads[download_id]['status'] = 'cancelling'
+                self._active_downloads[download_id]['bytes_per_second'] = 0.0

            # Wait briefly for the task to acknowledge cancellation
            try:
@@ -675,7 +775,15 @@ class DownloadManager:
            except Exception as e:
                logger.error(f"Error deleting metadata file: {e}")

+            # Delete preview file if exists (.webp or .mp4)
+            preview_path_value = download_info.get('preview_path')
+            if preview_path_value and os.path.exists(preview_path_value):
+                try:
+                    os.unlink(preview_path_value)
+                    logger.debug(f"Deleted preview file: {preview_path_value}")
+                except Exception as e:
+                    logger.error(f"Error deleting preview file: {e}")
+
-            # Delete preview file if exists (.webp or .mp4)
+            # Delete preview file if exists (.webp or .mp4) for legacy paths
            for preview_ext in ['.webp', '.mp4']:
                preview_path = os.path.splitext(file_path)[0] + preview_ext
                if os.path.exists(preview_path):
@@ -689,7 +797,99 @@ class DownloadManager:
            except Exception as e:
                logger.error(f"Error cancelling download: {e}", exc_info=True)
                return {'success': False, 'error': str(e)}
+        finally:
+            self._pause_events.pop(download_id, None)

    async def pause_download(self, download_id: str) -> Dict:
        """Pause an active download without losing progress."""

        if download_id not in self._download_tasks:
            return {'success': False, 'error': 'Download task not found'}

        pause_event = self._pause_events.get(download_id)
        if pause_event is None:
            pause_event = asyncio.Event()
            pause_event.set()
            self._pause_events[download_id] = pause_event

        if not pause_event.is_set():
            return {'success': False, 'error': 'Download is already paused'}

        pause_event.clear()

        download_info = self._active_downloads.get(download_id)
        if download_info is not None:
            download_info['status'] = 'paused'
            download_info['bytes_per_second'] = 0.0

        return {'success': True, 'message': 'Download paused successfully'}

    async def resume_download(self, download_id: str) -> Dict:
        """Resume a previously paused download."""

        pause_event = self._pause_events.get(download_id)
        if pause_event is None:
            return {'success': False, 'error': 'Download task not found'}

        if pause_event.is_set():
            return {'success': False, 'error': 'Download is not paused'}

        pause_event.set()

        download_info = self._active_downloads.get(download_id)
        if download_info is not None:
            if download_info.get('status') == 'paused':
                download_info['status'] = 'downloading'
            download_info.setdefault('bytes_per_second', 0.0)

        return {'success': True, 'message': 'Download resumed successfully'}

@staticmethod
|
||||
def _coerce_progress_value(progress) -> float:
|
||||
try:
|
||||
return float(progress)
|
||||
except (TypeError, ValueError):
|
||||
return 0.0
|
||||
|
||||
@classmethod
|
||||
def _normalize_progress(
|
||||
cls,
|
||||
progress,
|
||||
snapshot: Optional[DownloadProgress] = None,
|
||||
) -> Tuple[float, Optional[DownloadProgress]]:
|
||||
if isinstance(progress, DownloadProgress):
|
||||
return progress.percent_complete, progress
|
||||
|
||||
if isinstance(snapshot, DownloadProgress):
|
||||
return snapshot.percent_complete, snapshot
|
||||
|
||||
if isinstance(progress, dict):
|
||||
if 'percent_complete' in progress:
|
||||
return cls._coerce_progress_value(progress['percent_complete']), snapshot
|
||||
if 'progress' in progress:
|
||||
return cls._coerce_progress_value(progress['progress']), snapshot
|
||||
|
||||
return cls._coerce_progress_value(progress), None
|
||||
|
||||
async def _dispatch_progress(
|
||||
self,
|
||||
callback,
|
||||
snapshot: Optional[DownloadProgress],
|
||||
progress_value: float,
|
||||
) -> None:
|
||||
try:
|
||||
if snapshot is not None:
|
||||
result = callback(snapshot, snapshot)
|
||||
else:
|
||||
result = callback(progress_value)
|
||||
except TypeError:
|
||||
result = callback(progress_value)
|
||||
|
||||
if inspect.isawaitable(result):
|
||||
await result
|
||||
elif asyncio.iscoroutine(result):
|
||||
await result
|
||||
|
||||
async def get_active_downloads(self) -> Dict:
|
||||
"""Get information about all active downloads
|
||||
|
||||
@@ -704,8 +904,11 @@ class DownloadManager:
|
||||
'model_version_id': info.get('model_version_id'),
|
||||
'progress': info.get('progress', 0),
|
||||
'status': info.get('status', 'unknown'),
|
||||
'error': info.get('error', None)
|
||||
'error': info.get('error', None),
|
||||
'bytes_downloaded': info.get('bytes_downloaded', 0),
|
||||
'total_bytes': info.get('total_bytes'),
|
||||
'bytes_per_second': info.get('bytes_per_second', 0.0),
|
||||
}
|
||||
for task_id, info in self._active_downloads.items()
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
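The pause/resume pair above gates the transfer loop on an `asyncio.Event`: `clear()` pauses, `set()` resumes, and the streaming coroutine awaits the event between chunks. A minimal standalone sketch of the pattern — the names `stream_chunks` and `main` are illustrative, not the manager's actual internals:

```python
import asyncio

async def stream_chunks(chunks, pause_event: asyncio.Event, out: list):
    """Consume chunks, parking between items while pause_event is cleared."""
    for chunk in chunks:
        # A cleared event blocks the coroutine here until resume() sets it again.
        await pause_event.wait()
        out.append(chunk)
        await asyncio.sleep(0)  # yield to the event loop, like real network I/O would

async def main():
    pause_event = asyncio.Event()
    pause_event.set()  # start in the "running" state, as the manager does
    received: list[bytes] = []

    task = asyncio.create_task(stream_chunks([b"a", b"b", b"c"], pause_event, received))
    await asyncio.sleep(0)      # let the consumer make progress
    pause_event.clear()         # pause: the task parks on wait() at its next chunk
    await asyncio.sleep(0.01)
    paused_at = len(received)   # how far it got before pausing
    pause_event.set()           # resume
    await task
    return paused_at, received

paused_at, received = asyncio.run(main())
```

Starting the event in the set state is what makes "not paused" the default; `pause_download` only has to `clear()` it, and no chunk is lost because the coroutine blocks before reading the next one.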
@@ -14,13 +14,28 @@ import os
import logging
import asyncio
import aiohttp
from datetime import datetime
from typing import Optional, Dict, Tuple, Callable, Union
from ..services.settings_manager import settings
from collections import deque
from dataclasses import dataclass
from datetime import datetime, timedelta
from email.utils import parsedate_to_datetime
from typing import Optional, Dict, Tuple, Callable, Union, Awaitable
from ..services.settings_manager import get_settings_manager
from .errors import RateLimitError

logger = logging.getLogger(__name__)


@dataclass(frozen=True)
class DownloadProgress:
    """Snapshot of a download transfer at a moment in time."""

    percent_complete: float
    bytes_downloaded: int
    total_bytes: Optional[int]
    bytes_per_second: float
    timestamp: float


class Downloader:
    """Unified downloader for all HTTP/HTTPS downloads in the application."""

@@ -94,12 +109,13 @@ class Downloader:

        # Check for app-level proxy settings
        proxy_url = None
        if settings.get('proxy_enabled', False):
            proxy_host = settings.get('proxy_host', '').strip()
            proxy_port = settings.get('proxy_port', '').strip()
            proxy_type = settings.get('proxy_type', 'http').lower()
            proxy_username = settings.get('proxy_username', '').strip()
            proxy_password = settings.get('proxy_password', '').strip()
        settings_manager = get_settings_manager()
        if settings_manager.get('proxy_enabled', False):
            proxy_host = settings_manager.get('proxy_host', '').strip()
            proxy_port = settings_manager.get('proxy_port', '').strip()
            proxy_type = settings_manager.get('proxy_type', 'http').lower()
            proxy_username = settings_manager.get('proxy_username', '').strip()
            proxy_password = settings_manager.get('proxy_password', '').strip()

        if proxy_host and proxy_port:
            # Build proxy URL

@@ -146,7 +162,8 @@ class Downloader:

        if use_auth:
            # Add CivitAI API key if available
            api_key = settings.get('civitai_api_key')
            settings_manager = get_settings_manager()
            api_key = settings_manager.get('civitai_api_key')
            if api_key:
                headers['Authorization'] = f'Bearer {api_key}'
        headers['Content-Type'] = 'application/json'
@@ -157,10 +174,11 @@
        self,
        url: str,
        save_path: str,
        progress_callback: Optional[Callable[[float], None]] = None,
        progress_callback: Optional[Callable[..., Awaitable[None]]] = None,
        use_auth: bool = False,
        custom_headers: Optional[Dict[str, str]] = None,
        allow_resume: bool = True
        allow_resume: bool = True,
        pause_event: Optional[asyncio.Event] = None,
    ) -> Tuple[bool, str]:
        """
        Download a file with resumable downloads and retry mechanism

@@ -172,6 +190,7 @@
            use_auth: Whether to include authentication headers (e.g., CivitAI API key)
            custom_headers: Additional headers to include in request
            allow_resume: Whether to support resumable downloads
            pause_event: Optional event that, when cleared, will pause streaming until set again

        Returns:
            Tuple[bool, str]: (success, save_path or error message)

@@ -246,7 +265,16 @@
                    if allow_resume:
                        os.rename(part_path, save_path)
                        if progress_callback:
                            await progress_callback(100)
                            await self._dispatch_progress_callback(
                                progress_callback,
                                DownloadProgress(
                                    percent_complete=100.0,
                                    bytes_downloaded=part_size,
                                    total_bytes=actual_size,
                                    bytes_per_second=0.0,
                                    timestamp=datetime.now().timestamp(),
                                ),
                            )
                        return True, save_path
                    # Remove corrupted part file and restart
                    os.remove(part_path)
@@ -274,6 +302,8 @@

                current_size = resume_offset
                last_progress_report_time = datetime.now()
                progress_samples: deque[tuple[datetime, int]] = deque()
                progress_samples.append((last_progress_report_time, current_size))

                # Ensure directory exists
                os.makedirs(os.path.dirname(save_path), exist_ok=True)

@@ -283,18 +313,41 @@
                mode = 'ab' if (allow_resume and resume_offset > 0) else 'wb'
                with open(part_path, mode) as f:
                    async for chunk in response.content.iter_chunked(self.chunk_size):
                        if pause_event is not None and not pause_event.is_set():
                            await pause_event.wait()
                        if chunk:
                            # Run blocking file write in executor
                            await loop.run_in_executor(None, f.write, chunk)
                            current_size += len(chunk)

                            # Limit progress update frequency to reduce overhead
                            now = datetime.now()
                            time_diff = (now - last_progress_report_time).total_seconds()

                            if progress_callback and total_size and time_diff >= 1.0:
                                progress = (current_size / total_size) * 100
                                await progress_callback(progress)
                            if progress_callback and time_diff >= 1.0:
                                progress_samples.append((now, current_size))
                                cutoff = now - timedelta(seconds=5)
                                while progress_samples and progress_samples[0][0] < cutoff:
                                    progress_samples.popleft()

                                percent = (current_size / total_size) * 100 if total_size else 0.0
                                bytes_per_second = 0.0
                                if len(progress_samples) >= 2:
                                    first_time, first_bytes = progress_samples[0]
                                    last_time, last_bytes = progress_samples[-1]
                                    elapsed = (last_time - first_time).total_seconds()
                                    if elapsed > 0:
                                        bytes_per_second = (last_bytes - first_bytes) / elapsed

                                progress_snapshot = DownloadProgress(
                                    percent_complete=percent,
                                    bytes_downloaded=current_size,
                                    total_bytes=total_size or None,
                                    bytes_per_second=bytes_per_second,
                                    timestamp=now.timestamp(),
                                )

                                await self._dispatch_progress_callback(progress_callback, progress_snapshot)
                                last_progress_report_time = now

                # Download completed successfully
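The hunk above derives transfer speed from a sliding five-second window of `(timestamp, bytes)` samples held in a `deque`, rather than averaging over the whole download, so the reported rate reflects current conditions. A standalone sketch of that windowed-rate calculation — the sample data below is made up for illustration:

```python
from collections import deque
from datetime import datetime, timedelta

def windowed_rate(samples: deque, now: datetime, window_seconds: float = 5.0) -> float:
    """Drop samples older than the window, then derive bytes/second
    from the oldest and newest remaining samples."""
    cutoff = now - timedelta(seconds=window_seconds)
    while samples and samples[0][0] < cutoff:
        samples.popleft()
    if len(samples) < 2:
        return 0.0
    first_time, first_bytes = samples[0]
    last_time, last_bytes = samples[-1]
    elapsed = (last_time - first_time).total_seconds()
    return (last_bytes - first_bytes) / elapsed if elapsed > 0 else 0.0

# Simulate a steady 1 MiB/s transfer, sampled once per second
start = datetime(2024, 1, 1, 12, 0, 0)
samples = deque((start + timedelta(seconds=i), i * 1_048_576) for i in range(10))
rate = windowed_rate(samples, now=start + timedelta(seconds=9))
```

Because old samples are popped from the left as new ones are appended on the right, the deque never grows beyond one window's worth of entries, which keeps per-chunk overhead constant.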
@@ -329,7 +382,15 @@

                # Ensure 100% progress is reported
                if progress_callback:
                    await progress_callback(100)
                    final_snapshot = DownloadProgress(
                        percent_complete=100.0,
                        bytes_downloaded=final_size,
                        total_bytes=total_size or final_size,
                        bytes_per_second=0.0,
                        timestamp=datetime.now().timestamp(),
                    )
                    await self._dispatch_progress_callback(progress_callback, final_snapshot)

                return True, save_path

@@ -361,7 +422,24 @@
                return False, str(e)

        return False, f"Download failed after {self.max_retries + 1} attempts"

    async def _dispatch_progress_callback(
        self,
        progress_callback: Callable[..., Awaitable[None]],
        snapshot: DownloadProgress,
    ) -> None:
        """Invoke a progress callback while preserving backward compatibility."""

        try:
            result = progress_callback(snapshot, snapshot)
        except TypeError:
            result = progress_callback(snapshot.percent_complete)

        if asyncio.iscoroutine(result):
            await result
        elif hasattr(result, "__await__"):
            await result
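`_dispatch_progress_callback` keeps old float-only callbacks working: it first tries the new two-argument form and falls back to the bare percentage when the call raises `TypeError`, then awaits the result if it is awaitable. A reduced sketch of that duck-typed dispatch, with stand-in callbacks:

```python
import asyncio
import inspect

async def dispatch(callback, snapshot_percent: float, snapshot: object):
    """Call `callback` with the rich snapshot when its signature allows,
    otherwise fall back to the bare percentage; await coroutines either way."""
    try:
        result = callback(snapshot, snapshot)
    except TypeError:
        # Legacy single-argument callback
        result = callback(snapshot_percent)
    if inspect.isawaitable(result):
        await result

calls = []

async def new_style(snapshot, raw):
    calls.append(("new", snapshot))

def legacy(percent):
    calls.append(("legacy", percent))

asyncio.run(dispatch(new_style, 42.0, "snap"))
asyncio.run(dispatch(legacy, 42.0, "snap"))
```

One caveat of probing by `TypeError`: an exception of that type raised *inside* a two-argument callback's body is indistinguishable from a signature mismatch, so such a callback would be invoked a second time in legacy form; inspecting the signature up front would avoid that at the cost of more code.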
    async def download_to_memory(
        self,
        url: str,

@@ -511,6 +589,19 @@
                    return False, "Access forbidden"
                elif response.status == 404:
                    return False, "Resource not found"
                elif response.status == 429:
                    retry_after = self._extract_retry_after(response.headers)
                    error_msg = "Request rate limited"
                    logger.warning(
                        "Rate limit encountered for %s %s; retry_after=%s",
                        method,
                        url,
                        retry_after,
                    )
                    return False, RateLimitError(
                        error_msg,
                        retry_after=retry_after,
                    )
                else:
                    return False, f"Request failed with status {response.status}"
@@ -532,6 +623,38 @@
            await self._create_session()
            logger.info("HTTP session refreshed due to settings change")

    @staticmethod
    def _extract_retry_after(headers) -> Optional[float]:
        """Parse the Retry-After header into seconds."""
        if not headers:
            return None

        header_value = headers.get("Retry-After")
        if not header_value:
            return None

        header_value = header_value.strip()
        if not header_value:
            return None

        if header_value.isdigit():
            try:
                seconds = float(header_value)
            except ValueError:
                return None
            return max(0.0, seconds)

        try:
            retry_datetime = parsedate_to_datetime(header_value)
        except (TypeError, ValueError):
            return None

        if retry_datetime.tzinfo is None:
            return None

        delta = retry_datetime - datetime.now(tz=retry_datetime.tzinfo)
        return max(0.0, delta.total_seconds())


# Global instance accessor
async def get_downloader() -> Downloader:
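`_extract_retry_after` accepts both forms that HTTP allows for `Retry-After`: a delta in seconds and an HTTP-date, the latter parsed with `email.utils.parsedate_to_datetime` and converted to a non-negative delay. A compact standalone version of the same logic (the function name is ours, not the class's):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime
from typing import Optional

def parse_retry_after(value: Optional[str], now: Optional[datetime] = None) -> Optional[float]:
    """Return a delay in seconds from a Retry-After header value, or None."""
    if not value or not value.strip():
        return None
    value = value.strip()
    if value.isdigit():                     # delta-seconds form, e.g. "120"
        return max(0.0, float(value))
    try:                                    # HTTP-date form
        retry_at = parsedate_to_datetime(value)
    except (TypeError, ValueError):
        return None
    if retry_at.tzinfo is None:
        return None
    now = now or datetime.now(tz=retry_at.tzinfo)
    return max(0.0, (retry_at - now).total_seconds())

secs = parse_retry_after("120")
dated = parse_retry_after(
    "Wed, 01 Jan 2025 00:02:00 GMT",
    now=datetime(2025, 1, 1, 0, 0, 0, tzinfo=timezone.utc),
)
```

Catching both `TypeError` and `ValueError` matters for portability: `parsedate_to_datetime` raised `TypeError` on unparseable input before Python 3.10 and `ValueError` from 3.10 on.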
@@ -11,13 +11,14 @@ logger = logging.getLogger(__name__)
class EmbeddingService(BaseModelService):
    """Embedding-specific service implementation"""

    def __init__(self, scanner):
    def __init__(self, scanner, update_service=None):
        """Initialize Embedding service

        Args:
            scanner: Embedding scanner instance
            update_service: Optional service for remote update tracking.
        """
        super().__init__("embedding", scanner, EmbeddingMetadata)
        super().__init__("embedding", scanner, EmbeddingMetadata, update_service=update_service)

    async def format_response(self, embedding_data: Dict) -> Dict:
        """Format Embedding data for API response"""
py/services/errors.py (new file, 21 lines)
@@ -0,0 +1,21 @@
"""Common service-level exception types."""

from __future__ import annotations

from typing import Optional


class RateLimitError(RuntimeError):
    """Raised when a remote provider rejects a request due to rate limiting."""

    def __init__(
        self,
        message: str,
        *,
        retry_after: Optional[float] = None,
        provider: Optional[str] = None,
    ) -> None:
        super().__init__(message)
        self.retry_after = retry_after
        self.provider = provider
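The new exception carries structured retry metadata on keyword-only attributes, so callers can build a user-facing hint without parsing the message string. A quick illustration of raising and handling it — the class body mirrors the file above; the `call_provider` stub and the 30-second value are invented for the example:

```python
from typing import Optional

class RateLimitError(RuntimeError):
    """Raised when a remote provider rejects a request due to rate limiting."""

    def __init__(self, message: str, *, retry_after: Optional[float] = None,
                 provider: Optional[str] = None) -> None:
        super().__init__(message)
        self.retry_after = retry_after
        self.provider = provider

def call_provider():
    # Simulate a provider surfacing an HTTP 429 as a typed exception
    raise RateLimitError("Request rate limited", retry_after=30.0, provider="civitai_api")

try:
    call_provider()
except RateLimitError as exc:
    hint = f"Rate limited by {exc.provider or 'metadata provider'}"
    if exc.retry_after and exc.retry_after > 0:
        hint += f"; retry after approximately {int(exc.retry_after)}s"
```

Subclassing `RuntimeError` keeps existing broad `except Exception` handlers working while letting rate limits be caught specifically where a different policy (wait and retry) applies.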
@@ -11,7 +11,7 @@ from pathlib import Path
from typing import Dict, List, Tuple

from .service_registry import ServiceRegistry
from .settings_manager import settings
from .settings_manager import get_settings_manager
from ..utils.example_images_paths import iter_library_roots


@@ -62,7 +62,8 @@ class ExampleImagesCleanupService:
    async def cleanup_example_image_folders(self) -> Dict[str, object]:
        """Clean empty or orphaned example image folders by moving them under a deleted bucket."""

        example_images_path = settings.get("example_images_path")
        settings_manager = get_settings_manager()
        example_images_path = settings_manager.get("example_images_path")
        if not example_images_path:
            logger.debug("Cleanup skipped: example images path not configured")
            return {
@@ -11,13 +11,14 @@ logger = logging.getLogger(__name__)
class LoraService(BaseModelService):
    """LoRA-specific service implementation"""

    def __init__(self, scanner):
    def __init__(self, scanner, update_service=None):
        """Initialize LoRA service

        Args:
            scanner: LoRA scanner instance
            update_service: Optional service for remote update tracking.
        """
        super().__init__("lora", scanner, LoraMetadata)
        super().__init__("lora", scanner, LoraMetadata, update_service=update_service)

    async def format_response(self, lora_data: Dict) -> Dict:
        """Format LoRA data for API response"""

@@ -178,4 +179,4 @@ class LoraService(BaseModelService):

    def find_duplicate_filenames(self) -> Dict:
        """Find LoRAs with conflicting filenames"""
        return self.scanner._hash_index.get_duplicate_filenames()
        return self.scanner._hash_index.get_duplicate_filenames()
@@ -3,7 +3,7 @@ import logging
import asyncio
from pathlib import Path
from typing import Optional
from .downloader import get_downloader
from .downloader import get_downloader, DownloadProgress

logger = logging.getLogger(__name__)

@@ -77,9 +77,15 @@ class MetadataArchiveManager:
                progress_callback("download", f"Downloading from {url}")

            # Custom progress callback to report download progress
            async def download_progress(progress):
            async def download_progress(progress, snapshot=None):
                if progress_callback:
                    progress_callback("download", f"Downloading archive... {progress:.1f}%")
                    if isinstance(progress, DownloadProgress):
                        percent = progress.percent_complete
                    elif isinstance(snapshot, DownloadProgress):
                        percent = snapshot.percent_complete
                    else:
                        percent = float(progress or 0)
                    progress_callback("download", f"Downloading archive... {percent:.1f}%")

            success, result = await downloader.download_file(
                url=url,
@@ -1,12 +1,14 @@
import os
import logging
from .model_metadata_provider import (
    ModelMetadataProvider,
    ModelMetadataProviderManager,
    SQLiteModelMetadataProvider,
    CivitaiModelMetadataProvider,
    CivArchiveModelMetadataProvider,
    FallbackMetadataProvider
)
from .settings_manager import settings
from .settings_manager import get_settings_manager
from .metadata_archive_manager import MetadataArchiveManager
from .service_registry import ServiceRegistry

@@ -21,7 +23,8 @@ async def initialize_metadata_providers():
    provider_manager.default_provider = None

    # Get settings
    enable_archive_db = settings.get('enable_metadata_archive_db', False)
    settings_manager = get_settings_manager()
    enable_archive_db = settings_manager.get('enable_metadata_archive_db', False)

    providers = []

@@ -53,26 +56,27 @@ async def initialize_metadata_providers():
    except Exception as e:
        logger.error(f"Failed to initialize Civitai API metadata provider: {e}")

    # Register CivArchive provider, but do NOT add to fallback providers
    # Register CivArchive provider, and also add it to the fallback providers
    try:
        from .model_metadata_provider import CivArchiveModelMetadataProvider
        civarchive_provider = CivArchiveModelMetadataProvider()
        provider_manager.register_provider('civarchive', civarchive_provider)
        logger.debug("CivArchive metadata provider registered (not included in fallback)")
        civarchive_client = await ServiceRegistry.get_civarchive_client()
        civarchive_provider = CivArchiveModelMetadataProvider(civarchive_client)
        provider_manager.register_provider('civarchive_api', civarchive_provider)
        providers.append(('civarchive_api', civarchive_provider))
        logger.debug("CivArchive metadata provider registered (also included in fallback)")
    except Exception as e:
        logger.error(f"Failed to initialize CivArchive metadata provider: {e}")

    # Set up fallback provider based on available providers
    if len(providers) > 1:
        # Always use Civitai API first, then Archive DB
        ordered_providers = []
        ordered_providers.extend([p[1] for p in providers if p[0] == 'civitai_api'])
        ordered_providers.extend([p[1] for p in providers if p[0] == 'sqlite'])
        # Always use Civitai API (it has better metadata), then CivArchive API, then Archive DB
        ordered_providers: list[tuple[str, ModelMetadataProvider]] = []
        ordered_providers.extend([p for p in providers if p[0] == 'civitai_api'])
        ordered_providers.extend([p for p in providers if p[0] == 'civarchive_api'])
        ordered_providers.extend([p for p in providers if p[0] == 'sqlite'])

        if ordered_providers:
            fallback_provider = FallbackMetadataProvider(ordered_providers)
            provider_manager.register_provider('fallback', fallback_provider, is_default=True)
            logger.debug(f"Fallback metadata provider registered with {len(ordered_providers)} providers, Civitai API first")
    elif len(providers) == 1:
        # Only one provider available, set it as default
        provider_name, provider = providers[0]

@@ -87,7 +91,8 @@ async def update_metadata_providers():
    """Update metadata providers based on current settings"""
    try:
        # Get current settings
        enable_archive_db = settings.get('enable_metadata_archive_db', False)
        settings_manager = get_settings_manager()
        enable_archive_db = settings_manager.get('enable_metadata_archive_db', False)

        # Reinitialize all providers with new settings
        provider_manager = await initialize_metadata_providers()
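The reordered fallback list is built by filtering the registered `(name, provider)` pairs in a fixed priority order: Civitai API first, then CivArchive API, then the SQLite archive; any provider that failed to initialize simply drops out of the result. A sketch of that ordering step in isolation — the provider objects here are string stand-ins:

```python
def order_providers(providers):
    """Filter (name, provider) pairs into a fixed priority order,
    skipping any provider that never registered."""
    priority = ['civitai_api', 'civarchive_api', 'sqlite']
    ordered = []
    for name in priority:
        ordered.extend(p for p in providers if p[0] == name)
    return ordered

# Registration order does not matter; civarchive failed to initialize in this run
available = [('sqlite', 'archive-db'), ('civitai_api', 'civitai-client')]
ordered = order_providers(available)
```

Filtering by a priority list instead of sorting keeps the policy explicit and makes adding a new provider a one-line change to `priority`.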
@@ -10,6 +10,7 @@ from typing import Any, Awaitable, Callable, Dict, Iterable, Optional

from ..services.settings_manager import SettingsManager
from ..utils.model_utils import determine_base_model
from .errors import RateLimitError

logger = logging.getLogger(__name__)

@@ -153,7 +154,12 @@ class MetadataSyncService:
        model_data: Dict[str, Any],
        update_cache_func: Callable[[str, str, Dict[str, Any]], Awaitable[bool]],
    ) -> tuple[bool, Optional[str]]:
        """Fetch metadata for a model and update both disk and cache state."""
        """Fetch metadata for a model and update both disk and cache state.

        Callers should hydrate ``model_data`` via ``MetadataManager.hydrate_model_data``
        before invoking this method so that the persisted payload retains all known
        metadata fields.
        """

        if not isinstance(model_data, dict):
            error = f"Invalid model_data type: {type(model_data)}"

@@ -162,42 +168,118 @@ class MetadataSyncService:

        metadata_path = os.path.splitext(file_path)[0] + ".metadata.json"
        enable_archive = self._settings.get("enable_metadata_archive_db", False)
        previous_source = model_data.get("metadata_source") or (model_data.get("civitai") or {}).get("source")

        try:
            provider_attempts: list[tuple[Optional[str], MetadataProviderProtocol]] = []
            sqlite_attempted = False

            if model_data.get("civitai_deleted") is True:
                if not enable_archive or model_data.get("db_checked") is True:
                if previous_source in (None, "civarchive"):
                    try:
                        provider_attempts.append(("civarchive_api", await self._get_provider("civarchive_api")))
                    except Exception as exc:  # pragma: no cover - provider resolution fault
                        logger.debug("Unable to resolve civarchive provider: %s", exc)

                if enable_archive and model_data.get("db_checked") is not True:
                    try:
                        provider_attempts.append(("sqlite", await self._get_provider("sqlite")))
                    except Exception as exc:  # pragma: no cover - provider resolution fault
                        logger.debug("Unable to resolve sqlite provider: %s", exc)

                if not provider_attempts:
                    if not enable_archive:
                        error_msg = "CivitAI model is deleted and metadata archive DB is not enabled"
                    else:
                    elif model_data.get("db_checked") is True:
                        error_msg = "CivitAI model is deleted and not found in metadata archive DB"
                    return (False, error_msg)
                    metadata_provider = await self._get_provider("sqlite")
                    else:
                        error_msg = "CivitAI model is deleted and no archive provider is available"
                    return False, error_msg
            else:
                metadata_provider = await self._get_default_provider()
                provider_attempts.append((None, await self._get_default_provider()))

            civitai_metadata, error = await metadata_provider.get_model_by_hash(sha256)
            if not civitai_metadata:
                if error == "Model not found":
            civitai_metadata: Optional[Dict[str, Any]] = None
            metadata_provider: Optional[MetadataProviderProtocol] = None
            provider_used: Optional[str] = None
            last_error: Optional[str] = None

            for provider_name, provider in provider_attempts:
                try:
                    civitai_metadata_candidate, error = await provider.get_model_by_hash(sha256)
                except RateLimitError as exc:
                    exc.provider = exc.provider or (provider_name or provider.__class__.__name__)
                    raise
                except Exception as exc:  # pragma: no cover - defensive logging
                    logger.error("Provider %s failed for hash %s: %s", provider_name, sha256, exc)
                    civitai_metadata_candidate, error = None, str(exc)

                if provider_name == "sqlite":
                    sqlite_attempted = True

                if civitai_metadata_candidate:
                    civitai_metadata = civitai_metadata_candidate
                    metadata_provider = provider
                    provider_used = provider_name
                    break

                last_error = error or last_error

            if civitai_metadata is None or metadata_provider is None:
                if sqlite_attempted:
                    model_data["db_checked"] = True

                if last_error == "Model not found":
                    model_data["from_civitai"] = False
                    model_data["civitai_deleted"] = True
                    model_data["db_checked"] = enable_archive
                    model_data["db_checked"] = sqlite_attempted or (enable_archive and model_data.get("db_checked", False))
                    model_data["last_checked_at"] = datetime.now().timestamp()

                    data_to_save = model_data.copy()
                    data_to_save.pop("folder", None)
                    await self._metadata_manager.save_metadata(file_path, data_to_save)

                default_error = (
                    "CivitAI model is deleted and metadata archive DB is not enabled"
                    if model_data.get("civitai_deleted") and not enable_archive
                    else "CivitAI model is deleted and not found in metadata archive DB"
                    if model_data.get("civitai_deleted") and (model_data.get("db_checked") is True or sqlite_attempted)
                    else "No provider returned metadata"
                )

                error_msg = (
                    f"Error fetching metadata: {error} (model_name={model_data.get('model_name', '')})"
                    f"Error fetching metadata: {last_error or default_error} "
                    f"(model_name={model_data.get('model_name', '')})"
                )
                logger.error(error_msg)
                return False, error_msg

            model_data["from_civitai"] = True
            model_data["civitai_deleted"] = civitai_metadata.get("source") == "archive_db"
            model_data["db_checked"] = enable_archive
            model_data["civitai_deleted"] = civitai_metadata.get("source") == "archive_db" or civitai_metadata.get("source") == "civarchive"
            model_data["db_checked"] = enable_archive and (
                civitai_metadata.get("source") == "archive_db" or sqlite_attempted
            )
            source = civitai_metadata.get("source") or "civitai_api"
            if source == "api":
                source = "civitai_api"
            elif provider_used == "civarchive_api" and source != "civarchive":
                source = "civarchive"
            elif provider_used == "sqlite":
                source = "archive_db"
            model_data["metadata_source"] = source
            model_data["last_checked_at"] = datetime.now().timestamp()

            readable_source = {
                "civitai_api": "CivitAI API",
                "civarchive": "CivArchive API",
                "archive_db": "Archive Database",
            }.get(source, source)

            logger.info(
                "Fetched metadata for %s via %s",
                model_data.get("model_name", ""),
                readable_source,
            )

            local_metadata = model_data.copy()
            local_metadata.pop("folder", None)

@@ -221,6 +303,16 @@ class MetadataSyncService:
            error_msg = f"Error fetching metadata - Missing key: {exc} in model_data={model_data}"
            logger.error(error_msg)
            return False, error_msg
        except RateLimitError as exc:
            provider_label = exc.provider or "metadata provider"
            wait_hint = (
                f"; retry after approximately {int(exc.retry_after)}s"
                if exc.retry_after and exc.retry_after > 0
                else ""
            )
            error_msg = f"Rate limited by {provider_label}{wait_hint}"
            logger.warning(error_msg)
            return False, error_msg
        except Exception as exc:  # pragma: no cover - error path
            error_msg = f"Error fetching metadata: {exc}"
            logger.error(error_msg, exc_info=True)
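The rewritten fetch path replaces a single provider call with an ordered attempt loop: each candidate is tried in turn, rate limits propagate immediately (tagged with the provider name), and other failures record a `last_error` and fall through to the next candidate. The control flow, reduced to its skeleton with stubbed synchronous providers (the real code is async and uses provider objects, not plain functions):

```python
def fetch_with_fallback(attempts, key):
    """Try (name, provider) pairs in order; return the first hit plus
    the provider that produced it, or (None, None, last_error)."""
    last_error = None
    for name, provider in attempts:
        try:
            result, error = provider(key)
        except Exception as exc:
            # Non-fatal provider failure: note it and fall through
            result, error = None, str(exc)
        if result:
            return result, name, None
        last_error = error or last_error
    return None, None, last_error

def civitai(key):
    return None, "Model not found"   # simulate a 404 from the primary API

def civarchive(key):
    return {"source": "civarchive", "hash": key}, None

result, used, err = fetch_with_fallback(
    [("civitai_api", civitai), ("civarchive_api", civarchive)], "abc123")
```

Keeping `last_error = error or last_error` means a later provider's empty error cannot erase an earlier, more specific one, which is what makes the final error message meaningful when every attempt fails.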
@@ -1,6 +1,6 @@
import asyncio
from typing import List, Dict, Tuple
from dataclasses import dataclass
from typing import Any, Dict, List, Optional, Tuple
from dataclasses import dataclass, field
from operator import itemgetter
from natsort import natsorted

@@ -17,10 +17,12 @@ SUPPORTED_SORT_MODES = [

@dataclass
class ModelCache:
    """Cache structure for model data with extensible sorting"""
    """Cache structure for model data with extensible sorting."""

    raw_data: List[Dict]
    folders: List[str]

    version_index: Dict[int, Dict] = field(default_factory=dict)

    def __post_init__(self):
        self._lock = asyncio.Lock()
        # Cache for last sort: (sort_key, order) -> sorted list

@@ -28,6 +30,58 @@ class ModelCache:
        self._last_sorted_data: List[Dict] = []
        # Default sort on init
        asyncio.create_task(self.resort())
        self.rebuild_version_index()

    @staticmethod
    def _normalize_version_id(value: Any) -> Optional[int]:
        """Normalize a potential version identifier into an integer."""

        if isinstance(value, int):
            return value
        if isinstance(value, str):
            try:
                return int(value)
            except ValueError:
                return None
        return None

    def rebuild_version_index(self) -> None:
        """Rebuild the version index from the current raw data."""

        self.version_index = {}
        for item in self.raw_data:
            self.add_to_version_index(item)

    def add_to_version_index(self, item: Dict) -> None:
        """Register a cache item in the version index if possible."""

        civitai_data = item.get('civitai') if isinstance(item, dict) else None
        if not isinstance(civitai_data, dict):
            return

        version_id = self._normalize_version_id(civitai_data.get('id'))
        if version_id is None:
            return

        self.version_index[version_id] = item

    def remove_from_version_index(self, item: Dict) -> None:
        """Remove a cache item from the version index if present."""

        civitai_data = item.get('civitai') if isinstance(item, dict) else None
        if not isinstance(civitai_data, dict):
            return

        version_id = self._normalize_version_id(civitai_data.get('id'))
        if version_id is None:
            return

        existing = self.version_index.get(version_id)
        if existing is item or (
            isinstance(existing, dict)
            and existing.get('file_path') == item.get('file_path')
        ):
            self.version_index.pop(version_id, None)

    async def resort(self):
        """Resort cached data according to last sort mode if set"""

@@ -41,6 +95,7 @@ class ModelCache:

        all_folders = set(l['folder'] for l in self.raw_data)
        self.folders = sorted(list(all_folders), key=lambda x: x.lower())
        self.rebuild_version_index()

    def _sort_data(self, data: List[Dict], sort_key: str, order: str) -> List[Dict]:
        """Sort data by sort_key and order"""
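The new `version_index` gives O(1) lookup from a Civitai version id to a cache item: ids arriving as strings are normalized to `int`, anything non-numeric is skipped, and removal only evicts an entry when it still points at the matching item. A trimmed sketch of the normalization and indexing (module-level functions here stand in for the class methods; the sample record is invented):

```python
from typing import Any, Dict, Optional

def normalize_version_id(value: Any) -> Optional[int]:
    """Coerce int or numeric-string ids to int; reject everything else."""
    if isinstance(value, int):
        return value
    if isinstance(value, str):
        try:
            return int(value)
        except ValueError:
            return None
    return None

index: Dict[int, Dict] = {}

def add(item: Dict) -> None:
    """Index an item under its civitai version id when one is present."""
    civitai = item.get('civitai')
    if isinstance(civitai, dict):
        version_id = normalize_version_id(civitai.get('id'))
        if version_id is not None:
            index[version_id] = item

old = {'file_path': '/loras/a.safetensors', 'civitai': {'id': '12345'}}
add(old)
found = index[12345]  # the string id was normalized to int on insert
```

The identity/`file_path` check in `remove_from_version_index` above exists for the same reason: if a record is replaced and the stale one is removed afterwards, the removal must not evict the fresh entry that now occupies the same id.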
@@ -6,7 +6,7 @@ from abc import ABC, abstractmethod

 from ..utils.utils import calculate_relative_path_for_model, remove_empty_dirs
 from ..utils.constants import AUTO_ORGANIZE_BATCH_SIZE
-from ..services.settings_manager import settings
+from ..services.settings_manager import get_settings_manager

 logger = logging.getLogger(__name__)

@@ -114,7 +114,8 @@ class ModelFileService:
            raise ValueError('No model roots configured')

        # Check if flat structure is configured for this model type
-        path_template = settings.get_download_path_template(self.model_type)
+        settings_manager = get_settings_manager()
+        path_template = settings_manager.get_download_path_template(self.model_type)
        result.is_flat_structure = not path_template

        # Initialize tracking
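The import change above swaps a module-level `settings` instance for a `get_settings_manager()` accessor. A minimal sketch of that lazy-singleton pattern, with an entirely hypothetical `SettingsManager` body, shows why construction is deferred until first use:

```python
# Lazy-singleton accessor sketch; the real SettingsManager lives in
# py/services/settings_manager.py and reads user settings.
_settings_manager = None

class SettingsManager:
    def get_download_path_template(self, model_type):
        # Hypothetical lookup table standing in for real settings
        return {'lora': '{author}/{name}'}.get(model_type, '')

def get_settings_manager():
    global _settings_manager
    if _settings_manager is None:
        _settings_manager = SettingsManager()
    return _settings_manager

assert get_settings_manager() is get_settings_manager()      # one shared instance
assert get_settings_manager().get_download_path_template('checkpoint') == ''
```

Deferring construction this way avoids import-order problems and makes the instance replaceable in tests.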
@@ -1,8 +1,11 @@
 from abc import ABC, abstractmethod
+import asyncio
 import json
 import logging
-from typing import Optional, Dict, Tuple, Any
+import random
+from typing import Optional, Dict, Tuple, Any, List, Sequence
 from .downloader import get_downloader
+from .errors import RateLimitError

 try:
     from bs4 import BeautifulSoup
@@ -61,6 +64,11 @@ class ModelMetadataProvider(ABC):
         """Fetch model version metadata"""
         pass

+    @abstractmethod
+    async def get_user_models(self, username: str) -> Optional[List[Dict]]:
+        """Fetch models owned by the specified user"""
+        pass
+
 class CivitaiModelMetadataProvider(ModelMetadataProvider):
     """Provider that uses Civitai API for metadata"""
@@ -79,123 +87,30 @@ class CivitaiModelMetadataProvider(ModelMetadataProvider):
     async def get_model_version_info(self, version_id: str) -> Tuple[Optional[Dict], Optional[str]]:
         return await self.client.get_model_version_info(version_id)

+    async def get_user_models(self, username: str) -> Optional[List[Dict]]:
+        return await self.client.get_user_models(username)
+
 class CivArchiveModelMetadataProvider(ModelMetadataProvider):
-    """Provider that uses CivArchive HTML page parsing for metadata"""
+    """Provider that uses CivArchive API for metadata"""

     def __init__(self, civarchive_client):
         self.client = civarchive_client

     async def get_model_by_hash(self, model_hash: str) -> Tuple[Optional[Dict], Optional[str]]:
-        """Not supported by CivArchive provider"""
-        return None, "CivArchive provider does not support hash lookup"
+        return await self.client.get_model_by_hash(model_hash)

     async def get_model_versions(self, model_id: str) -> Optional[Dict]:
-        """Not supported by CivArchive provider"""
-        return None
+        return await self.client.get_model_versions(model_id)

     async def get_model_version(self, model_id: int = None, version_id: int = None) -> Optional[Dict]:
-        """Get specific model version by parsing CivArchive HTML page"""
-        if model_id is None or version_id is None:
-            return None
-
-        try:
-            # Construct CivArchive URL
-            url = f"https://civarchive.com/models/{model_id}?modelVersionId={version_id}"
-
-            downloader = await get_downloader()
-            session = await downloader.session
-            async with session.get(url) as response:
-                if response.status != 200:
-                    return None
-
-                html_content = await response.text()
-
-                # Parse HTML to extract JSON data
-                soup_parser = _require_beautifulsoup()
-                soup = soup_parser(html_content, 'html.parser')
-                script_tag = soup.find('script', {'id': '__NEXT_DATA__', 'type': 'application/json'})
-
-                if not script_tag:
-                    return None
-
-                # Parse JSON content
-                json_data = json.loads(script_tag.string)
-                model_data = json_data.get('props', {}).get('pageProps', {}).get('model')
-
-                if not model_data or 'version' not in model_data:
-                    return None
-
-                # Extract version data as base
-                version = model_data['version'].copy()
-
-                # Restructure stats
-                if 'downloadCount' in version and 'ratingCount' in version and 'rating' in version:
-                    version['stats'] = {
-                        'downloadCount': version.pop('downloadCount'),
-                        'ratingCount': version.pop('ratingCount'),
-                        'rating': version.pop('rating')
-                    }
-
-                # Rename trigger to trainedWords
-                if 'trigger' in version:
-                    version['trainedWords'] = version.pop('trigger')
-
-                # Transform files data to expected format
-                if 'files' in version:
-                    transformed_files = []
-                    for file_data in version['files']:
-                        # Find first available mirror (deletedAt is null)
-                        available_mirror = None
-                        for mirror in file_data.get('mirrors', []):
-                            if mirror.get('deletedAt') is None:
-                                available_mirror = mirror
-                                break
-
-                        # Create transformed file entry
-                        transformed_file = {
-                            'id': file_data.get('id'),
-                            'sizeKB': file_data.get('sizeKB'),
-                            'name': available_mirror.get('filename', file_data.get('name')) if available_mirror else file_data.get('name'),
-                            'type': file_data.get('type'),
-                            'downloadUrl': available_mirror.get('url') if available_mirror else None,
-                            'primary': True,
-                            'mirrors': file_data.get('mirrors', [])
-                        }
-
-                        # Transform hash format
-                        if 'sha256' in file_data:
-                            transformed_file['hashes'] = {
-                                'SHA256': file_data['sha256'].upper()
-                            }
-
-                        transformed_files.append(transformed_file)
-
-                    version['files'] = transformed_files
-
-                # Add model information
-                version['model'] = {
-                    'name': model_data.get('name'),
-                    'type': model_data.get('type'),
-                    'nsfw': model_data.get('is_nsfw', False),
-                    'description': model_data.get('description'),
-                    'tags': model_data.get('tags', [])
-                }
-
-                version['creator'] = {
-                    'username': model_data.get('username'),
-                    'image': ''
-                }
-
-                # Add source identifier
-                version['source'] = 'civarchive'
-                version['is_deleted'] = json_data.get('query', {}).get('is_deleted', False)
-
-                return version
-
-        except Exception as e:
-            logger.error(f"Error fetching CivArchive model version {model_id}/{version_id}: {e}")
-            return None
+        return await self.client.get_model_version(model_id, version_id)

     async def get_model_version_info(self, version_id: str) -> Tuple[Optional[Dict], Optional[str]]:
-        """Not supported by CivArchive provider - requires both model_id and version_id"""
-        return None, "CivArchive provider requires both model_id and version_id"
+        return await self.client.get_model_version_info(version_id)
+
+    async def get_user_models(self, username: str) -> Optional[List[Dict]]:
+        """Not supported by CivArchive provider"""
+        return None

 class SQLiteModelMetadataProvider(ModelMetadataProvider):
     """Provider that uses SQLite database for metadata"""
@@ -329,20 +244,24 @@ class SQLiteModelMetadataProvider(ModelMetadataProvider):
         """Fetch model version metadata from SQLite database"""
         async with self._aiosqlite.connect(self.db_path) as db:
             db.row_factory = self._aiosqlite.Row

             # Get version details
             version_query = "SELECT model_id FROM model_versions WHERE id = ?"
             cursor = await db.execute(version_query, (version_id,))
             version_row = await cursor.fetchone()

             if not version_row:
                 return None, "Model version not found"

             model_id = version_row['model_id']

             # Build complete version data with model info
             version_data = await self._get_version_with_model_data(db, model_id, version_id)
             return version_data, None

+    async def get_user_models(self, username: str) -> Optional[List[Dict]]:
+        """Listing models by username is not supported for archive database"""
+        return None
+
     async def _get_version_with_model_data(self, db, model_id, version_id) -> Optional[Dict]:
         """Helper to build version data with model information"""
@@ -434,53 +353,166 @@ class SQLiteModelMetadataProvider(ModelMetadataProvider):

 class FallbackMetadataProvider(ModelMetadataProvider):
     """Try providers in order, return first successful result."""
-    def __init__(self, providers: list):
-        self.providers = providers
+    def __init__(
+        self,
+        providers: Sequence[ModelMetadataProvider | Tuple[str, ModelMetadataProvider]],
+        *,
+        rate_limit_retry_limit: int = 3,
+        rate_limit_base_delay: float = 1.5,
+        rate_limit_max_delay: float = 30.0,
+        rate_limit_jitter_ratio: float = 0.2,
+    ) -> None:
+        self.providers: List[ModelMetadataProvider] = []
+        self._provider_labels: List[str] = []
+
+        for entry in providers:
+            if isinstance(entry, tuple) and len(entry) == 2:
+                name, provider = entry
+            else:
+                provider = entry
+                name = provider.__class__.__name__
+            self.providers.append(provider)
+            self._provider_labels.append(str(name))
+
+        self._rate_limit_retry_limit = max(1, rate_limit_retry_limit)
+        self._rate_limit_base_delay = rate_limit_base_delay
+        self._rate_limit_max_delay = rate_limit_max_delay
+        self._rate_limit_jitter_ratio = max(0.0, rate_limit_jitter_ratio)

     async def get_model_by_hash(self, model_hash: str) -> Tuple[Optional[Dict], Optional[str]]:
-        for provider in self.providers:
+        for provider, label in self._iter_providers():
             try:
-                result, error = await provider.get_model_by_hash(model_hash)
+                result, error = await self._call_with_rate_limit(
+                    label,
+                    provider.get_model_by_hash,
+                    model_hash,
+                )
                 if result:
                     return result, error
+            except RateLimitError as exc:
+                exc.provider = exc.provider or label
+                raise exc
             except Exception as e:
-                logger.debug(f"Provider failed for get_model_by_hash: {e}")
+                logger.debug("Provider %s failed for get_model_by_hash: %s", label, e)
                 continue
         return None, "Model not found"

     async def get_model_versions(self, model_id: str) -> Optional[Dict]:
-        for provider in self.providers:
+        for provider, label in self._iter_providers():
             try:
-                result = await provider.get_model_versions(model_id)
+                result = await self._call_with_rate_limit(
+                    label,
+                    provider.get_model_versions,
+                    model_id,
+                )
                 if result:
                     return result
+            except RateLimitError as exc:
+                exc.provider = exc.provider or label
+                raise exc
             except Exception as e:
-                logger.debug(f"Provider failed for get_model_versions: {e}")
+                logger.debug("Provider %s failed for get_model_versions: %s", label, e)
                 continue
         return None

     async def get_model_version(self, model_id: int = None, version_id: int = None) -> Optional[Dict]:
-        for provider in self.providers:
+        for provider, label in self._iter_providers():
             try:
-                result = await provider.get_model_version(model_id, version_id)
+                result = await self._call_with_rate_limit(
+                    label,
+                    provider.get_model_version,
+                    model_id,
+                    version_id,
+                )
                 if result:
                     return result
+            except RateLimitError as exc:
+                exc.provider = exc.provider or label
+                raise exc
             except Exception as e:
-                logger.debug(f"Provider failed for get_model_version: {e}")
+                logger.debug("Provider %s failed for get_model_version: %s", label, e)
                 continue
         return None

     async def get_model_version_info(self, version_id: str) -> Tuple[Optional[Dict], Optional[str]]:
-        for provider in self.providers:
+        for provider, label in self._iter_providers():
             try:
-                result, error = await provider.get_model_version_info(version_id)
+                result, error = await self._call_with_rate_limit(
+                    label,
+                    provider.get_model_version_info,
+                    version_id,
+                )
                 if result:
                     return result, error
+            except RateLimitError as exc:
+                exc.provider = exc.provider or label
+                raise exc
             except Exception as e:
-                logger.debug(f"Provider failed for get_model_version_info: {e}")
+                logger.debug("Provider %s failed for get_model_version_info: %s", label, e)
                 continue
         return None, "No provider could retrieve the data"

+    async def get_user_models(self, username: str) -> Optional[List[Dict]]:
+        for provider, label in self._iter_providers():
+            try:
+                result = await self._call_with_rate_limit(
+                    label,
+                    provider.get_user_models,
+                    username,
+                )
+                if result is not None:
+                    return result
+            except RateLimitError as exc:
+                exc.provider = exc.provider or label
+                raise exc
+            except Exception as e:
+                logger.debug("Provider %s failed for get_user_models: %s", label, e)
+                continue
+        return None
+
+    def _iter_providers(self):
+        return zip(self.providers, self._provider_labels)
+
+    async def _call_with_rate_limit(
+        self,
+        label: str,
+        func,
+        *args,
+        **kwargs,
+    ):
+        attempt = 0
+        while True:
+            try:
+                return await func(*args, **kwargs)
+            except RateLimitError as exc:
+                attempt += 1
+                if attempt >= self._rate_limit_retry_limit:
+                    exc.provider = exc.provider or label
+                    raise exc
+                delay = self._calculate_rate_limit_delay(exc.retry_after, attempt)
+                logger.warning(
+                    "Provider %s rate limited request; retrying in %.2fs (attempt %s/%s)",
+                    label,
+                    delay,
+                    attempt,
+                    self._rate_limit_retry_limit,
+                )
+                await asyncio.sleep(delay)
+            except Exception:
+                raise
+
+    def _calculate_rate_limit_delay(self, retry_after: Optional[float], attempt: int) -> float:
+        if retry_after is not None:
+            return min(self._rate_limit_max_delay, max(0.0, retry_after))
+
+        base_delay = self._rate_limit_base_delay * (2 ** max(0, attempt - 1))
+        jitter_span = base_delay * self._rate_limit_jitter_ratio
+        if jitter_span > 0:
+            base_delay += random.uniform(-jitter_span, jitter_span)
+
+        return min(self._rate_limit_max_delay, max(0.0, base_delay))

 class ModelMetadataProviderManager:
     """Manager for selecting and using model metadata providers"""
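The retry policy in `_calculate_rate_limit_delay` is a standard honor-Retry-After-else-exponential-backoff-with-jitter scheme. A standalone restatement of the same arithmetic (parameter names are illustrative, defaults match the constructor above) makes the bounds easy to check:

```python
import random

def calculate_rate_limit_delay(retry_after, attempt,
                               base=1.5, max_delay=30.0, jitter_ratio=0.2):
    # Server-provided Retry-After wins, clamped to [0, max_delay]
    if retry_after is not None:
        return min(max_delay, max(0.0, retry_after))
    # Otherwise exponential backoff: base * 2^(attempt-1), +/- jitter
    delay = base * (2 ** max(0, attempt - 1))
    span = delay * jitter_ratio
    if span > 0:
        delay += random.uniform(-span, span)
    return min(max_delay, max(0.0, delay))

assert calculate_rate_limit_delay(5.0, 1) == 5.0          # server hint wins
assert calculate_rate_limit_delay(120.0, 1) == 30.0       # capped at max_delay
assert 1.2 <= calculate_rate_limit_delay(None, 1) <= 1.8  # 1.5 +/- 20% jitter
assert calculate_rate_limit_delay(None, 10) == 30.0       # large attempts hit the cap
```

The jitter spreads concurrent retries apart so clients that were throttled together do not all retry in the same instant.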
@@ -522,6 +554,11 @@ class ModelMetadataProviderManager:
         """Fetch model version info using specified or default provider"""
         provider = self._get_provider(provider_name)
         return await provider.get_model_version_info(version_id)

+    async def get_user_models(self, username: str, provider_name: str = None) -> Optional[List[Dict]]:
+        """Fetch models owned by the specified user"""
+        provider = self._get_provider(provider_name)
+        return await provider.get_user_models(username)
+
     def _get_provider(self, provider_name: str = None) -> ModelMetadataProvider:
         """Get provider by name or default provider"""
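The fallback-chain behavior that the diffs above build (try each provider in order, swallow per-provider failures, return the first hit) can be sketched as a toy async loop; the provider names and fake fetchers here are hypothetical:

```python
import asyncio

async def first_success(providers, *args):
    # Ask each (label, coroutine) pair in order; first truthy result wins.
    for label, fetch in providers:
        try:
            result = await fetch(*args)
            if result:
                return label, result
        except Exception:
            continue  # a failing provider just falls through to the next
    return None, None

async def failing(_id):
    raise RuntimeError("down")

async def working(_id):
    return {"modelVersions": [{"id": 1}]}

label, result = asyncio.run(
    first_success([("civitai", failing), ("civarchive", working)], 42)
)
assert label == "civarchive"
assert result["modelVersions"][0]["id"] == 1
```

The real class layers rate-limit retries on top of this loop and re-raises `RateLimitError` instead of falling through, so a throttled provider halts the chain rather than silently skipping to the next source.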
@@ -119,6 +119,12 @@ class ModelScanner:
             if value not in (None, '', []):
                 slim[key] = value

+        creator = civitai.get('creator')
+        if isinstance(creator, Mapping):
+            username = creator.get('username')
+            if username:
+                slim['creator'] = {'username': username}
+
         trained_words = civitai.get('trainedWords')
         if trained_words:
             slim['trainedWords'] = list(trained_words) if isinstance(trained_words, list) else trained_words
@@ -183,6 +189,7 @@ class ModelScanner:
             'favorite': bool(get_value('favorite', False)),
             'notes': notes,
             'usage_tips': usage_tips,
+            'metadata_source': get_value('metadata_source', None),
             'exclude': bool(get_value('exclude', False)),
             'db_checked': bool(get_value('db_checked', False)),
             'last_checked_at': float(get_value('last_checked_at', 0.0) or 0.0),
@@ -617,6 +624,7 @@ class ModelScanner:
         for i in range(0, len(new_files), batch_size):
             batch = new_files[i:i+batch_size]
             for path in batch:
+                logger.info(f"{self.model_type.capitalize()} Scanner: Processing {path}")
                 try:
                     # Find the appropriate root path for this file
                     root_path = None
@@ -634,7 +642,8 @@ class ModelScanner:
                     if model_data:
                         # Add to cache
                         self._cache.raw_data.append(model_data)
+                        self._cache.add_to_version_index(model_data)

                         # Update hash index if available
                         if 'sha256' in model_data and 'file_path' in model_data:
                             self._hash_index.add_entry(model_data['sha256'].lower(), model_data['file_path'])
@@ -661,7 +670,9 @@ class ModelScanner:
         for path in missing_files:
             try:
                 model_to_remove = path_to_item[path]

+                self._cache.remove_from_version_index(model_to_remove)
+
                 # Update tags count
                 for tag in model_to_remove.get('tags', []):
                     if tag in self._tags_count:
@@ -684,6 +695,8 @@ class ModelScanner:
         all_folders = set(item.get('folder', '') for item in self._cache.raw_data)
         self._cache.folders = sorted(list(all_folders), key=lambda x: x.lower())

+        self._cache.rebuild_version_index()
+
         # Resort cache
         await self._cache.resort()

@@ -829,6 +842,8 @@ class ModelScanner:
         else:
             self._cache.raw_data = list(scan_result.raw_data)

+        self._cache.rebuild_version_index()
+
         await self._cache.resort()

     async def _gather_model_data(
@@ -934,7 +949,8 @@ class ModelScanner:

         # Add to cache
         self._cache.raw_data.append(metadata_dict)
+        self._cache.add_to_version_index(metadata_dict)

         # Resort cache data
         await self._cache.resort()

@@ -1076,6 +1092,9 @@ class ModelScanner:
         cache = await self.get_cached_data()

         existing_item = next((item for item in cache.raw_data if item['file_path'] == original_path), None)
+        if existing_item:
+            cache.remove_from_version_index(existing_item)
+
         if existing_item and 'tags' in existing_item:
             for tag in existing_item.get('tags', []):
                 if tag in self._tags_count:
@@ -1106,6 +1125,7 @@ class ModelScanner:
         )

         cache.raw_data.append(cache_entry)
+        cache.add_to_version_index(cache_entry)

         sha_value = cache_entry.get('sha256')
         if sha_value:
@@ -1117,6 +1137,8 @@ class ModelScanner:
         for tag in cache_entry.get('tags', []):
             self._tags_count[tag] = self._tags_count.get(tag, 0) + 1

+        cache.rebuild_version_index()
+
         await cache.resort()

         if cache_modified:
@@ -1339,11 +1361,12 @@ class ModelScanner:
         # Update hash index
         for model in models_to_remove:
             file_path = model['file_path']
+            self._cache.remove_from_version_index(model)
             if hasattr(self, '_hash_index') and self._hash_index:
                 # Get the hash and filename before removal for duplicate checking
                 file_name = os.path.splitext(os.path.basename(file_path))[0]
                 hash_val = model.get('sha256', '').lower()

                 # Remove from hash index
                 self._hash_index.remove_by_path(file_path, hash_val)

@@ -1352,8 +1375,9 @@ class ModelScanner:

         # Update cache data
         self._cache.raw_data = [item for item in self._cache.raw_data if item['file_path'] not in file_paths]

         # Resort cache
+        self._cache.rebuild_version_index()
         await self._cache.resort()

         await self._persist_current_cache()
@@ -1393,16 +1417,17 @@ class ModelScanner:
         Returns:
             bool: True if the model version exists, False otherwise
         """
+        try:
+            normalized_id = int(model_version_id)
+        except (TypeError, ValueError):
+            return False
+
         try:
             cache = await self.get_cached_data()
-            if not cache or not cache.raw_data:
+            if not cache:
                 return False
-
-            for item in cache.raw_data:
-                if item.get('civitai') and item['civitai'].get('id') == model_version_id:
-                    return True
-
-            return False
+            return normalized_id in cache.version_index
         except Exception as e:
             logger.error(f"Error checking model version existence: {e}")
             return False
py/services/model_update_service.py (new file, 411 lines)
@@ -0,0 +1,411 @@
"""Service for tracking remote model version updates."""
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import sqlite3
|
||||
import time
|
||||
from dataclasses import dataclass
|
||||
from typing import Dict, Iterable, List, Mapping, Optional, Sequence
|
||||
|
||||
from .errors import RateLimitError
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
@dataclass
|
||||
class ModelUpdateRecord:
|
||||
"""Representation of a persisted update record."""
|
||||
|
||||
model_type: str
|
||||
model_id: int
|
||||
largest_version_id: Optional[int]
|
||||
version_ids: List[int]
|
||||
in_library_version_ids: List[int]
|
||||
last_checked_at: Optional[float]
|
||||
should_ignore: bool
|
||||
|
||||
def has_update(self) -> bool:
|
||||
"""Return True when remote versions exceed the local library."""
|
||||
|
||||
if self.should_ignore or not self.version_ids:
|
||||
return False
|
||||
local_versions = set(self.in_library_version_ids)
|
||||
return any(version_id not in local_versions for version_id in self.version_ids)
|
||||
|
||||
|
||||
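`has_update()` is effectively a set-difference test: any remote version id absent from the library means an update is pending, unless the model is ignored. A standalone restatement of that predicate:

```python
def has_update(version_ids, in_library_version_ids, should_ignore=False):
    # Ignored models, or models with no known remote versions, never flag
    if should_ignore or not version_ids:
        return False
    local = set(in_library_version_ids)
    return any(v not in local for v in version_ids)

assert has_update([1, 2, 3], [1, 2]) is True      # version 3 missing locally
assert has_update([1, 2], [1, 2, 99]) is False    # library is a superset
assert has_update([1, 2, 3], [1], should_ignore=True) is False
assert has_update([], []) is False                # nothing known remotely
```

Note the asymmetry: extra local versions (e.g. an id the remote no longer lists) never count as an update.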
class ModelUpdateService:
    """Persist and query remote model version metadata."""

    _SCHEMA = """
    CREATE TABLE IF NOT EXISTS model_update_status (
        model_type TEXT NOT NULL,
        model_id INTEGER NOT NULL,
        largest_version_id INTEGER,
        version_ids TEXT,
        in_library_version_ids TEXT,
        last_checked_at REAL,
        should_ignore INTEGER DEFAULT 0,
        PRIMARY KEY (model_type, model_id)
    )
    """

    def __init__(self, db_path: str, *, ttl_seconds: int = 24 * 60 * 60) -> None:
        self._db_path = db_path
        self._ttl_seconds = ttl_seconds
        self._lock = asyncio.Lock()
        self._schema_initialized = False
        self._ensure_directory()
        self._initialize_schema()

    def _ensure_directory(self) -> None:
        directory = os.path.dirname(self._db_path)
        if directory:
            os.makedirs(directory, exist_ok=True)

    def _connect(self) -> sqlite3.Connection:
        conn = sqlite3.connect(self._db_path, check_same_thread=False)
        conn.row_factory = sqlite3.Row
        return conn

    def _initialize_schema(self) -> None:
        if self._schema_initialized:
            return
        try:
            with self._connect() as conn:
                conn.execute("PRAGMA journal_mode=WAL")
                conn.execute("PRAGMA foreign_keys = ON")
                conn.executescript(self._SCHEMA)
            self._schema_initialized = True
        except Exception as exc:  # pragma: no cover - defensive guard
            logger.error("Failed to initialize update schema: %s", exc, exc_info=True)
            raise
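The composite primary key `(model_type, model_id)` in the schema above is what makes the later `ON CONFLICT ... DO UPDATE` upsert replace exactly one row per model. A quick in-memory check of that behavior (table DDL copied from `_SCHEMA`, the inserted values are arbitrary):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE IF NOT EXISTS model_update_status (
    model_type TEXT NOT NULL,
    model_id INTEGER NOT NULL,
    largest_version_id INTEGER,
    version_ids TEXT,
    in_library_version_ids TEXT,
    last_checked_at REAL,
    should_ignore INTEGER DEFAULT 0,
    PRIMARY KEY (model_type, model_id)
)
""")
conn.execute(
    "INSERT INTO model_update_status (model_type, model_id, largest_version_id) "
    "VALUES ('lora', 7, 100)"
)
# Second insert for the same (model_type, model_id) updates in place
conn.execute(
    "INSERT INTO model_update_status (model_type, model_id, largest_version_id) "
    "VALUES ('lora', 7, 200) "
    "ON CONFLICT(model_type, model_id) DO UPDATE SET "
    "largest_version_id = excluded.largest_version_id"
)
rows = conn.execute(
    "SELECT largest_version_id FROM model_update_status "
    "WHERE model_type='lora' AND model_id=7"
).fetchall()
assert rows == [(200,)]
```

The same model id can still appear once per model type, which keeps lora and checkpoint update state independent.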
    async def refresh_for_model_type(
        self,
        model_type: str,
        scanner,
        metadata_provider,
        *,
        force_refresh: bool = False,
    ) -> Dict[int, ModelUpdateRecord]:
        """Refresh update information for every model present in the cache."""
        local_versions = await self._collect_local_versions(scanner)
        results: Dict[int, ModelUpdateRecord] = {}
        for model_id, version_ids in local_versions.items():
            record = await self._refresh_single_model(
                model_type,
                model_id,
                version_ids,
                metadata_provider,
                force_refresh=force_refresh,
            )
            if record:
                results[model_id] = record
        return results

    async def refresh_single_model(
        self,
        model_type: str,
        model_id: int,
        scanner,
        metadata_provider,
        *,
        force_refresh: bool = False,
    ) -> Optional[ModelUpdateRecord]:
        """Refresh update information for a specific model id."""
        local_versions = await self._collect_local_versions(scanner)
        version_ids = local_versions.get(model_id, [])
        return await self._refresh_single_model(
            model_type,
            model_id,
            version_ids,
            metadata_provider,
            force_refresh=force_refresh,
        )
    async def update_in_library_versions(
        self,
        model_type: str,
        model_id: int,
        version_ids: Sequence[int],
    ) -> ModelUpdateRecord:
        """Persist a new set of in-library version identifiers."""
        normalized_versions = self._normalize_sequence(version_ids)
        async with self._lock:
            existing = self._get_record(model_type, model_id)
            record = ModelUpdateRecord(
                model_type=model_type,
                model_id=model_id,
                largest_version_id=existing.largest_version_id if existing else None,
                version_ids=list(existing.version_ids) if existing else [],
                in_library_version_ids=normalized_versions,
                last_checked_at=existing.last_checked_at if existing else None,
                should_ignore=existing.should_ignore if existing else False,
            )
            self._upsert_record(record)
            return record

    async def set_should_ignore(
        self, model_type: str, model_id: int, should_ignore: bool
    ) -> ModelUpdateRecord:
        """Toggle the ignore flag for a model."""
        async with self._lock:
            existing = self._get_record(model_type, model_id)
            if existing:
                record = ModelUpdateRecord(
                    model_type=model_type,
                    model_id=model_id,
                    largest_version_id=existing.largest_version_id,
                    version_ids=list(existing.version_ids),
                    in_library_version_ids=list(existing.in_library_version_ids),
                    last_checked_at=existing.last_checked_at,
                    should_ignore=should_ignore,
                )
            else:
                record = ModelUpdateRecord(
                    model_type=model_type,
                    model_id=model_id,
                    largest_version_id=None,
                    version_ids=[],
                    in_library_version_ids=[],
                    last_checked_at=None,
                    should_ignore=should_ignore,
                )
            self._upsert_record(record)
            return record

    async def get_record(self, model_type: str, model_id: int) -> Optional[ModelUpdateRecord]:
        """Return a cached record without triggering remote fetches."""
        async with self._lock:
            return self._get_record(model_type, model_id)

    async def has_update(self, model_type: str, model_id: int) -> bool:
        """Determine if a model has updates pending."""
        record = await self.get_record(model_type, model_id)
        return record.has_update() if record else False
    async def _refresh_single_model(
        self,
        model_type: str,
        model_id: int,
        local_versions: Sequence[int],
        metadata_provider,
        *,
        force_refresh: bool = False,
    ) -> Optional[ModelUpdateRecord]:
        normalized_local = self._normalize_sequence(local_versions)
        now = time.time()
        async with self._lock:
            existing = self._get_record(model_type, model_id)
            if existing and existing.should_ignore and not force_refresh:
                record = ModelUpdateRecord(
                    model_type=model_type,
                    model_id=model_id,
                    largest_version_id=existing.largest_version_id,
                    version_ids=list(existing.version_ids),
                    in_library_version_ids=normalized_local,
                    last_checked_at=existing.last_checked_at,
                    should_ignore=True,
                )
                self._upsert_record(record)
                return record

            should_fetch = force_refresh or not existing or self._is_stale(existing, now)
        # release lock during network request
        fetched_versions: List[int] | None = None
        refresh_succeeded = False
        if metadata_provider and should_fetch:
            try:
                response = await metadata_provider.get_model_versions(model_id)
            except RateLimitError:
                raise
            except Exception as exc:  # pragma: no cover - defensive log
                logger.error(
                    "Failed to fetch versions for model %s (%s): %s",
                    model_id,
                    model_type,
                    exc,
                    exc_info=True,
                )
            else:
                if response is not None:
                    extracted = self._extract_version_ids(response)
                    if extracted is not None:
                        fetched_versions = extracted
                        refresh_succeeded = True

        async with self._lock:
            existing = self._get_record(model_type, model_id)
            if existing and existing.should_ignore and not force_refresh:
                # Ignore state could have flipped while awaiting provider
                record = ModelUpdateRecord(
                    model_type=model_type,
                    model_id=model_id,
                    largest_version_id=existing.largest_version_id,
                    version_ids=list(existing.version_ids),
                    in_library_version_ids=normalized_local,
                    last_checked_at=existing.last_checked_at,
                    should_ignore=True,
                )
                self._upsert_record(record)
                return record

            version_ids = (
                fetched_versions
                if refresh_succeeded
                else (list(existing.version_ids) if existing else [])
            )
            largest = max(version_ids) if version_ids else None
            last_checked = now if refresh_succeeded else (
                existing.last_checked_at if existing else None
            )
            record = ModelUpdateRecord(
                model_type=model_type,
                model_id=model_id,
                largest_version_id=largest,
                version_ids=version_ids,
                in_library_version_ids=normalized_local,
                last_checked_at=last_checked,
                should_ignore=existing.should_ignore if existing else False,
            )
            self._upsert_record(record)
            return record
async def _collect_local_versions(self, scanner) -> Dict[int, List[int]]:
        cache = await scanner.get_cached_data()
        mapping: Dict[int, set[int]] = {}
        if not cache or not getattr(cache, "raw_data", None):
            return {}

        for item in cache.raw_data:
            civitai = item.get("civitai") if isinstance(item, dict) else None
            if not isinstance(civitai, dict):
                continue
            model_id = self._normalize_int(civitai.get("modelId"))
            version_id = self._normalize_int(civitai.get("id"))
            if model_id is None or version_id is None:
                continue
            mapping.setdefault(model_id, set()).add(version_id)

        return {model_id: sorted(ids) for model_id, ids in mapping.items()}

    def _is_stale(self, record: ModelUpdateRecord, now: float) -> bool:
        if record.last_checked_at is None:
            return True
        return (now - record.last_checked_at) >= self._ttl_seconds

    @staticmethod
    def _normalize_int(value) -> Optional[int]:
        try:
            if value is None:
                return None
            return int(value)
        except (TypeError, ValueError):
            return None

    def _normalize_sequence(self, values: Sequence[int]) -> List[int]:
        normalized = [
            item
            for item in (self._normalize_int(value) for value in values)
            if item is not None
        ]
        return sorted(dict.fromkeys(normalized))

    def _extract_version_ids(self, response) -> Optional[List[int]]:
        if not isinstance(response, Mapping):
            return None
        versions = response.get("modelVersions")
        if versions is None:
            return []
        if not isinstance(versions, Iterable):
            return None
        normalized = []
        for entry in versions:
            if isinstance(entry, Mapping):
                normalized_id = self._normalize_int(entry.get("id"))
            else:
                normalized_id = self._normalize_int(entry)
            if normalized_id is not None:
                normalized.append(normalized_id)
        return sorted(dict.fromkeys(normalized))
    def _get_record(self, model_type: str, model_id: int) -> Optional[ModelUpdateRecord]:
        with self._connect() as conn:
            row = conn.execute(
                """
                SELECT model_type, model_id, largest_version_id, version_ids,
                       in_library_version_ids, last_checked_at, should_ignore
                FROM model_update_status
                WHERE model_type = ? AND model_id = ?
                """,
                (model_type, model_id),
            ).fetchone()
        if not row:
            return None
        return ModelUpdateRecord(
            model_type=row["model_type"],
            model_id=int(row["model_id"]),
            largest_version_id=self._normalize_int(row["largest_version_id"]),
            version_ids=self._deserialize_json_array(row["version_ids"]),
            in_library_version_ids=self._deserialize_json_array(
                row["in_library_version_ids"]
            ),
            last_checked_at=row["last_checked_at"],
            should_ignore=bool(row["should_ignore"]),
        )

    def _upsert_record(self, record: ModelUpdateRecord) -> None:
        payload = (
            record.model_type,
            record.model_id,
            record.largest_version_id,
            json.dumps(record.version_ids),
            json.dumps(record.in_library_version_ids),
            record.last_checked_at,
            1 if record.should_ignore else 0,
        )
        with self._connect() as conn:
            conn.execute(
                """
                INSERT INTO model_update_status (
                    model_type, model_id, largest_version_id, version_ids,
                    in_library_version_ids, last_checked_at, should_ignore
                ) VALUES (?, ?, ?, ?, ?, ?, ?)
                ON CONFLICT(model_type, model_id) DO UPDATE SET
                    largest_version_id = excluded.largest_version_id,
                    version_ids = excluded.version_ids,
                    in_library_version_ids = excluded.in_library_version_ids,
                    last_checked_at = excluded.last_checked_at,
                    should_ignore = excluded.should_ignore
                """,
                payload,
            )
            conn.commit()
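The `_upsert_record` write path leans on SQLite's `ON CONFLICT ... DO UPDATE` upsert keyed on `(model_type, model_id)`. A minimal sketch of the same pattern against an in-memory database, with a reduced, hypothetical schema (requires SQLite 3.24+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE model_update_status ("
    " model_type TEXT, model_id INTEGER, last_checked_at REAL,"
    " PRIMARY KEY (model_type, model_id))"
)
upsert = (
    "INSERT INTO model_update_status (model_type, model_id, last_checked_at)"
    " VALUES (?, ?, ?)"
    " ON CONFLICT(model_type, model_id) DO UPDATE SET"
    " last_checked_at = excluded.last_checked_at"
)
conn.execute(upsert, ("lora", 42, 1.0))
conn.execute(upsert, ("lora", 42, 2.0))  # same key: updates in place instead of raising
rows = conn.execute("SELECT last_checked_at FROM model_update_status").fetchall()
print(rows)  # [(2.0,)]
```

The `ON CONFLICT` target requires the matching primary key or unique constraint, which is why the sketch declares `PRIMARY KEY (model_type, model_id)`.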
    @staticmethod
    def _deserialize_json_array(value) -> List[int]:
        if not value:
            return []
        try:
            data = json.loads(value)
        except (TypeError, json.JSONDecodeError):
            return []
        if isinstance(data, list):
            normalized = []
            for entry in data:
                try:
                    normalized.append(int(entry))
                except (TypeError, ValueError):
                    continue
            return sorted(dict.fromkeys(normalized))
        return []
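`_deserialize_json_array` deliberately swallows malformed payloads instead of raising, since the column may hold stale or hand-edited JSON. A standalone sketch mirroring the same tolerant decode and de-duplication:

```python
import json

def deserialize_json_array(value):
    # Mirrors _deserialize_json_array: bad JSON, non-list payloads, and
    # non-integer entries all degrade gracefully instead of raising.
    if not value:
        return []
    try:
        data = json.loads(value)
    except (TypeError, json.JSONDecodeError):
        return []
    if not isinstance(data, list):
        return []
    normalized = []
    for entry in data:
        try:
            normalized.append(int(entry))
        except (TypeError, ValueError):
            continue
    return sorted(dict.fromkeys(normalized))

print(deserialize_json_array('[3, "2", 3, null]'))  # [2, 3]
print(deserialize_json_array("not json"))           # []
```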
@@ -25,6 +25,34 @@ class PersistentModelCache:
    """Persist core model metadata and hash index data in SQLite."""

    _DEFAULT_FILENAME = "model_cache.sqlite"
    _MODEL_COLUMNS: Tuple[str, ...] = (
        "model_type",
        "file_path",
        "file_name",
        "model_name",
        "folder",
        "size",
        "modified",
        "sha256",
        "base_model",
        "preview_url",
        "preview_nsfw_level",
        "from_civitai",
        "favorite",
        "notes",
        "usage_tips",
        "metadata_source",
        "civitai_id",
        "civitai_model_id",
        "civitai_name",
        "civitai_creator_username",
        "trained_words",
        "civitai_deleted",
        "exclude",
        "db_checked",
        "last_checked_at",
    )
    _MODEL_UPDATE_COLUMNS: Tuple[str, ...] = _MODEL_COLUMNS[2:]
    _instances: Dict[str, "PersistentModelCache"] = {}
    _instance_lock = threading.Lock()
@@ -53,6 +81,11 @@ class PersistentModelCache:
    def is_enabled(self) -> bool:
        return os.environ.get("LORA_MANAGER_DISABLE_PERSISTENT_CACHE", "0") != "1"

    def get_database_path(self) -> str:
        """Expose the resolved SQLite database path."""

        return self._db_path

    def load_cache(self, model_type: str) -> Optional[PersistedCacheData]:
        if not self.is_enabled():
            return None

@@ -64,12 +97,9 @@ class PersistentModelCache:
        with self._db_lock:
            conn = self._connect(readonly=True)
            try:
                model_columns_sql = ", ".join(self._MODEL_COLUMNS[1:])
                rows = conn.execute(
                    "SELECT file_path, file_name, model_name, folder, size, modified, sha256, base_model,"
                    " preview_url, preview_nsfw_level, from_civitai, favorite, notes, usage_tips,"
                    " civitai_id, civitai_model_id, civitai_name, trained_words, exclude, db_checked,"
                    " last_checked_at"
                    " FROM models WHERE model_type = ?",
                    f"SELECT {model_columns_sql} FROM models WHERE model_type = ?",
                    (model_type,),
                ).fetchall()
@@ -101,8 +131,12 @@ class PersistentModelCache:
                    except json.JSONDecodeError:
                        trained_words = []

                    creator_username = row["civitai_creator_username"]
                    civitai: Optional[Dict] = None
                    if any(row[col] is not None for col in ("civitai_id", "civitai_model_id", "civitai_name")):
                    civitai_has_data = any(
                        row[col] is not None for col in ("civitai_id", "civitai_model_id", "civitai_name")
                    ) or trained_words or creator_username
                    if civitai_has_data:
                        civitai = {}
                        if row["civitai_id"] is not None:
                            civitai["id"] = row["civitai_id"]
@@ -112,6 +146,8 @@ class PersistentModelCache:
                            civitai["name"] = row["civitai_name"]
                        if trained_words:
                            civitai["trainedWords"] = trained_words
                        if creator_username:
                            civitai.setdefault("creator", {})["username"] = creator_username

                    item = {
                        "file_path": file_path,
@@ -128,11 +164,13 @@ class PersistentModelCache:
                        "favorite": bool(row["favorite"]),
                        "notes": row["notes"] or "",
                        "usage_tips": row["usage_tips"] or "",
                        "metadata_source": row["metadata_source"] or None,
                        "exclude": bool(row["exclude"]),
                        "db_checked": bool(row["db_checked"]),
                        "last_checked_at": row["last_checked_at"] or 0.0,
                        "tags": tags.get(file_path, []),
                        "civitai": civitai,
                        "civitai_deleted": bool(row["civitai_deleted"]),
                    }
                    raw_data.append(item)
@@ -159,45 +197,190 @@ class PersistentModelCache:
        conn = self._connect()
        try:
            conn.execute("PRAGMA foreign_keys = ON")
            conn.execute("DELETE FROM models WHERE model_type = ?", (model_type,))
            conn.execute("DELETE FROM model_tags WHERE model_type = ?", (model_type,))
            conn.execute("DELETE FROM hash_index WHERE model_type = ?", (model_type,))
            conn.execute("DELETE FROM excluded_models WHERE model_type = ?", (model_type,))
            conn.execute("BEGIN")

            model_rows = [self._prepare_model_row(model_type, item) for item in raw_data]
            conn.executemany(self._insert_model_sql(), model_rows)
            model_map: Dict[str, Tuple] = {
                row[1]: row for row in model_rows if row[1]  # row[1] is file_path
            }

            tag_rows = []
            existing_models = conn.execute(
                "SELECT "
                + ", ".join(self._MODEL_COLUMNS[1:])
                + " FROM models WHERE model_type = ?",
                (model_type,),
            ).fetchall()
            existing_model_map: Dict[str, sqlite3.Row] = {
                row["file_path"]: row for row in existing_models
            }

            to_remove_models = [
                (model_type, path)
                for path in existing_model_map.keys()
                if path not in model_map
            ]
            if to_remove_models:
                conn.executemany(
                    "DELETE FROM models WHERE model_type = ? AND file_path = ?",
                    to_remove_models,
                )
                conn.executemany(
                    "DELETE FROM model_tags WHERE model_type = ? AND file_path = ?",
                    to_remove_models,
                )
                conn.executemany(
                    "DELETE FROM hash_index WHERE model_type = ? AND file_path = ?",
                    to_remove_models,
                )
                conn.executemany(
                    "DELETE FROM excluded_models WHERE model_type = ? AND file_path = ?",
                    to_remove_models,
                )

            insert_rows: List[Tuple] = []
            update_rows: List[Tuple] = []

            for file_path, row in model_map.items():
                existing = existing_model_map.get(file_path)
                if existing is None:
                    insert_rows.append(row)
                    continue

                existing_values = tuple(
                    existing[column] for column in self._MODEL_COLUMNS[1:]
                )
                current_values = row[1:]
                if existing_values != current_values:
                    update_rows.append(row[2:] + (model_type, file_path))

            if insert_rows:
                conn.executemany(self._insert_model_sql(), insert_rows)

            if update_rows:
                set_clause = ", ".join(
                    f"{column} = ?"
                    for column in self._MODEL_UPDATE_COLUMNS
                )
                update_sql = (
                    f"UPDATE models SET {set_clause} WHERE model_type = ? AND file_path = ?"
                )
                conn.executemany(update_sql, update_rows)
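The insert/update/delete split above replaces a wipe-and-rewrite with a reconciliation pass, so unchanged rows are never rewritten. The core set arithmetic, sketched with illustrative file names instead of real cache rows:

```python
# Existing DB rows and the desired end state, keyed by file path; the
# tuple values stand in for full column tuples.
existing = {
    "a.safetensors": ("a.safetensors", "old-name"),
    "b.safetensors": ("b.safetensors", "keep"),
}
desired = {
    "b.safetensors": ("b.safetensors", "keep"),
    "c.safetensors": ("c.safetensors", "new"),
}

to_delete = [path for path in existing if path not in desired]
to_insert = [row for path, row in desired.items() if path not in existing]
to_update = [
    row for path, row in desired.items()
    if path in existing and existing[path] != row
]

print(to_delete)  # ['a.safetensors']
print(to_insert)  # [('c.safetensors', 'new')]
print(to_update)  # []
```

The same three-way partition drives the tag, hash-index, and excluded-model syncs that follow.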
            existing_tags_rows = conn.execute(
                "SELECT file_path, tag FROM model_tags WHERE model_type = ?",
                (model_type,),
            ).fetchall()
            existing_tags: Dict[str, set] = {}
            for row in existing_tags_rows:
                existing_tags.setdefault(row["file_path"], set()).add(row["tag"])

            new_tags: Dict[str, set] = {}
            for item in raw_data:
                file_path = item.get("file_path")
                if not file_path:
                    continue
                for tag in item.get("tags") or []:
                    tag_rows.append((model_type, file_path, tag))
            if tag_rows:
                tags = set(item.get("tags") or [])
                if tags:
                    new_tags[file_path] = tags

            tag_inserts: List[Tuple[str, str, str]] = []
            tag_deletes: List[Tuple[str, str, str]] = []

            all_tag_paths = set(existing_tags.keys()) | set(new_tags.keys())
            for path in all_tag_paths:
                existing_set = existing_tags.get(path, set())
                new_set = new_tags.get(path, set())
                to_add = new_set - existing_set
                to_remove = existing_set - new_set

                for tag in to_add:
                    tag_inserts.append((model_type, path, tag))
                for tag in to_remove:
                    tag_deletes.append((model_type, path, tag))

            if tag_deletes:
                conn.executemany(
                    "DELETE FROM model_tags WHERE model_type = ? AND file_path = ? AND tag = ?",
                    tag_deletes,
                )
            if tag_inserts:
                conn.executemany(
                    "INSERT INTO model_tags (model_type, file_path, tag) VALUES (?, ?, ?)",
                    tag_rows,
                    tag_inserts,
                )
            hash_rows: List[Tuple[str, str, str]] = []
            existing_hash_rows = conn.execute(
                "SELECT sha256, file_path FROM hash_index WHERE model_type = ?",
                (model_type,),
            ).fetchall()
            existing_hash_map: Dict[str, set] = {}
            for row in existing_hash_rows:
                sha_value = (row["sha256"] or "").lower()
                if not sha_value:
                    continue
                existing_hash_map.setdefault(sha_value, set()).add(row["file_path"])

            new_hash_map: Dict[str, set] = {}
            for sha_value, paths in hash_index.items():
                normalized_sha = (sha_value or "").lower()
                if not normalized_sha:
                    continue
                bucket = new_hash_map.setdefault(normalized_sha, set())
                for path in paths:
                    if not sha_value or not path:
                        continue
                    hash_rows.append((model_type, sha_value.lower(), path))
            if hash_rows:
                    if path:
                        bucket.add(path)

            hash_inserts: List[Tuple[str, str, str]] = []
            hash_deletes: List[Tuple[str, str, str]] = []

            all_shas = set(existing_hash_map.keys()) | set(new_hash_map.keys())
            for sha_value in all_shas:
                existing_paths = existing_hash_map.get(sha_value, set())
                new_paths = new_hash_map.get(sha_value, set())

                for path in existing_paths - new_paths:
                    hash_deletes.append((model_type, sha_value, path))
                for path in new_paths - existing_paths:
                    hash_inserts.append((model_type, sha_value, path))

            if hash_deletes:
                conn.executemany(
                    "DELETE FROM hash_index WHERE model_type = ? AND sha256 = ? AND file_path = ?",
                    hash_deletes,
                )
            if hash_inserts:
                conn.executemany(
                    "INSERT OR IGNORE INTO hash_index (model_type, sha256, file_path) VALUES (?, ?, ?)",
                    hash_rows,
                    hash_inserts,
                )
            excluded_rows = [(model_type, path) for path in excluded_models]
            if excluded_rows:
            existing_excluded_rows = conn.execute(
                "SELECT file_path FROM excluded_models WHERE model_type = ?",
                (model_type,),
            ).fetchall()
            existing_excluded = {row["file_path"] for row in existing_excluded_rows}
            new_excluded = {path for path in excluded_models if path}

            excluded_deletes = [
                (model_type, path)
                for path in existing_excluded - new_excluded
            ]
            excluded_inserts = [
                (model_type, path)
                for path in new_excluded - existing_excluded
            ]

            if excluded_deletes:
                conn.executemany(
                    "DELETE FROM excluded_models WHERE model_type = ? AND file_path = ?",
                    excluded_deletes,
                )
            if excluded_inserts:
                conn.executemany(
                    "INSERT OR IGNORE INTO excluded_models (model_type, file_path) VALUES (?, ?)",
                    excluded_rows,
                    excluded_inserts,
                )

            conn.commit()
        finally:
            conn.close()
@@ -248,10 +431,13 @@ class PersistentModelCache:
                    favorite INTEGER,
                    notes TEXT,
                    usage_tips TEXT,
                    metadata_source TEXT,
                    civitai_id INTEGER,
                    civitai_model_id INTEGER,
                    civitai_name TEXT,
                    civitai_creator_username TEXT,
                    trained_words TEXT,
                    civitai_deleted INTEGER,
                    exclude INTEGER,
                    db_checked INTEGER,
                    last_checked_at REAL,
@@ -279,11 +465,31 @@ class PersistentModelCache:
                );
                """
            )
            self._ensure_additional_model_columns(conn)
            conn.commit()
            self._schema_initialized = True
        except Exception as exc:  # pragma: no cover - defensive guard
            logger.warning("Failed to initialize persistent cache schema: %s", exc)

    def _ensure_additional_model_columns(self, conn: sqlite3.Connection) -> None:
        try:
            existing_columns = {
                row["name"]
                for row in conn.execute("PRAGMA table_info(models)").fetchall()
            }
        except Exception:  # pragma: no cover - defensive guard
            return

        required_columns = {
            "metadata_source": "TEXT",
            "civitai_creator_username": "TEXT",
            "civitai_deleted": "INTEGER DEFAULT 0",
        }

        for column, definition in required_columns.items():
            if column not in existing_columns:
                conn.execute(f"ALTER TABLE models ADD COLUMN {column} {definition}")
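`_ensure_additional_model_columns` is an idempotent, additive migration: it inspects the live schema via `PRAGMA table_info` and issues `ALTER TABLE ... ADD COLUMN` only for columns that are missing. A runnable sketch of that pattern with a single hypothetical column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE models (file_path TEXT)")

# Additive migration: inspect the live schema, add only what is missing.
existing = {row["name"] for row in conn.execute("PRAGMA table_info(models)")}
for column, definition in {"metadata_source": "TEXT"}.items():
    if column not in existing:
        conn.execute(f"ALTER TABLE models ADD COLUMN {column} {definition}")

cols = [row["name"] for row in conn.execute("PRAGMA table_info(models)")]
print(cols)  # ['file_path', 'metadata_source']
```

Running the migration a second time is a no-op, which is what makes it safe on every startup.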
    def _connect(self, readonly: bool = False) -> sqlite3.Connection:
        uri = False
        path = self._db_path
@@ -306,6 +512,12 @@ class PersistentModelCache:
        else:
            trained_words_json = json.dumps(trained_words)

        metadata_source = item.get("metadata_source") or None
        creator_username = None
        creator_data = civitai.get("creator") if isinstance(civitai, dict) else None
        if isinstance(creator_data, dict):
            creator_username = creator_data.get("username") or None

        return (
            model_type,
            item.get("file_path"),
@@ -322,22 +534,22 @@ class PersistentModelCache:
            1 if item.get("favorite") else 0,
            item.get("notes"),
            item.get("usage_tips"),
            metadata_source,
            civitai.get("id"),
            civitai.get("modelId"),
            civitai.get("name"),
            creator_username,
            trained_words_json,
            1 if item.get("civitai_deleted") else 0,
            1 if item.get("exclude") else 0,
            1 if item.get("db_checked") else 0,
            float(item.get("last_checked_at") or 0.0),
        )

    def _insert_model_sql(self) -> str:
        return (
            "INSERT INTO models (model_type, file_path, file_name, model_name, folder, size, modified, sha256,"
            " base_model, preview_url, preview_nsfw_level, from_civitai, favorite, notes, usage_tips,"
            " civitai_id, civitai_model_id, civitai_name, trained_words, exclude, db_checked, last_checked_at)"
            " VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)"
        )
        columns = ", ".join(self._MODEL_COLUMNS)
        placeholders = ", ".join(["?"] * len(self._MODEL_COLUMNS))
        return f"INSERT INTO models ({columns}) VALUES ({placeholders})"
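Deriving the INSERT statement from `_MODEL_COLUMNS` keeps the SQL and the row-building code in lockstep, so adding a column becomes a one-place change. The generation itself, sketched with a shortened, made-up column tuple:

```python
# Shortened, illustrative column tuple; the real cache uses ~25 columns.
MODEL_COLUMNS = ("model_type", "file_path", "file_name")

columns = ", ".join(MODEL_COLUMNS)
placeholders = ", ".join(["?"] * len(MODEL_COLUMNS))
sql = f"INSERT INTO models ({columns}) VALUES ({placeholders})"
print(sql)  # INSERT INTO models (model_type, file_path, file_name) VALUES (?, ?, ?)
```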
    def _load_tags(self, conn: sqlite3.Connection, model_type: str) -> Dict[str, List[str]]:
        tag_rows = conn.execute(
@@ -351,7 +563,7 @@ class PersistentModelCache:


def get_persistent_cache() -> PersistentModelCache:
    from .settings_manager import settings as settings_service  # Local import to avoid cycles
    from .settings_manager import get_settings_manager  # Local import to avoid cycles

    library_name = settings_service.get_active_library_name()
    library_name = get_settings_manager().get_active_library_name()
    return PersistentModelCache.get_default(library_name)
@@ -5,8 +5,10 @@ from __future__ import annotations
import logging
import os
from typing import Awaitable, Callable, Dict, Optional, Sequence
from urllib.parse import urlparse

from ..utils.constants import CARD_PREVIEW_WIDTH, PREVIEW_EXTENSIONS
from ..utils.civitai_utils import rewrite_preview_url

logger = logging.getLogger(__name__)

@@ -45,23 +47,59 @@ class PreviewAssetService:
        base_name = os.path.splitext(os.path.splitext(os.path.basename(metadata_path))[0])[0]
        preview_dir = os.path.dirname(metadata_path)
        is_video = first_preview.get("type") == "video"
        preview_url = first_preview.get("url")

        if not preview_url:
            return

        def extension_from_url(url: str, fallback: str) -> str:
            try:
                parsed = urlparse(url)
            except ValueError:
                return fallback
            ext = os.path.splitext(parsed.path)[1]
            return ext or fallback

        downloader = await self._downloader_factory()

        if is_video:
            extension = ".mp4"
            extension = extension_from_url(preview_url, ".mp4")
            preview_path = os.path.join(preview_dir, base_name + extension)
            downloader = await self._downloader_factory()
            success, result = await downloader.download_file(
                first_preview["url"], preview_path, use_auth=False
            )
            if success:
                local_metadata["preview_url"] = preview_path.replace(os.sep, "/")
                local_metadata["preview_nsfw_level"] = first_preview.get("nsfwLevel", 0)
            rewritten_url, rewritten = rewrite_preview_url(preview_url, media_type="video")

            attempt_urls = []
            if rewritten:
                attempt_urls.append(rewritten_url)
            attempt_urls.append(preview_url)

            seen: set[str] = set()
            for candidate in attempt_urls:
                if not candidate or candidate in seen:
                    continue
                seen.add(candidate)

                success, _ = await downloader.download_file(candidate, preview_path, use_auth=False)
                if success:
                    local_metadata["preview_url"] = preview_path.replace(os.sep, "/")
                    local_metadata["preview_nsfw_level"] = first_preview.get("nsfwLevel", 0)
                    return
        else:
            rewritten_url, rewritten = rewrite_preview_url(preview_url, media_type="image")
            if rewritten:
                extension = extension_from_url(preview_url, ".png")
                preview_path = os.path.join(preview_dir, base_name + extension)
                success, _ = await downloader.download_file(
                    rewritten_url, preview_path, use_auth=False
                )
                if success:
                    local_metadata["preview_url"] = preview_path.replace(os.sep, "/")
                    local_metadata["preview_nsfw_level"] = first_preview.get("nsfwLevel", 0)
                    return

            extension = ".webp"
            preview_path = os.path.join(preview_dir, base_name + extension)
            downloader = await self._downloader_factory()
            success, content, _headers = await downloader.download_to_memory(
                first_preview["url"], use_auth=False
                preview_url, use_auth=False
            )
            if not success:
                return
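`extension_from_url` keeps the remote rendition's real extension (e.g. `.webm` rather than the previously hard-coded `.mp4`) while falling back when the URL path carries none. The helper exercised standalone, with made-up URLs for illustration:

```python
import os
from urllib.parse import urlparse

def extension_from_url(url: str, fallback: str) -> str:
    # Take the extension from the URL path; query strings are ignored
    # because splitext runs on the parsed path, not the raw URL.
    try:
        parsed = urlparse(url)
    except ValueError:
        return fallback
    ext = os.path.splitext(parsed.path)[1]
    return ext or fallback

print(extension_from_url("https://example.com/previews/clip.webm?x=1", ".mp4"))  # .webm
print(extension_from_url("https://example.com/previews/clip", ".mp4"))           # .mp4
```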
@@ -61,7 +61,7 @@ class RecipeSharingService:

        safe_title = recipe.get("title", "").replace(" ", "_").lower()
        filename = f"recipe_{safe_title}{ext}" if safe_title else f"recipe_{recipe_id}{ext}"
        url_path = f"/api/recipe/{recipe_id}/share/download?t={timestamp}"
        url_path = f"/api/lm/recipe/{recipe_id}/share/download?t={timestamp}"
        return SharingResult({"success": True, "download_url": url_path, "filename": filename})

    async def prepare_download(self, *, recipe_scanner, recipe_id: str) -> DownloadInfo:
@@ -128,6 +128,49 @@ class ServiceRegistry:
    async def get_civitai_client(cls):
        """Get or create CivitAI client instance"""
        service_name = "civitai_client"

        if service_name in cls._services:
            return cls._services[service_name]

        async with cls._get_lock(service_name):
            # Double-check after acquiring lock
            if service_name in cls._services:
                return cls._services[service_name]

            # Import here to avoid circular imports
            from .civitai_client import CivitaiClient

            client = await CivitaiClient.get_instance()
            cls._services[service_name] = client
            logger.debug(f"Created and registered {service_name}")
            return client

    @classmethod
    async def get_model_update_service(cls):
        """Get or create the model update tracking service."""

        service_name = "model_update_service"

        if service_name in cls._services:
            return cls._services[service_name]

        async with cls._get_lock(service_name):
            if service_name in cls._services:
                return cls._services[service_name]

            from .model_update_service import ModelUpdateService
            from .persistent_model_cache import get_persistent_cache

            cache = get_persistent_cache()
            service = ModelUpdateService(cache.get_database_path())
            cls._services[service_name] = service
            logger.debug(f"Created and registered {service_name}")
            return service

    @classmethod
    async def get_civarchive_client(cls):
        """Get or create CivArchive client instance"""
        service_name = "civarchive_client"

        if service_name in cls._services:
            return cls._services[service_name]
@@ -138,9 +181,9 @@ class ServiceRegistry:
                return cls._services[service_name]

            # Import here to avoid circular imports
            from .civitai_client import CivitaiClient
            from .civarchive_client import CivArchiveClient

            client = await CivitaiClient.get_instance()
            client = await CivArchiveClient.get_instance()
            cls._services[service_name] = client
            logger.debug(f"Created and registered {service_name}")
            return client
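All three registry getters follow the same double-checked pattern: an unlocked fast path, then a re-check under a per-service `asyncio.Lock` so concurrent first callers share a single instance. A simplified, self-contained sketch; the `object()` factory stands in for the real client constructors:

```python
import asyncio

_services: dict = {}
_locks: dict = {}

async def get_service(name: str) -> object:
    # Fast path: no lock needed once the service exists.
    if name in _services:
        return _services[name]
    lock = _locks.setdefault(name, asyncio.Lock())
    async with lock:
        # Re-check after acquiring the lock: another caller may have
        # created the instance while we were waiting.
        if name in _services:
            return _services[name]
        _services[name] = object()  # stand-in for the real async factory
        return _services[name]

async def main() -> bool:
    a, b = await asyncio.gather(get_service("client"), get_service("client"))
    return a is b

print(asyncio.run(main()))  # True
```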
@@ -3,9 +3,17 @@ import json
import os
import logging
from datetime import datetime, timezone
from typing import Any, Dict, Iterable, List, Mapping, Optional
from threading import Lock
from typing import Any, Dict, Iterable, List, Mapping, Optional, Sequence

from ..utils.constants import DEFAULT_PRIORITY_TAG_CONFIG
from ..utils.settings_paths import ensure_settings_file
from ..utils.tag_priorities import (
    PriorityTagEntry,
    collect_canonical_tags,
    parse_priority_tag_string,
    resolve_priority_tag,
)

logger = logging.getLogger(__name__)

@@ -35,6 +43,8 @@ DEFAULT_SETTINGS: Dict[str, Any] = {
    "card_info_display": "always",
    "include_trigger_words": False,
    "compact_mode": False,
    "priority_tags": DEFAULT_PRIORITY_TAG_CONFIG.copy(),
    "model_name_display": "model_name",
}


@@ -62,6 +72,12 @@ class SettingsManager:
    def _ensure_default_settings(self) -> None:
        """Ensure all default settings keys exist"""
        updated = False
        normalized_priority = self._normalize_priority_tag_config(
            self.settings.get("priority_tags")
        )
        if normalized_priority != self.settings.get("priority_tags"):
            self.settings["priority_tags"] = normalized_priority
            updated = True
        for key, value in self._get_default_settings().items():
            if key not in self.settings:
                if isinstance(value, dict):
@@ -384,8 +400,56 @@ class SettingsManager:
        # Ensure nested dicts are independent copies
        defaults['base_model_path_mappings'] = {}
        defaults['download_path_templates'] = {}
        defaults['priority_tags'] = DEFAULT_PRIORITY_TAG_CONFIG.copy()
        return defaults

    def _normalize_priority_tag_config(self, value: Any) -> Dict[str, str]:
        normalized: Dict[str, str] = {}
        if isinstance(value, Mapping):
            for key, raw in value.items():
                if not isinstance(key, str) or not isinstance(raw, str):
                    continue
                normalized[key] = raw.strip()

        for model_type, default_value in DEFAULT_PRIORITY_TAG_CONFIG.items():
            normalized.setdefault(model_type, default_value)

        return normalized

    def get_priority_tag_config(self) -> Dict[str, str]:
        stored_value = self.settings.get("priority_tags")
        normalized = self._normalize_priority_tag_config(stored_value)
        if normalized != stored_value:
            self.settings["priority_tags"] = normalized
            self._save_settings()
        return normalized.copy()

    def get_priority_tag_entries(self, model_type: str) -> List[PriorityTagEntry]:
        config = self.get_priority_tag_config()
        raw_config = config.get(model_type, "")
        return parse_priority_tag_string(raw_config)

    def resolve_priority_tag_for_model(
        self, tags: Sequence[str] | Iterable[str], model_type: str
    ) -> str:
        entries = self.get_priority_tag_entries(model_type)
        resolved = resolve_priority_tag(tags, entries)
        if resolved:
            return resolved

        for tag in tags:
            if isinstance(tag, str) and tag:
                return tag
        return ""

    def get_priority_tag_suggestions(self) -> Dict[str, List[str]]:
        suggestions: Dict[str, List[str]] = {}
        config = self.get_priority_tag_config()
        for model_type, raw_value in config.items():
            entries = parse_priority_tag_string(raw_value)
            suggestions[model_type] = collect_canonical_tags(entries)
        return suggestions

    def get(self, key: str, default: Any = None) -> Any:
        """Get setting value"""
        return self.settings.get(key, default)
@@ -688,4 +752,38 @@ class SettingsManager:

        return templates.get(model_type, '{base_model}/{first_tag}')

settings = SettingsManager()

_SETTINGS_MANAGER: Optional["SettingsManager"] = None
_SETTINGS_MANAGER_LOCK = Lock()
# Legacy module-level alias for backwards compatibility with callers that
# monkeypatch ``py.services.settings_manager.settings`` during tests.
settings: Optional["SettingsManager"] = None


def get_settings_manager() -> "SettingsManager":
    """Return the lazily initialised global :class:`SettingsManager`."""

    global _SETTINGS_MANAGER, settings
    if settings is not None:
        return settings

    if _SETTINGS_MANAGER is None:
        with _SETTINGS_MANAGER_LOCK:
            if _SETTINGS_MANAGER is None:
                _SETTINGS_MANAGER = SettingsManager()

    settings = _SETTINGS_MANAGER
    return _SETTINGS_MANAGER


def reset_settings_manager() -> None:
    """Reset the cached settings manager instance.

    Primarily intended for tests so they can configure the settings
    directory before the manager touches the filesystem.
    """

    global _SETTINGS_MANAGER, settings
    with _SETTINGS_MANAGER_LOCK:
        _SETTINGS_MANAGER = None
        settings = None
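`get_settings_manager` and `reset_settings_manager` implement a lazily constructed, thread-safe singleton that tests can tear down before the manager touches the filesystem. A reduced sketch of the same double-checked initialisation, with a stand-in `Config` class instead of the real `SettingsManager`:

```python
from threading import Lock

class Config:
    """Stand-in for the real SettingsManager."""

_INSTANCE = None
_LOCK = Lock()

def get_config() -> Config:
    # Double-checked lazy init: the unlocked read is the hot path; the
    # locked re-check guarantees a single construction under contention.
    global _INSTANCE
    if _INSTANCE is None:
        with _LOCK:
            if _INSTANCE is None:
                _INSTANCE = Config()
    return _INSTANCE

def reset_config() -> None:
    # Tests call the reset so the next get_config() re-reads a freshly
    # configured environment.
    global _INSTANCE
    with _LOCK:
        _INSTANCE = None

a = get_config()
assert get_config() is a  # cached
reset_config()
print(get_config() is a)  # False -- a fresh instance after reset
```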
@@ -6,6 +6,7 @@ import logging
from typing import Any, Dict, Optional, Protocol, Sequence

from ..metadata_sync_service import MetadataSyncService
from ...utils.metadata_manager import MetadataManager


class MetadataRefreshProgressReporter(Protocol):
@@ -70,6 +71,7 @@ class BulkMetadataRefreshUseCase:
        for model in to_process:
            try:
                original_name = model.get("model_name")
                await MetadataManager.hydrate_model_data(model)
                result, _ = await self._metadata_sync.fetch_and_update_model(
                    sha256=model["sha256"],
                    file_path=model["file_path"],
@@ -155,11 +155,21 @@ class WebSocketManager:

    async def broadcast_download_progress(self, download_id: str, data: Dict):
        """Send progress update to specific download client"""
        # Store simplified progress data in memory (only progress percentage)
        self._download_progress[download_id] = {
        progress_entry = {
            'progress': data.get('progress', 0),
            'timestamp': datetime.now()
            'timestamp': datetime.now(),
        }

        for field in ('bytes_downloaded', 'total_bytes', 'bytes_per_second'):
            if field in data:
                progress_entry[field] = data[field]

        if 'status' in data:
            progress_entry['status'] = data['status']
        if 'message' in data:
            progress_entry['message'] = data['message']

        self._download_progress[download_id] = progress_entry

        if download_id not in self._download_websockets:
            logger.debug(f"No WebSocket found for download ID: {download_id}")
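The progress snapshot now carries throughput fields only when the event supplies them, keeping the in-memory entry small. The shaping logic in isolation, with an illustrative event dict:

```python
from datetime import datetime

# Illustrative event payload; real events come from the downloader.
data = {"progress": 42, "bytes_downloaded": 1024, "status": "downloading"}

progress_entry = {"progress": data.get("progress", 0), "timestamp": datetime.now()}
for field in ("bytes_downloaded", "total_bytes", "bytes_per_second"):
    if field in data:
        progress_entry[field] = data[field]
if "status" in data:
    progress_entry["status"] = data["status"]
if "message" in data:
    progress_entry["message"] = data["message"]

print(sorted(k for k in progress_entry if k != "timestamp"))
# ['bytes_downloaded', 'progress', 'status']
```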
py/utils/civitai_utils.py (new file, 47 lines)
@@ -0,0 +1,47 @@
"""Utilities for working with Civitai assets."""

from __future__ import annotations

from urllib.parse import urlparse, urlunparse


def rewrite_preview_url(source_url: str | None, media_type: str | None = None) -> tuple[str | None, bool]:
    """Rewrite Civitai preview URLs to use optimized renditions.

    Args:
        source_url: Original preview URL from the Civitai API.
        media_type: Optional media type hint (e.g. ``"image"`` or ``"video"``).

    Returns:
        A tuple of the potentially rewritten URL and a flag indicating whether the
        replacement occurred. When the URL is not rewritten, the original value is
        returned with ``False``.
    """
    if not source_url:
        return source_url, False

    try:
        parsed = urlparse(source_url)
    except ValueError:
        return source_url, False

    if parsed.netloc.lower() != "image.civitai.com":
        return source_url, False

    replacement = "/width=450,optimized=true"
    if (media_type or "").lower() == "video":
        replacement = "/transcode=true,width=450,optimized=true"

    if "/original=true" not in parsed.path:
        return source_url, False

    updated_path = parsed.path.replace("/original=true", replacement, 1)
    if updated_path == parsed.path:
        return source_url, False

    rewritten = urlunparse(parsed._replace(path=updated_path))
    return rewritten, True


__all__ = ["rewrite_preview_url"]
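The rewrite only fires for `image.civitai.com` URLs whose path contains an `/original=true` segment; everything else passes through unchanged with `False`. A condensed, standalone copy of the function for experimentation (the asset path below is made up for illustration):

```python
from urllib.parse import urlparse, urlunparse

def rewrite_preview_url(source_url, media_type=None):
    # Condensed copy of py/utils/civitai_utils.rewrite_preview_url.
    if not source_url:
        return source_url, False
    try:
        parsed = urlparse(source_url)
    except ValueError:
        return source_url, False
    if parsed.netloc.lower() != "image.civitai.com":
        return source_url, False
    replacement = "/width=450,optimized=true"
    if (media_type or "").lower() == "video":
        replacement = "/transcode=true,width=450,optimized=true"
    if "/original=true" not in parsed.path:
        return source_url, False
    updated_path = parsed.path.replace("/original=true", replacement, 1)
    if updated_path == parsed.path:
        return source_url, False
    return urlunparse(parsed._replace(path=updated_path)), True

url = "https://image.civitai.com/abc123/original=true/preview.mp4"
print(rewrite_preview_url(url, media_type="video"))
# ('https://image.civitai.com/abc123/transcode=true,width=450,optimized=true/preview.mp4', True)
print(rewrite_preview_url("https://example.com/original=true/a.png"))
# ('https://example.com/original=true/a.png', False)
```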
@@ -48,6 +48,13 @@ SUPPORTED_MEDIA_EXTENSIONS = {
# Valid Lora types
VALID_LORA_TYPES = ['lora', 'locon', 'dora']

# Supported Civitai model types for user model queries (case-insensitive)
CIVITAI_USER_MODEL_TYPES = [
    *VALID_LORA_TYPES,
    'textualinversion',
    'checkpoint',
]

# Auto-organize settings
AUTO_ORGANIZE_BATCH_SIZE = 50  # Process models in batches to avoid overwhelming the system

@@ -57,4 +64,11 @@ CIVITAI_MODEL_TAGS = [
    'realistic', 'anime', 'toon', 'furry', 'style',
    'poses', 'background', 'tool', 'vehicle', 'buildings',
    'objects', 'assets', 'animal', 'action'
]

# Default priority tag configuration strings for each model type
DEFAULT_PRIORITY_TAG_CONFIG = {
    'lora': ', '.join(CIVITAI_MODEL_TAGS),
    'checkpoint': ', '.join(CIVITAI_MODEL_TAGS),
    'embedding': ', '.join(CIVITAI_MODEL_TAGS),
}
@@ -18,7 +18,7 @@ from ..utils.metadata_manager import MetadataManager
from .example_images_processor import ExampleImagesProcessor
from .example_images_metadata import MetadataUpdater
from ..services.downloader import get_downloader
from ..services.settings_manager import settings
from ..services.settings_manager import get_settings_manager


class ExampleImagesDownloadError(RuntimeError):

@@ -105,9 +105,10 @@ class DownloadManager:
        self._progress = _DownloadProgress()
        self._ws_manager = ws_manager
        self._state_lock = state_lock or asyncio.Lock()
        self._stop_requested = False

    def _resolve_output_dir(self, library_name: str | None = None) -> str:
        base_path = settings.get('example_images_path')
        base_path = get_settings_manager().get('example_images_path')
        if not base_path:
            return ''
        return ensure_library_root_exists(library_name)

@@ -126,7 +127,8 @@ class DownloadManager:
        model_types = data.get('model_types', ['lora', 'checkpoint'])
        delay = float(data.get('delay', 0.2))

        base_path = settings.get('example_images_path')
        settings_manager = get_settings_manager()
        base_path = settings_manager.get('example_images_path')

        if not base_path:
            error_msg = 'Example images path not configured in settings'

@@ -138,12 +140,13 @@ class DownloadManager:
            }
            raise DownloadConfigurationError(error_msg)

        active_library = settings.get_active_library_name()
        active_library = get_settings_manager().get_active_library_name()
        output_dir = self._resolve_output_dir(active_library)
        if not output_dir:
            raise DownloadConfigurationError('Example images path not configured in settings')

        self._progress.reset()
        self._stop_requested = False
        self._progress['status'] = 'running'
        self._progress['start_time'] = time.time()
        self._progress['end_time'] = None

@@ -151,7 +154,7 @@ class DownloadManager:
        progress_file = os.path.join(output_dir, '.download_progress.json')
        progress_source = progress_file
        if uses_library_scoped_folders():
            legacy_root = settings.get('example_images_path') or ''
            legacy_root = get_settings_manager().get('example_images_path') or ''
            legacy_progress = os.path.join(legacy_root, '.download_progress.json') if legacy_root else ''
            if legacy_progress and os.path.exists(legacy_progress) and not os.path.exists(progress_file):
                try:
@@ -266,6 +269,27 @@ class DownloadManager:
            'success': True,
            'message': 'Download resumed'
        }

    async def stop_download(self, request):
        """Stop the example images download after the current model completes."""

        async with self._state_lock:
            if not self._is_downloading:
                raise DownloadNotRunningError()

            if self._progress['status'] in {'completed', 'error', 'stopped'}:
                raise DownloadNotRunningError()

            if self._progress['status'] != 'stopping':
                self._stop_requested = True
                self._progress['status'] = 'stopping'

        await self._broadcast_progress(status='stopping')

        return {
            'success': True,
            'message': 'Download stopping'
        }

    async def _download_all_example_images(
        self,

@@ -310,6 +334,12 @@ class DownloadManager:

            # Process each model
            for i, (scanner_type, model, scanner) in enumerate(all_models):
                async with self._state_lock:
                    current_status = self._progress['status']

                if current_status not in {'running', 'paused', 'stopping'}:
                    break

                # Main logic for processing model is here, but actual operations are delegated to other classes
                was_remote_download = await self._process_model(
                    scanner_type,
@@ -320,24 +350,59 @@ class DownloadManager:
                    downloader,
                    library_name,
                )

                # Update progress
                self._progress['completed'] += 1
                await self._broadcast_progress(status='running')

                async with self._state_lock:
                    current_status = self._progress['status']
                    should_stop = self._stop_requested and current_status == 'stopping'

                broadcast_status = 'running' if current_status == 'running' else current_status
                await self._broadcast_progress(status=broadcast_status)

                if should_stop:
                    break

                # Only add delay after remote download of models, and not after processing the last model
                if was_remote_download and i < len(all_models) - 1 and self._progress['status'] == 'running':
                if (
                    was_remote_download
                    and i < len(all_models) - 1
                    and current_status == 'running'
                ):
                    await asyncio.sleep(delay)

            # Mark as completed
            self._progress['status'] = 'completed'
            self._progress['end_time'] = time.time()
            logger.debug(
                "Example images download completed: %s/%s models processed",
                self._progress['completed'],
                self._progress['total'],
            )
            await self._broadcast_progress(status='completed')

            async with self._state_lock:
                if self._stop_requested and self._progress['status'] == 'stopping':
                    self._progress['status'] = 'stopped'
                    self._progress['end_time'] = time.time()
                    self._stop_requested = False
                    final_status = 'stopped'
                elif self._progress['status'] not in {'error', 'stopped'}:
                    self._progress['status'] = 'completed'
                    self._progress['end_time'] = time.time()
                    self._stop_requested = False
                    final_status = 'completed'
                else:
                    final_status = self._progress['status']
                    self._stop_requested = False
                    if self._progress['end_time'] is None:
                        self._progress['end_time'] = time.time()

            if final_status == 'completed':
                logger.debug(
                    "Example images download completed: %s/%s models processed",
                    self._progress['completed'],
                    self._progress['total'],
                )
            elif final_status == 'stopped':
                logger.debug(
                    "Example images download stopped: %s/%s models processed",
                    self._progress['completed'],
                    self._progress['total'],
                )

            await self._broadcast_progress(status=final_status)

        except Exception as e:
            error_msg = f"Error during example images download: {str(e)}"

@@ -359,6 +424,7 @@ class DownloadManager:
        async with self._state_lock:
            self._is_downloading = False
            self._download_task = None
            self._stop_requested = False

    async def _process_model(
        self,

@@ -377,7 +443,7 @@ class DownloadManager:
            await asyncio.sleep(1)

        # Check if download should continue
        if self._progress['status'] != 'running':
        if self._progress['status'] not in {'running', 'stopping'}:
            logger.info(f"Download stopped: {self._progress['status']}")
            return False  # Return False to indicate no remote download happened
@@ -555,16 +621,18 @@ class DownloadManager:
        if not model_hashes:
            raise DownloadConfigurationError('Missing model_hashes parameter')

        base_path = settings.get('example_images_path')
        settings_manager = get_settings_manager()
        base_path = settings_manager.get('example_images_path')

        if not base_path:
            raise DownloadConfigurationError('Example images path not configured in settings')
        active_library = settings.get_active_library_name()
        active_library = settings_manager.get_active_library_name()
        output_dir = self._resolve_output_dir(active_library)
        if not output_dir:
            raise DownloadConfigurationError('Example images path not configured in settings')

        self._progress.reset()
        self._stop_requested = False
        self._progress['total'] = len(model_hashes)
        self._progress['status'] = 'running'
        self._progress['start_time'] = time.time()

@@ -586,10 +654,15 @@ class DownloadManager:

        async with self._state_lock:
            self._is_downloading = False
            final_status = self._progress['status']

        message = 'Force download completed'
        if final_status == 'stopped':
            message = 'Force download stopped'

        return {
            'success': True,
            'message': 'Force download completed',
            'message': message,
            'result': result
        }

@@ -647,6 +720,12 @@ class DownloadManager:
        # Process each model
        success_count = 0
        for i, (scanner_type, model, scanner) in enumerate(models_to_process):
            async with self._state_lock:
                current_status = self._progress['status']

            if current_status not in {'running', 'paused', 'stopping'}:
                break

            # Force process this model regardless of previous status
            was_successful = await self._process_specific_model(
                scanner_type,
@@ -657,32 +736,65 @@ class DownloadManager:
                downloader,
                library_name,
            )

            if was_successful:
                success_count += 1

            # Update progress
            self._progress['completed'] += 1

            async with self._state_lock:
                current_status = self._progress['status']
                should_stop = self._stop_requested and current_status == 'stopping'

            broadcast_status = 'running' if current_status == 'running' else current_status
            # Send progress update via WebSocket
            await self._broadcast_progress(status='running')
            await self._broadcast_progress(status=broadcast_status)

            if should_stop:
                break

            # Only add delay after remote download, and not after processing the last model
            if was_successful and i < len(models_to_process) - 1 and self._progress['status'] == 'running':
            if (
                was_successful
                and i < len(models_to_process) - 1
                and current_status == 'running'
            ):
                await asyncio.sleep(delay)

        # Mark as completed
        self._progress['status'] = 'completed'
        self._progress['end_time'] = time.time()
        logger.debug(
            "Forced example images download completed: %s/%s models processed",
            self._progress['completed'],
            self._progress['total'],
        )

        async with self._state_lock:
            if self._stop_requested and self._progress['status'] == 'stopping':
                self._progress['status'] = 'stopped'
                self._progress['end_time'] = time.time()
                self._stop_requested = False
                final_status = 'stopped'
            elif self._progress['status'] not in {'error', 'stopped'}:
                self._progress['status'] = 'completed'
                self._progress['end_time'] = time.time()
                self._stop_requested = False
                final_status = 'completed'
            else:
                final_status = self._progress['status']
                self._stop_requested = False
                if self._progress['end_time'] is None:
                    self._progress['end_time'] = time.time()

        if final_status == 'completed':
            logger.debug(
                "Forced example images download completed: %s/%s models processed",
                self._progress['completed'],
                self._progress['total'],
            )
        elif final_status == 'stopped':
            logger.debug(
                "Forced example images download stopped: %s/%s models processed",
                self._progress['completed'],
                self._progress['total'],
            )

        # Send final progress via WebSocket
        await self._broadcast_progress(status='completed')
        await self._broadcast_progress(status=final_status)

        return {
            'total': self._progress['total'],
            'processed': self._progress['completed'],

@@ -724,7 +836,7 @@ class DownloadManager:
            await asyncio.sleep(1)

        # Check if download should continue
        if self._progress['status'] != 'running':
        if self._progress['status'] not in {'running', 'stopping'}:
            logger.info(f"Download stopped: {self._progress['status']}")
            return False
@@ -3,7 +3,7 @@ import os
import sys
import subprocess
from aiohttp import web
from ..services.settings_manager import settings
from ..services.settings_manager import get_settings_manager
from ..utils.example_images_paths import (
    get_model_folder,
    get_model_relative_path,

@@ -37,7 +37,8 @@ class ExampleImagesFileManager:
            }, status=400)

        # Get example images path from settings
        example_images_path = settings.get('example_images_path')
        settings_manager = get_settings_manager()
        example_images_path = settings_manager.get('example_images_path')
        if not example_images_path:
            return web.json_response({
                'success': False,

@@ -109,7 +110,8 @@ class ExampleImagesFileManager:
            }, status=400)

        # Get example images path from settings
        example_images_path = settings.get('example_images_path')
        settings_manager = get_settings_manager()
        example_images_path = settings_manager.get('example_images_path')
        if not example_images_path:
            return web.json_response({
                'success': False,

@@ -183,7 +185,8 @@ class ExampleImagesFileManager:
            }, status=400)

        # Get example images path from settings
        example_images_path = settings.get('example_images_path')
        settings_manager = get_settings_manager()
        example_images_path = settings_manager.get('example_images_path')
        if not example_images_path:
            return web.json_response({
                'has_images': False
@@ -1,12 +1,13 @@
import logging
import os
import re
from typing import TYPE_CHECKING, Any, Dict, Optional

from ..recipes.constants import GEN_PARAM_KEYS
from ..services.metadata_service import get_default_metadata_provider, get_metadata_provider
from ..services.metadata_sync_service import MetadataSyncService
from ..services.preview_asset_service import PreviewAssetService
from ..services.settings_manager import settings
from ..services.settings_manager import get_settings_manager
from ..services.downloader import get_downloader
from ..utils.constants import SUPPORTED_MEDIA_EXTENSIONS
from ..utils.exif_utils import ExifUtils
@@ -20,13 +21,46 @@ _preview_service = PreviewAssetService(
    exif_utils=ExifUtils,
)

_metadata_sync_service = MetadataSyncService(
    metadata_manager=MetadataManager,
    preview_service=_preview_service,
    settings=settings,
    default_metadata_provider_factory=get_default_metadata_provider,
    metadata_provider_selector=get_metadata_provider,
)
_metadata_sync_service: MetadataSyncService | None = None
_metadata_sync_service_settings: Optional["SettingsManager"] = None

if TYPE_CHECKING:  # pragma: no cover - import for type checkers only
    from ..services.settings_manager import SettingsManager


def _build_metadata_sync_service(settings_manager: "SettingsManager") -> MetadataSyncService:
    """Construct a metadata sync service bound to the provided settings."""

    return MetadataSyncService(
        metadata_manager=MetadataManager,
        preview_service=_preview_service,
        settings=settings_manager,
        default_metadata_provider_factory=get_default_metadata_provider,
        metadata_provider_selector=get_metadata_provider,
    )


def _get_metadata_sync_service() -> MetadataSyncService:
    """Return the shared metadata sync service, initialising it lazily."""

    global _metadata_sync_service, _metadata_sync_service_settings

    settings_manager = get_settings_manager()

    if isinstance(_metadata_sync_service, MetadataSyncService):
        if _metadata_sync_service_settings is not settings_manager:
            _metadata_sync_service = _build_metadata_sync_service(settings_manager)
        _metadata_sync_service_settings = settings_manager
    elif _metadata_sync_service is None:
        _metadata_sync_service = _build_metadata_sync_service(settings_manager)
        _metadata_sync_service_settings = settings_manager
    else:
        # Tests may inject stand-ins that do not match the sync service type. Preserve
        # those injections while still updating our cached settings reference so the
        # next real service instantiation uses the current configuration.
        _metadata_sync_service_settings = settings_manager

    return _metadata_sync_service


class MetadataUpdater:
@@ -71,7 +105,8 @@ class MetadataUpdater:
        async def update_cache_func(old_path, new_path, metadata):
            return await scanner.update_single_model_cache(old_path, new_path, metadata)

        success, error = await _metadata_sync_service.fetch_and_update_model(
        await MetadataManager.hydrate_model_data(model_data)
        success, error = await _get_metadata_sync_service().fetch_and_update_model(
            sha256=model_hash,
            file_path=file_path,
            model_data=model_data,

@@ -151,16 +186,16 @@ class MetadataUpdater:
                if is_supported:
                    local_images_paths.append(file_path)

        await MetadataManager.hydrate_model_data(model)
        civitai_data = model.setdefault('civitai', {})

        # Check if metadata update is needed (no civitai field or empty images)
        needs_update = not model.get('civitai') or not model.get('civitai', {}).get('images')
        needs_update = not civitai_data or not civitai_data.get('images')

        if needs_update and local_images_paths:
            logger.debug(f"Found {len(local_images_paths)} local example images for {model.get('model_name')}, updating metadata")

            # Create or get civitai field
            if not model.get('civitai'):
                model['civitai'] = {}

            # Create images array
            images = []

@@ -195,16 +230,13 @@ class MetadataUpdater:
                images.append(image_entry)

            # Update the model's civitai.images field
            model['civitai']['images'] = images
            civitai_data['images'] = images

            # Save metadata to .metadata.json file
            file_path = model.get('file_path')
            try:
                # Create a copy of model data without 'folder' field
                model_copy = model.copy()
                model_copy.pop('folder', None)

                # Write metadata to file
                await MetadataManager.save_metadata(file_path, model_copy)
                logger.info(f"Saved metadata for {model.get('model_name')}")
            except Exception as e:

@@ -237,16 +269,18 @@ class MetadataUpdater:
            tuple: (regular_images, custom_images) - Both image arrays
        """
        try:
            # Ensure civitai field exists in model_data
            if not model_data.get('civitai'):
                model_data['civitai'] = {}

            # Ensure customImages array exists
            if not model_data['civitai'].get('customImages'):
                model_data['civitai']['customImages'] = []

            # Get current customImages array
            custom_images = model_data['civitai']['customImages']
            await MetadataManager.hydrate_model_data(model_data)
            civitai_data = model_data.get('civitai')

            if not isinstance(civitai_data, dict):
                civitai_data = {}
                model_data['civitai'] = civitai_data

            custom_images = civitai_data.get('customImages')

            if not isinstance(custom_images, list):
                custom_images = []
                civitai_data['customImages'] = custom_images

            # Add new image entry for each imported file
            for path_tuple in newly_imported_paths:

@@ -304,11 +338,8 @@ class MetadataUpdater:
            file_path = model_data.get('file_path')
            if file_path:
                try:
                    # Create a copy of model data without 'folder' field
                    model_copy = model_data.copy()
                    model_copy.pop('folder', None)

                    # Write metadata to file
                    await MetadataManager.save_metadata(file_path, model_copy)
                    logger.info(f"Saved metadata for {model_data.get('model_name')}")
                except Exception as e:

@@ -319,7 +350,7 @@ class MetadataUpdater:
            await scanner.update_single_model_cache(file_path, file_path, model_data)

            # Get regular images array (might be None)
            regular_images = model_data['civitai'].get('images', [])
            regular_images = civitai_data.get('images', [])

            # Return both image arrays
            return regular_images, custom_images

@@ -420,4 +451,4 @@ class MetadataUpdater:

        except Exception as e:
            logger.error(f"Error parsing image metadata: {e}", exc_info=True)
            return None
            return None
@@ -3,7 +3,7 @@ import logging
import os
import re
import json
from ..services.settings_manager import settings
from ..services.settings_manager import get_settings_manager
from ..services.service_registry import ServiceRegistry
from ..utils.example_images_paths import iter_library_roots
from ..utils.metadata_manager import MetadataManager

@@ -14,6 +14,25 @@ logger = logging.getLogger(__name__)

CURRENT_NAMING_VERSION = 2  # Increment this when naming conventions change


class _SettingsProxy:
    def __init__(self):
        self._manager = None

    def _resolve(self):
        if self._manager is None:
            self._manager = get_settings_manager()
        return self._manager

    def get(self, *args, **kwargs):
        return self._resolve().get(*args, **kwargs)

    def __getattr__(self, item):
        return getattr(self._resolve(), item)


settings = _SettingsProxy()
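The `_SettingsProxy` above defers resolving the real settings manager until first use, so the module keeps its `settings` name without binding it at import time. A minimal standalone sketch of the same lazy-delegation pattern (the `FakeSettings` stub and factory below are illustrative stand-ins, not the project's real `get_settings_manager`):

```python
class LazyProxy:
    """Delegate attribute access to an object resolved on first use."""

    def __init__(self, factory):
        self._factory = factory
        self._target = None

    def _resolve(self):
        if self._target is None:
            self._target = self._factory()
        return self._target

    def __getattr__(self, item):
        # Only invoked for attributes not found on the proxy itself.
        return getattr(self._resolve(), item)


class FakeSettings:
    """Stand-in for a settings manager (assumption for illustration)."""

    def get(self, key, default=None):
        return {"example_images_path": "/tmp/examples"}.get(key, default)


created = []

def make_settings():
    # Factory that records how many times the target was built.
    created.append(FakeSettings())
    return created[-1]

settings = LazyProxy(make_settings)
print(len(created))                         # 0 - nothing resolved at definition time
print(settings.get("example_images_path"))  # /tmp/examples
print(len(created))                         # 1 - resolved exactly once
settings.get("missing")
print(len(created))                         # 1 - subsequent calls reuse the target
```

The design choice matters for module import order: consumers can import `settings` freely, and the concrete manager is only constructed when the first attribute is actually used.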

class ExampleImagesMigration:
    """Handles migrations for example images naming conventions"""
@@ -8,7 +8,7 @@ import re
import shutil
from typing import Iterable, List, Optional, Tuple

from ..services.settings_manager import settings
from ..services.settings_manager import get_settings_manager

_HEX_PATTERN = re.compile(r"[a-fA-F0-9]{64}")

@@ -18,7 +18,8 @@ logger = logging.getLogger(__name__)
def _get_configured_libraries() -> List[str]:
    """Return configured library names if multi-library support is enabled."""

    libraries = settings.get("libraries")
    settings_manager = get_settings_manager()
    libraries = settings_manager.get("libraries")
    if isinstance(libraries, dict) and libraries:
        return list(libraries.keys())
    return []

@@ -27,7 +28,8 @@ def _get_configured_libraries() -> List[str]:
def get_example_images_root() -> str:
    """Return the root directory configured for example images."""

    root = settings.get("example_images_path") or ""
    settings_manager = get_settings_manager()
    root = settings_manager.get("example_images_path") or ""
    return os.path.abspath(root) if root else ""


@@ -41,7 +43,8 @@ def uses_library_scoped_folders() -> bool:
def sanitize_library_name(library_name: Optional[str]) -> str:
    """Return a filesystem safe library name."""

    name = library_name or settings.get_active_library_name() or "default"
    settings_manager = get_settings_manager()
    name = library_name or settings_manager.get_active_library_name() or "default"
    safe_name = re.sub(r"[^A-Za-z0-9_.-]", "_", name)
    return safe_name or "default"

@@ -161,11 +164,13 @@ def iter_library_roots() -> Iterable[Tuple[str, str]]:
            results.append((library, get_library_root(library)))
        else:
            # Fall back to the active library to avoid skipping migrations/cleanup
            active = settings.get_active_library_name() or "default"
            settings_manager = get_settings_manager()
            active = settings_manager.get_active_library_name() or "default"
            results.append((active, get_library_root(active)))
        return results

    active = settings.get_active_library_name() or "default"
    settings_manager = get_settings_manager()
    active = settings_manager.get_active_library_name() or "default"
    return [(active, root)]
@@ -6,7 +6,7 @@ import string
from aiohttp import web
from ..utils.constants import SUPPORTED_MEDIA_EXTENSIONS
from ..services.service_registry import ServiceRegistry
from ..services.settings_manager import settings
from ..services.settings_manager import get_settings_manager
from ..utils.example_images_paths import get_model_folder, get_model_relative_path
from .example_images_metadata import MetadataUpdater
from ..utils.metadata_manager import MetadataManager

@@ -318,7 +318,7 @@ class ExampleImagesProcessor:

        try:
            # Get example images path
            example_images_path = settings.get('example_images_path')
            example_images_path = get_settings_manager().get('example_images_path')
            if not example_images_path:
                raise ExampleImagesValidationError('No example images path configured')

@@ -442,7 +442,7 @@ class ExampleImagesProcessor:
            }, status=400)

        # Get example images path
        example_images_path = settings.get('example_images_path')
        example_images_path = get_settings_manager().get('example_images_path')
        if not example_images_path:
            return web.json_response({
                'success': False,

@@ -475,15 +475,17 @@ class ExampleImagesProcessor:
                'error': f"Model with hash {model_hash} not found in cache"
            }, status=404)

        # Check if model has custom images
        if not model_data.get('civitai', {}).get('customImages'):
        await MetadataManager.hydrate_model_data(model_data)
        civitai_data = model_data.setdefault('civitai', {})
        custom_images = civitai_data.get('customImages')

        if not isinstance(custom_images, list) or not custom_images:
            return web.json_response({
                'success': False,
                'error': f"Model has no custom images"
            }, status=404)

        # Find the custom image with matching short_id
        custom_images = model_data['civitai']['customImages']
        matching_image = None
        new_custom_images = []

@@ -527,17 +529,15 @@ class ExampleImagesProcessor:
            logger.warning(f"File for custom example with id {short_id} not found, but metadata will still be updated")

        # Update metadata
        model_data['civitai']['customImages'] = new_custom_images
        civitai_data['customImages'] = new_custom_images
        model_data.setdefault('civitai', {})['customImages'] = new_custom_images

        # Save updated metadata to file
        file_path = model_data.get('file_path')
        if file_path:
            try:
                # Create a copy of model data without 'folder' field
                model_copy = model_data.copy()
                model_copy.pop('folder', None)

                # Write metadata to file
                await MetadataManager.save_metadata(file_path, model_copy)
                logger.debug(f"Saved updated metadata for {model_data.get('model_name')}")
            except Exception as e:

@@ -551,7 +551,7 @@ class ExampleImagesProcessor:
        await scanner.update_single_model_cache(file_path, file_path, model_data)

        # Get regular images array (might be None)
        regular_images = model_data['civitai'].get('images', [])
        regular_images = civitai_data.get('images', [])

        return web.json_response({
            'success': True,

@@ -568,4 +568,4 @@ class ExampleImagesProcessor:
        }, status=500)
@@ -2,7 +2,7 @@ from datetime import datetime
import os
import json
import logging
from typing import Dict, Optional, Type, Union
from typing import Any, Dict, Optional, Type, Union

from .models import BaseModelMetadata, LoraMetadata
from .file_utils import normalize_path, find_preview_file, calculate_sha256

@@ -53,6 +53,70 @@ class MetadataManager:
            error_type = "Invalid JSON" if isinstance(e, json.JSONDecodeError) else "Parse error"
            logger.error(f"{error_type} in metadata file: {metadata_path}. Error: {str(e)}. Skipping model to preserve existing data.")
            return None, True  # should_skip = True

    @staticmethod
    async def load_metadata_payload(file_path: str) -> Dict:
        """
        Load metadata and return it as a dictionary, including any unknown fields.
        Falls back to reading the raw JSON file if parsing into a model class fails.
        """

        payload: Dict = {}
        metadata_obj, should_skip = await MetadataManager.load_metadata(file_path)

        if metadata_obj:
            payload = metadata_obj.to_dict()
            unknown_fields = getattr(metadata_obj, "_unknown_fields", None)
            if isinstance(unknown_fields, dict):
                payload.update(unknown_fields)
        else:
            if not should_skip:
                metadata_path = (
                    file_path
                    if file_path.endswith(".metadata.json")
                    else f"{os.path.splitext(file_path)[0]}.metadata.json"
                )
                if os.path.exists(metadata_path):
                    try:
                        with open(metadata_path, "r", encoding="utf-8") as handle:
                            raw = json.load(handle)
                        if isinstance(raw, dict):
                            payload = raw
                    except json.JSONDecodeError:
                        logger.warning(
                            "Failed to parse metadata file %s while loading payload",
                            metadata_path,
                        )
                    except Exception as exc:  # pragma: no cover - defensive logging
                        logger.warning("Failed to read metadata file %s: %s", metadata_path, exc)

        if not isinstance(payload, dict):
            payload = {}

        if file_path:
            payload.setdefault("file_path", normalize_path(file_path))

        return payload

    @staticmethod
    async def hydrate_model_data(model_data: Dict[str, Any]) -> Dict[str, Any]:
        """
        Replace the provided model data with the authoritative payload from disk.
        Preserves the cached folder entry if present.
        """

        file_path = model_data.get("file_path")
        if not file_path:
            return model_data

        folder = model_data.get("folder")
        payload = await MetadataManager.load_metadata_payload(file_path)
        if folder is not None:
            payload["folder"] = folder

        model_data.clear()
        model_data.update(payload)
        return model_data
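The `hydrate_model_data` helper above treats the on-disk `.metadata.json` as the source of truth: it replaces the cached dict in place (so existing references stay valid) while keeping the cache-only `folder` key. A simplified synchronous sketch of that replace-in-place behaviour (the function and file names here are illustrative, not the project's API):

```python
import json
import os
import tempfile

def hydrate(model_data, metadata_path):
    # Load the authoritative payload from disk, preserving the cached 'folder'.
    with open(metadata_path, "r", encoding="utf-8") as handle:
        payload = json.load(handle)
    folder = model_data.get("folder")
    if folder is not None:
        payload["folder"] = folder
    model_data.clear()   # mutate in place so every holder of this dict sees the update
    model_data.update(payload)
    return model_data

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "model.metadata.json")
    with open(path, "w", encoding="utf-8") as handle:
        json.dump({"model_name": "demo", "civitai": {"images": []}}, handle)

    cached = {"model_name": "stale", "folder": "loras/characters"}
    alias = cached  # second reference, e.g. held by a scanner cache
    hydrate(cached, path)

print(cached["model_name"])  # demo - stale value replaced from disk
print(cached["folder"])      # loras/characters - cache-only key preserved
print(alias is cached)       # True - in-place mutation keeps references valid
```

Mutating in place rather than returning a fresh dict is the key design choice: any scanner cache that already holds the dict observes the refreshed metadata without re-fetching.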
    @staticmethod
    async def save_metadata(path: str, metadata: Union[BaseModelMetadata, Dict]) -> bool:
@@ -18,18 +18,22 @@ class BaseModelMetadata:
     preview_nsfw_level: int = 0  # NSFW level of the preview image
     notes: str = ""  # Additional notes
     from_civitai: bool = True  # Whether from Civitai
-    civitai: Optional[Dict] = None  # Civitai API data if available
+    civitai: Dict[str, Any] = field(default_factory=dict)  # Civitai API data if available
     tags: List[str] = None  # Model tags
     modelDescription: str = ""  # Full model description
     civitai_deleted: bool = False  # Whether deleted from Civitai
     favorite: bool = False  # Whether the model is a favorite
     exclude: bool = False  # Whether to exclude this model from the cache
     db_checked: bool = False  # Whether checked in archive DB
+    metadata_source: Optional[str] = None  # Last provider that supplied metadata
+    last_checked_at: float = 0  # Last checked timestamp
+    _unknown_fields: Dict[str, Any] = field(default_factory=dict, repr=False, compare=False)  # Store unknown fields

     def __post_init__(self):
         # Initialize empty lists to avoid mutable default parameter issue
         if self.civitai is None:
             self.civitai = {}

         if self.tags is None:
             self.tags = []
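The switch from `Optional[Dict] = None` to `field(default_factory=dict)` gives each instance its own empty dict up front, instead of patching `None` in `__post_init__`. A quick illustration of the semantics (toy dataclass, not the real `BaseModelMetadata`):

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class Meta:
    # default_factory builds a fresh dict per instance; a plain `= {}` default
    # would be rejected by dataclasses as a shared mutable default.
    civitai: Dict[str, Any] = field(default_factory=dict)

a, b = Meta(), Meta()
a.civitai["id"] = 1
print(b.civitai)  # b keeps its own independent empty dict -> {}
```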
@@ -65,6 +65,12 @@ def ensure_settings_file(logger: Optional[logging.Logger] = None) -> str:
     logger = logger or _LOGGER
     target_path = get_settings_file_path(create_dir=True)
+    preferred_dir = user_config_dir(APP_NAME, appauthor=False)
+    preferred_path = os.path.join(preferred_dir, "settings.json")
+
+    if os.path.abspath(target_path) != os.path.abspath(preferred_path):
+        os.makedirs(preferred_dir, exist_ok=True)
+        target_path = preferred_path
     legacy_path = get_legacy_settings_path()

     if os.path.exists(legacy_path) and not os.path.exists(target_path):
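A standalone sketch of the redirection logic added above: compare absolute paths, create the preferred directory, and switch the target. Temp-dir paths stand in for the real `platformdirs` lookup:

```python
import os
import tempfile

def pick_settings_path(target_path, preferred_dir):
    """Redirect settings to the preferred config dir when the paths differ
    (mirrors the comparison-and-makedirs step in ensure_settings_file)."""
    preferred_path = os.path.join(preferred_dir, "settings.json")
    if os.path.abspath(target_path) != os.path.abspath(preferred_path):
        os.makedirs(preferred_dir, exist_ok=True)  # ensure the preferred dir exists
        return preferred_path
    return target_path

base = tempfile.mkdtemp()
legacy = os.path.join(base, "old", "settings.json")
chosen = pick_settings_path(legacy, os.path.join(base, "config"))
print(os.path.basename(chosen))  # settings.json, now under config/
```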
104
py/utils/tag_priorities.py
Normal file
@@ -0,0 +1,104 @@
"""Helpers for parsing and resolving priority tag configurations."""

from __future__ import annotations

from dataclasses import dataclass
from typing import Dict, Iterable, List, Optional, Sequence, Set


@dataclass(frozen=True)
class PriorityTagEntry:
    """A parsed priority tag configuration entry."""

    canonical: str
    aliases: Set[str]

    @property
    def normalized_aliases(self) -> Set[str]:
        return {alias.lower() for alias in self.aliases}


def _normalize_alias(alias: str) -> str:
    return alias.strip()


def parse_priority_tag_string(config: str | None) -> List[PriorityTagEntry]:
    """Parse the user-facing priority tag string into structured entries."""

    if not config:
        return []

    entries: List[PriorityTagEntry] = []
    seen_canonicals: Set[str] = set()

    for raw_entry in _split_priority_entries(config):
        canonical, aliases = _parse_priority_entry(raw_entry)
        if not canonical:
            continue

        normalized_canonical = canonical.lower()
        if normalized_canonical in seen_canonicals:
            # Skip duplicate canonicals while preserving first occurrence priority
            continue
        seen_canonicals.add(normalized_canonical)

        alias_set = {canonical, *aliases}
        cleaned_aliases = {_normalize_alias(alias) for alias in alias_set if _normalize_alias(alias)}
        if not cleaned_aliases:
            continue

        entries.append(PriorityTagEntry(canonical=canonical, aliases=cleaned_aliases))

    return entries


def _split_priority_entries(config: str) -> List[str]:
    # Split on commas while respecting that users may add new lines for readability
    parts = []
    for chunk in config.split('\n'):
        parts.extend(chunk.split(','))
    return [part.strip() for part in parts if part.strip()]


def _parse_priority_entry(entry: str) -> tuple[str, Set[str]]:
    if '(' in entry and entry.endswith(')'):
        canonical, raw_aliases = entry.split('(', 1)
        canonical = canonical.strip()
        alias_section = raw_aliases[:-1]  # drop trailing ')'
        aliases = {alias.strip() for alias in alias_section.split('|') if alias.strip()}
        return canonical, aliases

    if '(' in entry and not entry.endswith(')'):
        # Malformed entry; treat as literal canonical to avoid surprises
        entry = entry.replace('(', '').replace(')', '')

    canonical = entry.strip()
    return canonical, set()


def resolve_priority_tag(
    tags: Sequence[str] | Iterable[str],
    entries: Sequence[PriorityTagEntry],
) -> Optional[str]:
    """Resolve the first matching canonical priority tag for the provided tags."""

    tag_lookup: Dict[str, str] = {}
    for tag in tags:
        if not isinstance(tag, str):
            continue
        normalized = tag.lower()
        if normalized not in tag_lookup:
            tag_lookup[normalized] = tag

    for entry in entries:
        for alias in entry.normalized_aliases:
            if alias in tag_lookup:
                return entry.canonical

    return None


def collect_canonical_tags(entries: Iterable[PriorityTagEntry]) -> List[str]:
    """Return the ordered list of canonical tags from the parsed entries."""

    return [entry.canonical for entry in entries]
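How these helpers compose end to end: a `canonical (alias1|alias2)` config string is parsed into ordered entries, and the first entry whose alias set intersects a model's tags wins. The sketch below reimplements the same logic inline so it runs standalone (the sample config string is a hypothetical user setting, not from the repo):

```python
# Condensed, self-contained version of parse_priority_tag_string /
# resolve_priority_tag from the new file above, for illustration only.
def parse_entries(config):
    """Parse 'canonical (alias1|alias2), other' into (canonical, lowercased alias set) pairs."""
    parts = []
    for chunk in config.split('\n'):       # newlines and commas both separate entries
        parts.extend(chunk.split(','))
    entries, seen = [], set()
    for raw in (p.strip() for p in parts if p.strip()):
        if '(' in raw and raw.endswith(')'):
            canonical, alias_section = raw.split('(', 1)
            canonical = canonical.strip()
            aliases = {a.strip() for a in alias_section[:-1].split('|') if a.strip()}
        else:
            canonical, aliases = raw.replace('(', '').replace(')', '').strip(), set()
        if not canonical or canonical.lower() in seen:  # first occurrence wins
            continue
        seen.add(canonical.lower())
        entries.append((canonical, {canonical.lower(), *(a.lower() for a in aliases)}))
    return entries

def resolve(tags, entries):
    """Return the first canonical whose alias set intersects the model's tags."""
    lookup = {t.lower() for t in tags if isinstance(t, str)}
    for canonical, aliases in entries:
        if aliases & lookup:
            return canonical
    return None

entries = parse_entries("style (artist style|styles)\ncharacter, style")  # duplicate 'style' skipped
print(resolve(["Styles", "concept"], entries))  # alias 'styles' matches -> style
```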
@@ -3,8 +3,7 @@ import os
 from typing import Dict
 from ..services.service_registry import ServiceRegistry
 from ..config import config
-from ..services.settings_manager import settings
-from .constants import CIVITAI_MODEL_TAGS
+from ..services.settings_manager import get_settings_manager
 import asyncio

 def get_lora_info(lora_name):
@@ -143,7 +142,8 @@ def calculate_relative_path_for_model(model_data: Dict, model_type: str = 'lora'
         Relative path string (empty string for flat structure)
     """
     # Get path template from settings for specific model type
-    path_template = settings.get_download_path_template(model_type)
+    settings_manager = get_settings_manager()
+    path_template = settings_manager.get_download_path_template(model_type)

     # If template is empty, return empty path (flat structure)
     if not path_template:
@@ -166,19 +166,10 @@ def calculate_relative_path_for_model(model_data: Dict, model_type: str = 'lora'
     model_tags = model_data.get('tags', [])

     # Apply mapping if available
-    base_model_mappings = settings.get('base_model_path_mappings', {})
+    base_model_mappings = settings_manager.get('base_model_path_mappings', {})
     mapped_base_model = base_model_mappings.get(base_model, base_model)

-    # Find the first Civitai model tag that exists in model_tags
-    first_tag = ''
-    for civitai_tag in CIVITAI_MODEL_TAGS:
-        if civitai_tag in model_tags:
-            first_tag = civitai_tag
-            break
-
-    # If no Civitai model tag found, fallback to first tag
-    if not first_tag and model_tags:
-        first_tag = model_tags[0]
+    first_tag = settings_manager.resolve_priority_tag_for_model(model_tags, model_type)

     if not first_tag:
         first_tag = 'no tags'  # Default if no tags available
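The values computed here (mapped base model, resolved first tag with a `'no tags'` fallback) feed the download path template. A hedged sketch of that substitution, using a hypothetical `{base_model}/{first_tag}` template string; the real template keys are not shown in this hunk:

```python
def build_relative_path(template, base_model, tags, mappings, resolve_tag):
    # Mirrors the hunk's flow: map the base model, resolve a priority tag,
    # fall back to 'no tags', then fill the template (keys are assumptions).
    mapped = mappings.get(base_model, base_model)
    first_tag = resolve_tag(tags) or 'no tags'
    return template.format(base_model=mapped, first_tag=first_tag)

path = build_relative_path(
    '{base_model}/{first_tag}',
    'Illustrious',
    ['style', 'art'],
    {'Illustrious': 'IL'},            # hypothetical base_model_path_mappings
    lambda tags: tags[0] if tags else None,  # stand-in for priority resolution
)
print(path)  # IL/style
```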
@@ -1,7 +1,7 @@
 [project]
 name = "comfyui-lora-manager"
 description = "Revolutionize your workflow with the ultimate LoRA companion for ComfyUI!"
-version = "0.9.6"
+version = "0.9.8"
 license = {file = "LICENSE"}
 dependencies = [
     "aiohttp",
134
refs/civarc_api_model_data.json
Normal file
@@ -0,0 +1,134 @@
{
  "id": 1746460,
  "name": "Mixplin Style [Illustrious]",
  "type": "LORA",
  "description": "description",
  "username": "Ty_Lee",
  "downloadCount": 4207,
  "favoriteCount": 0,
  "commentCount": 8,
  "ratingCount": 0,
  "rating": 0,
  "is_nsfw": true,
  "nsfw_level": 31,
  "createdAt": "2025-07-06T01:51:42.859Z",
  "updatedAt": "2025-10-10T23:15:26.714Z",
  "deletedAt": null,
  "tags": [
    "art",
    "style",
    "artist style",
    "styles",
    "mixplin",
    "artiststyle"
  ],
  "creator_id": "Ty_Lee",
  "creator_username": "Ty_Lee",
  "creator_name": "Ty_Lee",
  "creator_url": "/users/Ty_Lee",
  "versions": [
    {
      "id": 2042594,
      "name": "v2.0",
      "href": "/models/1746460?modelVersionId=2042594"
    },
    {
      "id": 1976567,
      "name": "v1.0",
      "href": "/models/1746460?modelVersionId=1976567"
    }
  ],
  "version": {
    "id": 1976567,
    "modelId": 1746460,
    "name": "v1.0",
    "baseModel": "Illustrious",
    "baseModelType": "Standard",
    "description": null,
    "downloadCount": 437,
    "ratingCount": 0,
    "rating": 0,
    "is_nsfw": true,
    "nsfw_level": 31,
    "createdAt": "2025-07-05T10:17:28.716Z",
    "updatedAt": "2025-10-10T23:15:26.756Z",
    "deletedAt": null,
    "files": [
      {
        "id": 1874043,
        "name": "mxpln-illustrious-ty_lee.safetensors",
        "type": "Model",
        "sizeKB": 223124.37109375,
        "downloadUrl": "https://civitai.com/api/download/models/1976567",
        "modelId": 1746460,
        "modelName": "Mixplin Style [Illustrious]",
        "modelVersionId": 1976567,
        "is_nsfw": true,
        "nsfw_level": 31,
        "sha256": "e2b7a280d6539556f23f380b3f71e4e22bc4524445c4c96526e117c6005c6ad3",
        "createdAt": "2025-07-05T10:17:28.716Z",
        "updatedAt": "2025-10-10T23:15:26.766Z",
        "is_primary": false,
        "mirrors": [
          {
            "filename": "mxpln-illustrious-ty_lee.safetensors",
            "url": "https://civitai.com/api/download/models/1976567",
            "source": "civitai",
            "model_id": 1746460,
            "model_version_id": 1976567,
            "deletedAt": null,
            "is_gated": false,
            "is_paid": false
          }
        ]
      }
    ],
    "images": [
      {
        "id": 86403595,
        "url": "https://img.genur.art/sig/width:450/quality:85/aHR0cHM6Ly9jLmdlbnVyLmFydC9hNmE3Njc2YS0wMWQ3LTQ1YzAtOWEzYS1mNWJiYTU4MDNiMDE=",
        "nsfwLevel": 1,
        "width": 1560,
        "height": 2280,
        "hash": "U7G8Zp0w02%IA6%N00-;D]-W~VNG0nMw-.IV",
        "type": "image",
        "minor": false,
        "poi": false,
        "hasMeta": true,
        "hasPositivePrompt": true,
        "onSite": false,
        "remixOfId": null,
        "image_url": "https://img.genur.art/sig/width:450/quality:85/aHR0cHM6Ly9jLmdlbnVyLmFydC9hNmE3Njc2YS0wMWQ3LTQ1YzAtOWEzYS1mNWJiYTU4MDNiMDE=",
        "link": "https://genur.art/posts/86403595"
      }
    ],
    "trigger": [
      "mxpln"
    ],
    "allow_download": true,
    "download_url": "/api/download/models/1976567",
    "platform_url": "https://civitai.com/models/1746460?modelVersionId=1976567",
    "civitai_model_id": 1746460,
    "civitai_model_version_id": 1976567,
    "href": "/models/1746460?modelVersionId=1976567",
    "mirrors": [
      {
        "platform": "tensorart",
        "href": "/tensorart/models/904473536033245448/versions/904473536033245448",
        "platform_url": "https://tensor.art/models/904473536033245448",
        "name": "Mixplin Style MXP",
        "version_name": "Mixplin",
        "id": "904473536033245448",
        "version_id": "904473536033245448"
      }
    ]
  },
  "platform": "civitai",
  "platform_name": "CivitAI",
  "meta": {
    "title": "Mixplin Style [Illustrious] - v1.0 - CivitAI Archive",
    "description": "Mixplin Style [Illustrious] v1.0 is a Illustrious LORA AI model created by Ty_Lee for generating images of art, style, artist style, styles, mixplin, artiststyle",
    "image": "https://img.genur.art/sig/width:450/quality:85/aHR0cHM6Ly9jLmdlbnVyLmFydC9hNmE3Njc2YS0wMWQ3LTQ1YzAtOWEzYS1mNWJiYTU4MDNiMDE=",
    "canonical": "https://civarchive.com/models/1746460?modelVersionId=1976567"
  }
}
38
refs/target_version.json
Normal file
@@ -0,0 +1,38 @@
{
  "id": 2269146,
  "modelId": 2004760,
  "name": "v1.0 Illustrious",
  "nsfwLevel": 1,
  "trainedWords": ["PencilSketchDaal"],
  "baseModel": "Illustrious",
  "description": "<p>Illustrious. Your pencil may vary with your checkpoint. </p>",
  "model": {
    "name": "Pencil Sketch Anime",
    "type": "LORA",
    "nsfw": false,
    "description": "description",
    "tags": ["style"]
  },
  "files": [
    {
      "id": 2161260,
      "sizeKB": 223106.37890625,
      "name": "Pencil-Sketch-Illustrious.safetensors",
      "type": "Model",
      "hashes": {
        "SHA256": "2C70479CD673B0FE056EAF4FD97C7F33A39F14853805431AC9AB84226ECE3B82"
      },
      "primary": true,
      "downloadUrl": "https://civitai.com/api/download/models/2269146",
      "mirrors": {}
    }
  ],
  "images": [
    {},
    {}
  ],
  "creator": {
    "username": "Daalis",
    "image": "https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/eb245b49-edc8-4ed6-ad7b-6d61eb8c51de/width=96/Daalis.jpeg"
  }
}
@@ -103,6 +103,23 @@
     opacity: 0.7;
 }

+.download-transfer-stats {
+    margin-top: var(--space-2);
+    font-size: 0.85rem;
+    color: var(--text-color);
+    display: flex;
+    flex-direction: column;
+    gap: var(--space-1);
+}
+
+.download-transfer-stats .download-transfer-bytes,
+.download-transfer-stats .download-transfer-speed {
+    display: flex;
+    align-items: center;
+    gap: var(--space-1);
+    white-space: nowrap;
+}
+
 @keyframes spin {
     0% { transform: rotate(0deg); }
     100% { transform: rotate(360deg); }
@@ -114,4 +131,4 @@
     .current-item-bar {
         transition: none;
     }
-}
+}
@@ -35,34 +35,38 @@
     margin: 0;
 }

-.settings-open-location-button {
+.settings-action-link {
     display: inline-flex;
     align-items: center;
     justify-content: center;
     width: 28px;
     height: 28px;
-    border: none;
-    border-radius: var(--border-radius-xs);
+    border: 1px solid transparent;
     background: none;
     color: var(--text-color);
     opacity: 0.6;
     cursor: pointer;
+    border-radius: var(--border-radius-xs);
-    transition: opacity 0.2s ease, background-color 0.2s ease;
+    text-decoration: none;
+    line-height: 1;
+    transition: opacity 0.2s ease, background-color 0.2s ease, color 0.2s ease, border-color 0.2s ease;
 }

-.settings-open-location-button:hover,
-.settings-open-location-button:focus-visible {
+.settings-action-link:hover,
+.settings-action-link:focus-visible {
     opacity: 1;
+    color: var(--lora-accent);
     background-color: rgba(var(--border-color-rgb, 148, 163, 184), 0.2);
+    border-color: rgba(var(--border-color-rgb, 148, 163, 184), 0.4);
     outline: none;
 }

-.settings-open-location-button i {
+.settings-action-link i {
     font-size: 1em;
 }

-.settings-open-location-button:focus-visible {
-    box-shadow: 0 0 0 2px rgba(var(--border-color-rgb, 148, 163, 184), 0.6);
+.settings-action-link:focus-visible {
+    box-shadow: 0 0 0 2px rgba(var(--lora-accent-rgb, 79, 70, 229), 0.2);
 }

 /* Settings Links */
@@ -204,6 +208,141 @@
     width: 100%; /* Full width */
 }

+.settings-help-text {
+    font-size: 0.9em;
+    color: var(--text-color);
+    opacity: 0.8;
+    margin-bottom: var(--space-2);
+    line-height: 1.4;
+}
+
+.settings-help-text.subtle {
+    font-size: 0.85em;
+    opacity: 0.7;
+    margin-top: var(--space-1);
+}
+
+.priority-tags-input {
+    width: 97%;
+    min-height: 72px;
+    padding: 8px;
+    border-radius: var(--border-radius-xs);
+    border: 1px solid var(--border-color);
+    background-color: var(--lora-surface);
+    color: var(--text-color);
+    resize: vertical;
+}
+
+.priority-tags-input:focus {
+    border-color: var(--lora-accent);
+    outline: none;
+    box-shadow: 0 0 0 2px rgba(var(--lora-accent-rgb, 79, 70, 229), 0.1);
+}
+
+.priority-tags-item {
+    gap: var(--space-2);
+}
+
+.priority-tags-header {
+    align-items: center;
+}
+
+.priority-tags-actions {
+    display: flex;
+    align-items: center;
+    justify-content: flex-end;
+}
+
+.priority-tags-example {
+    font-size: 0.85em;
+    opacity: 0.8;
+    margin-top: var(--space-1);
+}
+
+.priority-tags-example code {
+    font-family: var(--code-font, ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, 'Liberation Mono', 'Courier New', monospace);
+    background-color: rgba(var(--lora-accent-rgb, 79, 70, 229), 0.12);
+    padding: 2px 6px;
+    border-radius: var(--border-radius-xs);
+    display: inline-block;
+}
+
+.priority-tags-tabs {
+    position: relative;
+}
+
+.priority-tags-tab-input {
+    position: absolute;
+    opacity: 0;
+    pointer-events: none;
+}
+
+.priority-tags-tablist {
+    display: flex;
+    gap: var(--space-1);
+    border-bottom: 1px solid var(--border-color);
+    padding-bottom: var(--space-1);
+}
+
+.priority-tags-tab-label {
+    flex: 1;
+    text-align: center;
+    padding: var(--space-1) var(--space-2);
+    border: none;
+    border-bottom: 2px solid transparent;
+    color: var(--text-color);
+    cursor: pointer;
+    transition: all 0.2s ease;
+    opacity: 0.7;
+}
+
+.priority-tags-tab-label:hover,
+.priority-tags-tab-label:focus {
+    opacity: 1;
+    color: var(--lora-accent);
+    background: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.05);
+}
+
+.priority-tags-panels {
+    margin-top: var(--space-2);
+}
+
+.priority-tags-panel {
+    display: none;
+}
+
+#priority-tags-tab-lora:checked ~ .priority-tags-tablist label[for="priority-tags-tab-lora"],
+#priority-tags-tab-checkpoint:checked ~ .priority-tags-tablist label[for="priority-tags-tab-checkpoint"],
+#priority-tags-tab-embedding:checked ~ .priority-tags-tablist label[for="priority-tags-tab-embedding"] {
+    border-bottom-color: var(--lora-accent);
+    color: var(--lora-accent);
+    opacity: 1;
+    font-weight: 600;
+}
+
+#priority-tags-tab-lora:checked ~ .priority-tags-panels #priority-tags-panel-lora,
+#priority-tags-tab-checkpoint:checked ~ .priority-tags-panels #priority-tags-panel-checkpoint,
+#priority-tags-tab-embedding:checked ~ .priority-tags-panels #priority-tags-panel-embedding {
+    display: block;
+}
+
+.priority-tags-input.settings-input-error {
+    border-color: var(--danger-color, #dc2626);
+    box-shadow: 0 0 0 2px rgba(220, 38, 38, 0.12);
+}
+
+.settings-input-error-message {
+    font-size: 0.8em;
+    color: var(--danger-color, #dc2626);
+    display: none;
+}
+
+.metadata-suggestions-loading {
+    font-size: 0.85em;
+    opacity: 0.7;
+    padding: 6px 0;
+}
+
 /* Settings Styles */
 .settings-section {
     margin-top: var(--space-3);
@@ -670,4 +809,4 @@ input:checked + .toggle-slider:before {
     padding-top: var(--space-2);
     margin-top: var(--space-2);
 }
-}
+}
@@ -92,6 +92,39 @@
     border-radius: var(--border-radius-xs);
     padding: 4px 8px;
     position: relative;
+    cursor: grab;
+    transition: transform 0.18s ease;
 }
+
+.metadata-item:active {
+    cursor: grabbing;
+}
+
+.metadata-item-dragging {
+    box-shadow: 0 10px 24px rgba(0, 0, 0, 0.25);
+    cursor: grabbing;
+    opacity: 0.95;
+    transition: none;
+}
+
+.metadata-item-placeholder {
+    border: 1px dashed var(--lora-accent);
+    border-radius: var(--border-radius-xs);
+    background: rgba(255, 255, 255, 0.1);
+    pointer-events: none;
+}
+
+.metadata-items-sorting .metadata-item {
+    transition: transform 0.18s ease;
+}
+
+body.metadata-drag-active {
+    user-select: none;
+    cursor: grabbing;
+}
+
+body.metadata-drag-active * {
+    cursor: grabbing !important;
+}

 .metadata-item-content {
@@ -184,6 +184,19 @@
     font-weight: 500;
 }

+.sidebar-tree-node-content.drop-target,
+.sidebar-node-content.drop-target {
+    background: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.15);
+    color: var(--lora-accent);
+    border-left-color: var(--lora-accent);
+}
+
+.sidebar-tree-node-content.drop-target .sidebar-tree-folder-icon,
+.sidebar-node-content.drop-target .sidebar-folder-icon {
+    color: var(--lora-accent);
+    opacity: 1;
+}
+
 .sidebar-tree-expand-icon {
     width: 16px;
     height: 16px;
@@ -569,9 +569,15 @@ export class BaseModelApiClient {
         }
     }

-    async fetchCivitaiVersions(modelId) {
+    async fetchCivitaiVersions(modelId, source = null) {
         try {
-            const response = await fetch(`${this.apiConfig.endpoints.civitaiVersions}/${modelId}`);
+            let requestUrl = `${this.apiConfig.endpoints.civitaiVersions}/${modelId}`;
+            if (source) {
+                const params = new URLSearchParams({ source });
+                requestUrl = `${requestUrl}?${params.toString()}`;
+            }
+
+            const response = await fetch(requestUrl);
             if (!response.ok) {
                 const errorData = await response.json().catch(() => ({}));
                 if (errorData && errorData.error && errorData.error.includes('Model type mismatch')) {
@@ -639,7 +645,7 @@ export class BaseModelApiClient {
         }
     }

-    async downloadModel(modelId, versionId, modelRoot, relativePath, useDefaultPaths = false, downloadId) {
+    async downloadModel(modelId, versionId, modelRoot, relativePath, useDefaultPaths = false, downloadId, source = null) {
         try {
             const response = await fetch(DOWNLOAD_ENDPOINTS.download, {
                 method: 'POST',
@@ -650,7 +656,8 @@ export class BaseModelApiClient {
                     model_root: modelRoot,
                     relative_path: relativePath,
                     use_default_paths: useDefaultPaths,
-                    download_id: downloadId
+                    download_id: downloadId,
+                    ...(source ? { source } : {})
                 })
             });
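The change above appends an optional `source` query parameter only when one is supplied. The same build-the-URL-conditionally pattern, sketched in Python for a script that talks to the endpoint (the endpoint path is illustrative, not verified against the server routes):

```python
from urllib.parse import urlencode

def build_versions_url(base, model_id, source=None):
    # Mirror of the JS change: append ?source=... only when provided.
    # "/api/civitai/versions" below is an assumed endpoint for illustration.
    url = f"{base}/{model_id}"
    if source:
        url = f"{url}?{urlencode({'source': source})}"
    return url

print(build_versions_url("/api/civitai/versions", 1746460))
print(build_versions_url("/api/civitai/versions", 1746460, source="civarchive"))
```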
@@ -1,17 +1,17 @@
 import { LoraApiClient } from './loraApi.js';
 import { CheckpointApiClient } from './checkpointApi.js';
 import { EmbeddingApiClient } from './embeddingApi.js';
-import { MODEL_TYPES } from './apiConfig.js';
+import { MODEL_TYPES, isValidModelType } from './apiConfig.js';
 import { state } from '../state/index.js';

 export function createModelApiClient(modelType) {
     switch (modelType) {
         case MODEL_TYPES.LORA:
-            return new LoraApiClient();
+            return new LoraApiClient(MODEL_TYPES.LORA);
         case MODEL_TYPES.CHECKPOINT:
-            return new CheckpointApiClient();
+            return new CheckpointApiClient(MODEL_TYPES.CHECKPOINT);
         case MODEL_TYPES.EMBEDDING:
-            return new EmbeddingApiClient();
+            return new EmbeddingApiClient(MODEL_TYPES.EMBEDDING);
         default:
             throw new Error(`Unsupported model type: ${modelType}`);
     }
@@ -20,7 +20,13 @@ export function createModelApiClient(modelType) {
 let _singletonClients = new Map();

 export function getModelApiClient(modelType = null) {
-    const targetType = modelType || state.currentPageType;
+    let targetType = modelType;
+
+    if (!isValidModelType(targetType)) {
+        targetType = isValidModelType(state.currentPageType)
+            ? state.currentPageType
+            : MODEL_TYPES.LORA;
+    }

     if (!_singletonClients.has(targetType)) {
         _singletonClients.set(targetType, createModelApiClient(targetType));
@@ -32,4 +38,4 @@ export function getModelApiClient(modelType = null) {
 export function resetAndReload(updateFolders = false) {
     const client = getModelApiClient();
     return client.loadMoreWithVirtualScroll(true, updateFolders);
-}
+}
@@ -4,6 +4,9 @@
 import { getStorageItem, setStorageItem } from '../utils/storageHelpers.js';
 import { getModelApiClient } from '../api/modelApiFactory.js';
 import { translate } from '../utils/i18nHelpers.js';
+import { state } from '../state/index.js';
+import { bulkManager } from '../managers/BulkManager.js';
+import { showToast } from '../utils/uiHelpers.js';

 export class SidebarManager {
     constructor() {
@@ -21,7 +24,14 @@ export class SidebarManager {
         this.isInitialized = false;
         this.displayMode = 'tree'; // 'tree' or 'list'
         this.foldersList = [];
-
+        this.recursiveSearchEnabled = true;
+        this.draggedFilePaths = null;
+        this.draggedRootPath = null;
+        this.draggedFromBulk = false;
+        this.dragHandlersInitialized = false;
+        this.folderTreeElement = null;
+        this.currentDropTarget = null;
+
         // Bind methods
         this.handleTreeClick = this.handleTreeClick.bind(this);
         this.handleBreadcrumbClick = this.handleBreadcrumbClick.bind(this);
@@ -36,6 +46,13 @@ export class SidebarManager {
         this.updateContainerMargin = this.updateContainerMargin.bind(this);
         this.handleDisplayModeToggle = this.handleDisplayModeToggle.bind(this);
         this.handleFolderListClick = this.handleFolderListClick.bind(this);
+        this.handleRecursiveToggle = this.handleRecursiveToggle.bind(this);
+        this.handleCardDragStart = this.handleCardDragStart.bind(this);
+        this.handleCardDragEnd = this.handleCardDragEnd.bind(this);
+        this.handleFolderDragEnter = this.handleFolderDragEnter.bind(this);
+        this.handleFolderDragOver = this.handleFolderDragOver.bind(this);
+        this.handleFolderDragLeave = this.handleFolderDragLeave.bind(this);
+        this.handleFolderDrop = this.handleFolderDrop.bind(this);
     }

     async initialize(pageControls) {
@@ -52,6 +69,7 @@ export class SidebarManager {
         this.setInitialSidebarState();

         this.setupEventHandlers();
+        this.initializeDragAndDrop();
         this.updateSidebarTitle();
         this.restoreSidebarState();
         await this.loadFolderTree();
@@ -78,7 +96,22 @@ export class SidebarManager {

         // Clean up event handlers
         this.removeEventHandlers();
-
+
+        this.clearAllDropHighlights();
+        if (this.dragHandlersInitialized) {
+            document.removeEventListener('dragstart', this.handleCardDragStart);
+            document.removeEventListener('dragend', this.handleCardDragEnd);
+            this.dragHandlersInitialized = false;
+        }
+        if (this.folderTreeElement) {
+            this.folderTreeElement.removeEventListener('dragenter', this.handleFolderDragEnter);
+            this.folderTreeElement.removeEventListener('dragover', this.handleFolderDragOver);
+            this.folderTreeElement.removeEventListener('dragleave', this.handleFolderDragLeave);
+            this.folderTreeElement.removeEventListener('drop', this.handleFolderDrop);
+            this.folderTreeElement = null;
+        }
+        this.resetDragState();
+
         // Reset state
         this.pageControls = null;
         this.pageType = null;
@@ -89,6 +122,7 @@ export class SidebarManager {
         this.isHovering = false;
         this.apiClient = null;
         this.isInitialized = false;
+        this.recursiveSearchEnabled = true;

         // Reset container margin
         const container = document.querySelector('.container');
@@ -111,6 +145,7 @@ export class SidebarManager {
         const sidebar = document.getElementById('folderSidebar');
         const hoverArea = document.getElementById('sidebarHoverArea');
         const displayModeToggleBtn = document.getElementById('sidebarDisplayModeToggle');
+        const recursiveToggleBtn = document.getElementById('sidebarRecursiveToggle');

         if (pinToggleBtn) {
             pinToggleBtn.removeEventListener('click', this.handlePinToggle);
@@ -145,6 +180,274 @@ export class SidebarManager {
         if (displayModeToggleBtn) {
             displayModeToggleBtn.removeEventListener('click', this.handleDisplayModeToggle);
         }
+        if (recursiveToggleBtn) {
+            recursiveToggleBtn.removeEventListener('click', this.handleRecursiveToggle);
+        }
     }

+    initializeDragAndDrop() {
+        if (!this.dragHandlersInitialized) {
+            document.addEventListener('dragstart', this.handleCardDragStart);
+            document.addEventListener('dragend', this.handleCardDragEnd);
+            this.dragHandlersInitialized = true;
+        }
+
+        const folderTree = document.getElementById('sidebarFolderTree');
+        if (folderTree && this.folderTreeElement !== folderTree) {
+            if (this.folderTreeElement) {
+                this.folderTreeElement.removeEventListener('dragenter', this.handleFolderDragEnter);
+                this.folderTreeElement.removeEventListener('dragover', this.handleFolderDragOver);
+                this.folderTreeElement.removeEventListener('dragleave', this.handleFolderDragLeave);
+                this.folderTreeElement.removeEventListener('drop', this.handleFolderDrop);
+            }
+
+            folderTree.addEventListener('dragenter', this.handleFolderDragEnter);
+            folderTree.addEventListener('dragover', this.handleFolderDragOver);
+            folderTree.addEventListener('dragleave', this.handleFolderDragLeave);
+            folderTree.addEventListener('drop', this.handleFolderDrop);
+
+            this.folderTreeElement = folderTree;
+        }
+    }
+
+    handleCardDragStart(event) {
+        const card = event.target.closest('.model-card');
+        if (!card) return;
+
+        const filePath = card.dataset.filepath;
+        if (!filePath) return;
+
+        const selectedSet = state.selectedModels instanceof Set
+            ? state.selectedModels
+            : new Set(state.selectedModels || []);
+        const cardIsSelected = card.classList.contains('selected');
+        const usingBulkSelection = Boolean(state.bulkMode && cardIsSelected && selectedSet && selectedSet.size > 0);
+
+        const paths = usingBulkSelection ? Array.from(selectedSet) : [filePath];
+        const filePaths = Array.from(new Set(paths.filter(Boolean)));
+
+        if (filePaths.length === 0) {
+            return;
+        }
+
+        this.draggedFilePaths = filePaths;
+        this.draggedRootPath = this.getRootPathFromCard(card);
+        this.draggedFromBulk = usingBulkSelection;
+
+        const dataTransfer = event.dataTransfer;
+        if (dataTransfer) {
+            dataTransfer.effectAllowed = 'move';
+            dataTransfer.setData('text/plain', filePaths.join(','));
+            try {
+                dataTransfer.setData('application/json', JSON.stringify({ filePaths }));
+            } catch (error) {
+                // Ignore serialization errors
+            }
+        }
+
+        card.classList.add('dragging');
+    }
+
+    handleCardDragEnd(event) {
+        const card = event.target.closest('.model-card');
+        if (card) {
+            card.classList.remove('dragging');
+        }
+        this.clearAllDropHighlights();
+        this.resetDragState();
+    }
+
+    getRootPathFromCard(card) {
+        if (!card) return null;
+
+        const filePathRaw = card.dataset.filepath || '';
+        const normalizedFilePath = filePathRaw.replace(/\\/g, '/');
+        const lastSlashIndex = normalizedFilePath.lastIndexOf('/');
+        if (lastSlashIndex === -1) {
+            return null;
+        }
+
+        const directory = normalizedFilePath.substring(0, lastSlashIndex);
+        let folderValue = card.dataset.folder;
+        if (!folderValue || folderValue === 'undefined') {
+            folderValue = '';
+        }
+        const normalizedFolder = folderValue.replace(/\\/g, '/').replace(/^\/+|\/+$/g, '');
+
+        if (!normalizedFolder) {
+            return directory;
+        }
+
+        const suffix = `/${normalizedFolder}`;
+        if (directory.endsWith(suffix)) {
+            return directory.slice(0, -suffix.length);
+        }
+
+        return directory;
+    }
+
+    combineRootAndRelativePath(root, relative) {
+        const normalizedRoot = (root || '').replace(/\\/g, '/').replace(/\/+$/g, '');
+        const normalizedRelative = (relative || '').replace(/\\/g, '/').replace(/^\/+|\/+$/g, '');
+
+        if (!normalizedRoot) {
+            return normalizedRelative;
+        }
+
+        if (!normalizedRelative) {
+            return normalizedRoot;
+        }
+
+        return `${normalizedRoot}/${normalizedRelative}`;
+    }
+
+    getFolderElementFromEvent(event) {
+        const folderTree = this.folderTreeElement || document.getElementById('sidebarFolderTree');
+        if (!folderTree) return null;
+
+        const target = event.target instanceof Element ? event.target.closest('[data-path]') : null;
+        if (!target || !folderTree.contains(target)) {
+            return null;
+        }
+
+        return target;
+    }
+
+    setDropTargetHighlight(element, shouldAdd) {
|
||||
if (!element) return;
|
||||
|
||||
let targetElement = element;
|
||||
if (!targetElement.classList.contains('sidebar-tree-node-content') &&
|
||||
!targetElement.classList.contains('sidebar-node-content')) {
|
||||
targetElement = element.querySelector('.sidebar-tree-node-content, .sidebar-node-content');
|
||||
}
|
||||
|
||||
if (targetElement) {
|
||||
targetElement.classList.toggle('drop-target', shouldAdd);
|
||||
}
|
||||
}
|
||||
|
||||
handleFolderDragEnter(event) {
|
||||
if (!this.draggedFilePaths || this.draggedFilePaths.length === 0) return;
|
||||
|
||||
const folderElement = this.getFolderElementFromEvent(event);
|
||||
if (!folderElement) return;
|
||||
|
||||
event.preventDefault();
|
||||
|
||||
if (event.dataTransfer) {
|
||||
event.dataTransfer.dropEffect = 'move';
|
||||
}
|
||||
|
||||
this.setDropTargetHighlight(folderElement, true);
|
||||
this.currentDropTarget = folderElement;
|
||||
}
|
||||
|
||||
handleFolderDragOver(event) {
|
||||
if (!this.draggedFilePaths || this.draggedFilePaths.length === 0) return;
|
||||
|
||||
const folderElement = this.getFolderElementFromEvent(event);
|
||||
if (!folderElement) return;
|
||||
|
||||
event.preventDefault();
|
||||
|
||||
if (event.dataTransfer) {
|
||||
event.dataTransfer.dropEffect = 'move';
|
||||
}
|
||||
}
|
||||
|
||||
handleFolderDragLeave(event) {
|
||||
if (!this.draggedFilePaths || this.draggedFilePaths.length === 0) return;
|
||||
|
||||
const folderElement = this.getFolderElementFromEvent(event);
|
||||
if (!folderElement) return;
|
||||
|
||||
const relatedTarget = event.relatedTarget instanceof Element ? event.relatedTarget : null;
|
||||
if (!relatedTarget || !folderElement.contains(relatedTarget)) {
|
||||
this.setDropTargetHighlight(folderElement, false);
|
||||
if (this.currentDropTarget === folderElement) {
|
||||
this.currentDropTarget = null;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
async handleFolderDrop(event) {
|
||||
if (!this.draggedFilePaths || this.draggedFilePaths.length === 0) return;
|
||||
|
||||
const folderElement = this.getFolderElementFromEvent(event);
|
||||
if (!folderElement) return;
|
||||
|
||||
event.preventDefault();
|
||||
event.stopPropagation();
|
||||
|
||||
this.setDropTargetHighlight(folderElement, false);
|
||||
this.currentDropTarget = null;
|
||||
|
||||
const targetPath = folderElement.dataset.path || '';
|
||||
|
||||
await this.performDragMove(targetPath);
|
||||
|
||||
this.resetDragState();
|
||||
this.clearAllDropHighlights();
|
||||
}
|
||||
|
||||
async performDragMove(targetRelativePath) {
|
||||
if (!this.draggedFilePaths || this.draggedFilePaths.length === 0) {
|
||||
return false;
|
||||
}
|
||||
|
||||
if (!this.apiClient) {
|
||||
this.apiClient = getModelApiClient();
|
||||
}
|
||||
|
||||
const rootPath = this.draggedRootPath ? this.draggedRootPath.replace(/\\/g, '/') : '';
|
||||
if (!rootPath) {
|
||||
showToast(
|
||||
'toast.models.moveFailed',
|
||||
{ message: translate('sidebar.dragDrop.unableToResolveRoot', {}, 'Unable to determine destination path for move.') },
|
||||
'error'
|
||||
);
|
||||
return false;
|
||||
}
|
||||
|
||||
const destination = this.combineRootAndRelativePath(rootPath, targetRelativePath);
|
||||
const useBulkMove = this.draggedFromBulk || this.draggedFilePaths.length > 1;
|
||||
|
||||
try {
|
||||
if (useBulkMove) {
|
||||
await this.apiClient.moveBulkModels(this.draggedFilePaths, destination);
|
||||
} else {
|
||||
await this.apiClient.moveSingleModel(this.draggedFilePaths[0], destination);
|
||||
}
|
||||
|
||||
if (this.pageControls && typeof this.pageControls.resetAndReload === 'function') {
|
||||
await this.pageControls.resetAndReload(true);
|
||||
} else {
|
||||
await this.refresh();
|
||||
}
|
||||
|
||||
if (this.draggedFromBulk && state.bulkMode && typeof bulkManager?.toggleBulkMode === 'function') {
|
||||
bulkManager.toggleBulkMode();
|
||||
}
|
||||
|
||||
return true;
|
||||
} catch (error) {
|
||||
console.error('Error moving model(s) via drag-and-drop:', error);
|
||||
showToast('toast.models.moveFailed', { message: error.message || 'Unknown error' }, 'error');
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
resetDragState() {
|
||||
this.draggedFilePaths = null;
|
||||
this.draggedRootPath = null;
|
||||
this.draggedFromBulk = false;
|
||||
}
|
||||
|
||||
clearAllDropHighlights() {
|
||||
const highlighted = document.querySelectorAll('.sidebar-tree-node-content.drop-target, .sidebar-node-content.drop-target');
|
||||
highlighted.forEach((element) => element.classList.remove('drop-target'));
|
||||
this.currentDropTarget = null;
|
||||
}
|
||||
|
||||
async init() {
|
||||
@@ -154,6 +457,7 @@ export class SidebarManager {
|
||||
this.setInitialSidebarState();
|
||||
|
||||
this.setupEventHandlers();
|
||||
this.initializeDragAndDrop();
|
||||
this.updateSidebarTitle();
|
||||
this.restoreSidebarState();
|
||||
await this.loadFolderTree();
|
||||
@@ -197,7 +501,7 @@ export class SidebarManager {
|
||||
updateSidebarTitle() {
|
||||
const sidebarTitle = document.getElementById('sidebarTitle');
|
||||
if (sidebarTitle) {
|
||||
sidebarTitle.textContent = `${this.apiClient.apiConfig.config.displayName} Root`;
|
||||
sidebarTitle.textContent = translate('sidebar.modelRoot');
|
||||
}
|
||||
}
|
||||
|
||||
@@ -220,6 +524,12 @@ export class SidebarManager {
|
||||
collapseAllBtn.addEventListener('click', this.handleCollapseAll);
|
||||
}
|
||||
|
||||
// Recursive toggle button
|
||||
const recursiveToggleBtn = document.getElementById('sidebarRecursiveToggle');
|
||||
if (recursiveToggleBtn) {
|
||||
recursiveToggleBtn.addEventListener('click', this.handleRecursiveToggle);
|
||||
}
|
||||
|
||||
// Tree click handler
|
||||
const folderTree = document.getElementById('sidebarFolderTree');
|
||||
if (folderTree) {
|
||||
@@ -451,6 +761,7 @@ export class SidebarManager {
|
||||
} else {
|
||||
this.renderFolderList();
|
||||
}
|
||||
this.initializeDragAndDrop();
|
||||
}
|
||||
|
||||
renderTree() {
|
||||
@@ -477,7 +788,7 @@ export class SidebarManager {
|
||||
|
||||
return `
|
||||
<div class="sidebar-tree-node" data-path="${currentPath}">
|
||||
<div class="sidebar-tree-node-content ${isSelected ? 'selected' : ''}">
|
||||
<div class="sidebar-tree-node-content ${isSelected ? 'selected' : ''}" data-path="${currentPath}">
|
||||
<div class="sidebar-tree-expand-icon ${isExpanded ? 'expanded' : ''}"
|
||||
style="${hasChildren ? '' : 'opacity: 0; pointer-events: none;'}">
|
||||
<i class="fas fa-chevron-right"></i>
|
||||
@@ -522,7 +833,7 @@ export class SidebarManager {
|
||||
|
||||
return `
|
||||
<div class="sidebar-folder-item ${isSelected ? 'selected' : ''}" data-path="${folder}">
|
||||
<div class="sidebar-node-content">
|
||||
<div class="sidebar-node-content" data-path="${folder}">
|
||||
<i class="fas fa-folder sidebar-folder-icon"></i>
|
||||
<div class="sidebar-folder-name" title="${displayName}">${displayName}</div>
|
||||
</div>
|
||||
@@ -645,11 +956,33 @@ export class SidebarManager {
|
||||
this.displayMode = this.displayMode === 'tree' ? 'list' : 'tree';
|
||||
this.updateDisplayModeButton();
|
||||
this.updateCollapseAllButton();
|
||||
this.updateRecursiveToggleButton();
|
||||
this.updateSearchRecursiveOption();
|
||||
this.saveDisplayMode();
|
||||
this.loadFolderTree(); // Reload with new display mode
|
||||
}
|
||||
|
||||
async handleRecursiveToggle(event) {
|
||||
event.stopPropagation();
|
||||
|
||||
if (this.displayMode !== 'tree') {
|
||||
return;
|
||||
}
|
||||
|
||||
this.recursiveSearchEnabled = !this.recursiveSearchEnabled;
|
||||
setStorageItem(`${this.pageType}_recursiveSearch`, this.recursiveSearchEnabled);
|
||||
this.updateSearchRecursiveOption();
|
||||
this.updateRecursiveToggleButton();
|
||||
|
||||
if (this.pageControls && typeof this.pageControls.resetAndReload === 'function') {
|
||||
try {
|
||||
await this.pageControls.resetAndReload(true);
|
||||
} catch (error) {
|
||||
console.error('Failed to reload models after toggling recursive search:', error);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
updateDisplayModeButton() {
|
||||
const displayModeBtn = document.getElementById('sidebarDisplayModeToggle');
|
||||
if (displayModeBtn) {
|
||||
@@ -679,8 +1012,35 @@ export class SidebarManager {
|
||||
}
|
||||
}
|
||||
|
||||
updateRecursiveToggleButton() {
|
||||
const recursiveToggleBtn = document.getElementById('sidebarRecursiveToggle');
|
||||
if (!recursiveToggleBtn) return;
|
||||
|
||||
const icon = recursiveToggleBtn.querySelector('i');
|
||||
const isTreeMode = this.displayMode === 'tree';
|
||||
const isActive = isTreeMode && this.recursiveSearchEnabled;
|
||||
|
||||
recursiveToggleBtn.classList.toggle('active', isActive);
|
||||
recursiveToggleBtn.classList.toggle('disabled', !isTreeMode);
|
||||
recursiveToggleBtn.setAttribute('aria-pressed', isActive ? 'true' : 'false');
|
||||
recursiveToggleBtn.setAttribute('aria-disabled', isTreeMode ? 'false' : 'true');
|
||||
|
||||
if (icon) {
|
||||
icon.className = 'fas fa-code-branch';
|
||||
}
|
||||
|
||||
if (!isTreeMode) {
|
||||
recursiveToggleBtn.title = translate('sidebar.recursiveUnavailable');
|
||||
} else if (this.recursiveSearchEnabled) {
|
||||
recursiveToggleBtn.title = translate('sidebar.recursiveOn');
|
||||
} else {
|
||||
recursiveToggleBtn.title = translate('sidebar.recursiveOff');
|
||||
}
|
||||
}
|
||||
|
||||
updateSearchRecursiveOption() {
|
||||
this.pageControls.pageState.searchOptions.recursive = this.displayMode === 'tree';
|
||||
const isRecursive = this.displayMode === 'tree' && this.recursiveSearchEnabled;
|
||||
this.pageControls.pageState.searchOptions.recursive = isRecursive;
|
||||
}
|
||||
|
||||
updateTreeSelection() {
|
||||
@@ -925,15 +1285,18 @@ export class SidebarManager {
|
||||
const isPinned = getStorageItem(`${this.pageType}_sidebarPinned`, true);
|
||||
const expandedPaths = getStorageItem(`${this.pageType}_expandedNodes`, []);
|
||||
const displayMode = getStorageItem(`${this.pageType}_displayMode`, 'tree'); // 'tree' or 'list', default to 'tree'
|
||||
const recursiveSearchEnabled = getStorageItem(`${this.pageType}_recursiveSearch`, true);
|
||||
|
||||
this.isPinned = isPinned;
|
||||
this.expandedNodes = new Set(expandedPaths);
|
||||
this.displayMode = displayMode;
|
||||
this.recursiveSearchEnabled = recursiveSearchEnabled;
|
||||
|
||||
this.updatePinButton();
|
||||
this.updateDisplayModeButton();
|
||||
this.updateCollapseAllButton();
|
||||
this.updateSearchRecursiveOption();
|
||||
this.updateRecursiveToggleButton();
|
||||
}
|
||||
|
||||
restoreSelectedFolder() {
|
||||
@@ -974,4 +1337,4 @@ export class SidebarManager {
|
||||
}
|
||||
|
||||
// Create and export global instance
|
||||
export const sidebarManager = new SidebarManager();
|
||||
export const sidebarManager = new SidebarManager();
|
||||
|
||||
@@ -11,6 +11,17 @@ import { showDeleteModal } from '../../utils/modalUtils.js';
import { translate } from '../../utils/i18nHelpers.js';
import { eventManager } from '../../utils/EventManager.js';

// Helper function to get display name based on settings
function getDisplayName(model) {
    const displayNameSetting = state.global.settings.model_name_display || 'model_name';

    if (displayNameSetting === 'file_name') {
        return model.file_name || model.model_name || 'Unknown Model';
    }

    return model.model_name || model.file_name || 'Unknown Model';
}

// Add global event delegation handlers using event manager
export function setupModelCardEventDelegation(modelType) {
    // Remove any existing handler first
@@ -364,11 +375,12 @@ function showExampleAccessModal(card, modelType) {
export function createModelCard(model, modelType) {
    const card = document.createElement('div');
    card.className = 'model-card'; // Reuse the same class for styling
    card.draggable = true;
    card.dataset.sha256 = model.sha256;
    card.dataset.filepath = model.file_path;
    card.dataset.name = model.model_name;
    card.dataset.file_name = model.file_name;
    card.dataset.folder = model.folder;
    card.dataset.folder = model.folder || '';
    card.dataset.modified = model.modified;
    card.dataset.file_size = model.file_size;
    card.dataset.from_civitai = model.from_civitai;
@@ -509,7 +521,7 @@ export function createModelCard(model, modelType) {
        ` : ''}
        <div class="card-footer">
            <div class="model-info">
                <span class="model-name">${model.model_name}</span>
                <span class="model-name">${getDisplayName(model)}</span>
                ${model.civitai?.name ? `<span class="version-name">${model.civitai.name}</span>` : ''}
            </div>
            <div class="card-actions">

@@ -236,7 +236,7 @@ export async function showModelModal(model, modelType) {
    setupShowcaseScroll(modalId);
    setupTabSwitching();
    setupTagTooltip();
    setupTagEditMode();
    setupTagEditMode(modelType);
    setupModelNameEditing(modelWithFullData.file_path);
    setupBaseModelEditing(modelWithFullData.file_path);
    setupFileNameEditing(modelWithFullData.file_path);
@@ -480,4 +480,4 @@ const modelModal = {
    scrollToTop
};

export { modelModal };
export { modelModal };

@@ -4,7 +4,136 @@
 */
import { showToast } from '../../utils/uiHelpers.js';
import { getModelApiClient } from '../../api/modelApiFactory.js';
import { PRESET_TAGS } from '../../utils/constants.js';
import { translate } from '../../utils/i18nHelpers.js';
import { getPriorityTagSuggestions } from '../../utils/priorityTagHelpers.js';
import { state } from '../../state/index.js';

const MODEL_TYPE_SUGGESTION_KEY_MAP = {
    loras: 'lora',
    lora: 'lora',
    checkpoints: 'checkpoint',
    checkpoint: 'checkpoint',
    embeddings: 'embedding',
    embedding: 'embedding',
};
const METADATA_ITEM_SELECTOR = '.metadata-item';
const METADATA_ITEMS_CONTAINER_SELECTOR = '.metadata-items';
const METADATA_ITEM_DRAGGING_CLASS = 'metadata-item-dragging';
const METADATA_ITEM_PLACEHOLDER_CLASS = 'metadata-item-placeholder';
const METADATA_ITEMS_SORTING_CLASS = 'metadata-items-sorting';
const BODY_DRAGGING_CLASS = 'metadata-drag-active';

let activeModelTypeKey = '';
let priorityTagSuggestions = [];
let priorityTagSuggestionsLoaded = false;
let priorityTagSuggestionsPromise = null;
let activeTagDragState = null;

function normalizeModelTypeKey(modelType) {
    if (!modelType) {
        return '';
    }
    const lower = String(modelType).toLowerCase();
    if (MODEL_TYPE_SUGGESTION_KEY_MAP[lower]) {
        return MODEL_TYPE_SUGGESTION_KEY_MAP[lower];
    }
    if (lower.endsWith('s')) {
        return lower.slice(0, -1);
    }
    return lower;
}

function resolveModelTypeKey(modelType = null) {
    if (modelType) {
        return normalizeModelTypeKey(modelType);
    }
    if (activeModelTypeKey) {
        return activeModelTypeKey;
    }
    if (state?.currentPageType) {
        return normalizeModelTypeKey(state.currentPageType);
    }
    return '';
}

function resetSuggestionState() {
    priorityTagSuggestions = [];
    priorityTagSuggestionsLoaded = false;
    priorityTagSuggestionsPromise = null;
}

function setActiveModelTypeKey(modelType = null) {
    const resolvedKey = resolveModelTypeKey(modelType);
    if (resolvedKey === activeModelTypeKey) {
        return activeModelTypeKey;
    }
    activeModelTypeKey = resolvedKey;
    resetSuggestionState();
    return activeModelTypeKey;
}

function ensurePriorityTagSuggestions(modelType = null) {
    if (modelType !== null && modelType !== undefined) {
        setActiveModelTypeKey(modelType);
    } else if (!activeModelTypeKey) {
        setActiveModelTypeKey();
    }

    if (!activeModelTypeKey) {
        resetSuggestionState();
        priorityTagSuggestionsLoaded = true;
        return Promise.resolve([]);
    }

    if (priorityTagSuggestionsLoaded && !priorityTagSuggestionsPromise) {
        return Promise.resolve(priorityTagSuggestions);
    }

    if (!priorityTagSuggestionsPromise) {
        const requestKey = activeModelTypeKey;
        priorityTagSuggestionsPromise = getPriorityTagSuggestions(requestKey)
            .then((tags) => {
                if (activeModelTypeKey === requestKey) {
                    priorityTagSuggestions = tags;
                    priorityTagSuggestionsLoaded = true;
                }
                return tags;
            })
            .catch(() => {
                if (activeModelTypeKey === requestKey) {
                    priorityTagSuggestions = [];
                    priorityTagSuggestionsLoaded = true;
                }
                return [];
            })
            .finally(() => {
                if (activeModelTypeKey === requestKey) {
                    priorityTagSuggestionsPromise = null;
                }
            });
    }

    return priorityTagSuggestionsPromise;
}

activeModelTypeKey = resolveModelTypeKey();

if (activeModelTypeKey) {
    ensurePriorityTagSuggestions();
}

window.addEventListener('lm:priority-tags-updated', () => {
    if (!activeModelTypeKey) {
        return;
    }
    resetSuggestionState();
    ensurePriorityTagSuggestions().then(() => {
        document.querySelectorAll('.metadata-edit-container .metadata-suggestions-container').forEach((container) => {
            renderPriorityTagSuggestions(container, getCurrentEditTags());
        });
        updateSuggestionsDropdown();
    });
});

// Create a named function so we can remove it later
let saveTagsHandler = null;
@@ -12,9 +141,12 @@ let saveTagsHandler = null;
/**
 * Set up tag editing mode
 */
export function setupTagEditMode() {
export function setupTagEditMode(modelType = null) {
    const editBtn = document.querySelector('.edit-tags-btn');
    if (!editBtn) return;

    setActiveModelTypeKey(modelType);
    ensurePriorityTagSuggestions();

    // Store original tags for restoring on cancel
    let originalTags = [];
@@ -70,6 +202,7 @@ export function setupTagEditMode() {

    // Setup delete buttons for existing tags
    setupDeleteButtons();
    setupTagDragAndDrop();

    // Transfer click event from original button to the cloned one
    const newEditBtn = editContainer.querySelector('.metadata-header-btn');
@@ -85,6 +218,7 @@ export function setupTagEditMode() {
            // Just show the existing edit container
            tagsEditContainer.style.display = 'block';
            editBtn.style.display = 'none';
            setupTagDragAndDrop();
        }
    } else {
        // Exit edit mode
@@ -260,7 +394,7 @@ function createTagEditUI(currentTags, editBtnHTML = '') {
function createSuggestionsDropdown(existingTags = []) {
    const dropdown = document.createElement('div');
    dropdown.className = 'metadata-suggestions-dropdown';

    // Create header
    const header = document.createElement('div');
    header.className = 'metadata-suggestions-header';
@@ -273,11 +407,33 @@ function createSuggestionsDropdown(existingTags = []) {
    // Create tag container
    const container = document.createElement('div');
    container.className = 'metadata-suggestions-container';

    // Add each preset tag as a suggestion
    PRESET_TAGS.forEach(tag => {
    if (priorityTagSuggestionsLoaded && !priorityTagSuggestionsPromise) {
        renderPriorityTagSuggestions(container, existingTags);
    } else {
        container.innerHTML = `<div class="metadata-suggestions-loading">${translate('settings.priorityTags.loadingSuggestions', 'Loading suggestions…')}</div>`;
        ensurePriorityTagSuggestions().then(() => {
            if (!container.isConnected) {
                return;
            }
            renderPriorityTagSuggestions(container, getCurrentEditTags());
            updateSuggestionsDropdown();
        }).catch(() => {
            if (container.isConnected) {
                container.innerHTML = '';
            }
        });
    }

    dropdown.appendChild(container);
    return dropdown;
}

function renderPriorityTagSuggestions(container, existingTags = []) {
    container.innerHTML = '';

    priorityTagSuggestions.forEach((tag) => {
        const isAdded = existingTags.includes(tag);

        const item = document.createElement('div');
        item.className = `metadata-suggestion-item ${isAdded ? 'already-added' : ''}`;
        item.title = tag;
@@ -285,28 +441,21 @@ function createSuggestionsDropdown(existingTags = []) {
            <span class="metadata-suggestion-text">${tag}</span>
            ${isAdded ? '<span class="added-indicator"><i class="fas fa-check"></i></span>' : ''}
        `;

        if (!isAdded) {
            item.addEventListener('click', () => {
                addNewTag(tag);

                // Also populate the input field for potential editing
                const input = document.querySelector('.metadata-input');
                if (input) input.value = tag;

                // Focus on the input
                if (input) input.focus();

                // Update dropdown without removing it
                updateSuggestionsDropdown();
            });
        }

        container.appendChild(item);
    });

    dropdown.appendChild(container);
    return dropdown;
}

/**
@@ -342,6 +491,213 @@ function setupDeleteButtons() {
    });
}

/**
 * Enable drag-and-drop sorting for tag items
 */
function setupTagDragAndDrop() {
    const container = document.querySelector(METADATA_ITEMS_CONTAINER_SELECTOR);
    if (!container) {
        return;
    }

    container.querySelectorAll(METADATA_ITEM_SELECTOR).forEach((item) => {
        item.removeAttribute('draggable');
        if (item.classList.contains(METADATA_ITEM_PLACEHOLDER_CLASS)) {
            return;
        }
        if (item.dataset.pointerDragInit === 'true') {
            return;
        }

        item.addEventListener('pointerdown', handleTagPointerDown);
        item.dataset.pointerDragInit = 'true';
    });
}

function handleTagPointerDown(event) {
    if (event.button !== 0) {
        return;
    }

    if (event.target.closest('.metadata-delete-btn')) {
        return;
    }

    const item = event.currentTarget;
    const container = item?.closest(METADATA_ITEMS_CONTAINER_SELECTOR);
    if (!item || !container) {
        return;
    }

    event.preventDefault();
    startPointerDrag({ item, container, startEvent: event });
}

function startPointerDrag({ item, container, startEvent }) {
    if (activeTagDragState) {
        finishPointerDrag();
    }

    const itemRect = item.getBoundingClientRect();
    const placeholder = document.createElement('div');
    placeholder.className = `metadata-item ${METADATA_ITEM_PLACEHOLDER_CLASS}`;
    placeholder.style.width = `${itemRect.width}px`;
    placeholder.style.height = `${itemRect.height}px`;

    container.insertBefore(placeholder, item);

    item.classList.add(METADATA_ITEM_DRAGGING_CLASS);
    item.style.width = `${itemRect.width}px`;
    item.style.height = `${itemRect.height}px`;
    item.style.position = 'fixed';
    item.style.left = `${itemRect.left}px`;
    item.style.top = `${itemRect.top}px`;
    item.style.pointerEvents = 'none';
    item.style.zIndex = '1000';

    container.classList.add(METADATA_ITEMS_SORTING_CLASS);
    if (document.body) {
        document.body.classList.add(BODY_DRAGGING_CLASS);
    }

    const dragState = {
        container,
        item,
        placeholder,
        offsetX: startEvent.clientX - itemRect.left,
        offsetY: startEvent.clientY - itemRect.top,
        lastKnownPointer: { x: startEvent.clientX, y: startEvent.clientY },
        rafId: null,
    };

    activeTagDragState = dragState;

    document.addEventListener('pointermove', handlePointerMove);
    document.addEventListener('pointerup', handlePointerUp);
    document.addEventListener('pointercancel', handlePointerUp);
}

function handlePointerMove(event) {
    if (!activeTagDragState) {
        return;
    }

    activeTagDragState.lastKnownPointer = { x: event.clientX, y: event.clientY };

    if (activeTagDragState.rafId !== null) {
        return;
    }

    activeTagDragState.rafId = requestAnimationFrame(() => {
        if (!activeTagDragState) {
            return;
        }
        activeTagDragState.rafId = null;
        updateDraggingItemPosition();
        updatePlaceholderPosition();
    });
}

function handlePointerUp() {
    finishPointerDrag();
}

function updateDraggingItemPosition() {
    if (!activeTagDragState) {
        return;
    }

    const { item, offsetX, offsetY, lastKnownPointer } = activeTagDragState;
    const left = lastKnownPointer.x - offsetX;
    const top = lastKnownPointer.y - offsetY;
    item.style.left = `${left}px`;
    item.style.top = `${top}px`;
}

function updatePlaceholderPosition() {
    if (!activeTagDragState) {
        return;
    }

    const { container, placeholder, item, lastKnownPointer } = activeTagDragState;
    const siblings = Array.from(
        container.querySelectorAll(
            `${METADATA_ITEM_SELECTOR}:not(.${METADATA_ITEM_PLACEHOLDER_CLASS})`
        )
    ).filter((element) => element !== item);

    let insertAfter = null;

    for (const sibling of siblings) {
        const rect = sibling.getBoundingClientRect();

        if (lastKnownPointer.y < rect.top) {
            container.insertBefore(placeholder, sibling);
            return;
        }

        if (lastKnownPointer.y <= rect.bottom) {
            if (lastKnownPointer.x < rect.left + rect.width / 2) {
                container.insertBefore(placeholder, sibling);
                return;
            }
            insertAfter = sibling;
            continue;
        }

        insertAfter = sibling;
    }

    if (!insertAfter) {
        container.insertBefore(placeholder, container.firstElementChild);
        return;
    }

    container.insertBefore(placeholder, insertAfter.nextSibling);
}

function finishPointerDrag() {
    if (!activeTagDragState) {
        return;
    }

    const { container, item, placeholder, rafId } = activeTagDragState;

    document.removeEventListener('pointermove', handlePointerMove);
    document.removeEventListener('pointerup', handlePointerUp);
    document.removeEventListener('pointercancel', handlePointerUp);

    container.classList.remove(METADATA_ITEMS_SORTING_CLASS);
    if (document.body) {
        document.body.classList.remove(BODY_DRAGGING_CLASS);
    }

    if (rafId !== null) {
        cancelAnimationFrame(rafId);
        activeTagDragState.rafId = null;
        updateDraggingItemPosition();
        updatePlaceholderPosition();
    }

    if (placeholder && placeholder.parentNode === container) {
        container.insertBefore(item, placeholder);
        container.removeChild(placeholder);
    }

    item.classList.remove(METADATA_ITEM_DRAGGING_CLASS);
    item.style.position = '';
    item.style.width = '';
    item.style.height = '';
    item.style.left = '';
    item.style.top = '';
    item.style.pointerEvents = '';
    item.style.zIndex = '';

    activeTagDragState = null;

    updateSuggestionsDropdown();
}

/**
 * Add a new tag
 * @param {string} tag - Tag to add
@@ -395,6 +751,7 @@ function addNewTag(tag) {
    });

    tagsContainer.appendChild(newTag);
    setupTagDragAndDrop();

    // Update status of items in the suggestions dropdown
    updateSuggestionsDropdown();
@@ -406,10 +763,9 @@ function addNewTag(tag) {
function updateSuggestionsDropdown() {
    const dropdown = document.querySelector('.metadata-suggestions-dropdown');
    if (!dropdown) return;

    // Get all current tags
    const currentTags = document.querySelectorAll('.metadata-item');
    const existingTags = Array.from(currentTags).map(tag => tag.dataset.tag);
    const existingTags = getCurrentEditTags();

    // Update status of each item in dropdown
    dropdown.querySelectorAll('.metadata-suggestion-item').forEach(item => {
@@ -456,6 +812,15 @@ function updateSuggestionsDropdown() {
    });
}

function getCurrentEditTags() {
    const currentTags = document.querySelectorAll(
        `${METADATA_ITEM_SELECTOR}[data-tag]`
    );
    return Array.from(currentTags)
        .map(tag => tag.dataset.tag)
        .filter(Boolean);
}

/**
 * Restore original tags when canceling edit
 * @param {HTMLElement} section - The tags section
@@ -464,4 +829,4 @@ function updateSuggestionsDropdown() {
function restoreOriginalTags(section, originalTags) {
    // Nothing to do here as we're just hiding the edit UI
    // and showing the original compact tags which weren't modified
}
}

@@ -4,7 +4,8 @@ import { updateCardsForBulkMode } from '../components/shared/ModelCard.js';
import { modalManager } from './ModalManager.js';
import { getModelApiClient, resetAndReload } from '../api/modelApiFactory.js';
import { MODEL_TYPES, MODEL_CONFIG } from '../api/apiConfig.js';
import { PRESET_TAGS, BASE_MODEL_CATEGORIES } from '../utils/constants.js';
import { BASE_MODEL_CATEGORIES } from '../utils/constants.js';
import { getPriorityTagSuggestions } from '../utils/priorityTagHelpers.js';
import { eventManager } from '../utils/EventManager.js';
import { translate } from '../utils/i18nHelpers.js';

@@ -59,6 +60,26 @@ export class BulkManager {
                setContentRating: true
            }
        };

        window.addEventListener('lm:priority-tags-updated', () => {
            const container = document.querySelector('#bulkAddTagsModal .metadata-suggestions-container');
            if (!container) {
                return;
            }
            const currentType = state.currentPageType;
            if (!currentType || currentType === 'recipes') {
                return;
            }
            getPriorityTagSuggestions(currentType).then((tags) => {
                if (!container.isConnected) {
                    return;
                }
                this.renderBulkSuggestionItems(container, tags);
                this.updateBulkSuggestionsDropdown();
            }).catch(() => {
                // Ignore refresh failures; UI will retry on next open
            });
        });
    }

    initialize() {
@@ -565,7 +586,7 @@ export class BulkManager {
        // Create suggestions dropdown
        const tagForm = document.querySelector('#bulkAddTagsModal .metadata-add-form');
        if (tagForm) {
            const suggestionsDropdown = this.createBulkSuggestionsDropdown(PRESET_TAGS);
            const suggestionsDropdown = this.createBulkSuggestionsDropdown();
            tagForm.appendChild(suggestionsDropdown);
        }

@@ -586,10 +607,10 @@ export class BulkManager {
        }
    }

    createBulkSuggestionsDropdown(presetTags) {
    createBulkSuggestionsDropdown() {
|
||||
const dropdown = document.createElement('div');
|
||||
dropdown.className = 'metadata-suggestions-dropdown';
|
||||
|
||||
|
||||
const header = document.createElement('div');
|
||||
header.className = 'metadata-suggestions-header';
|
||||
header.innerHTML = `
|
||||
@@ -597,15 +618,39 @@ export class BulkManager {
|
||||
<small>Click to add</small>
|
||||
`;
|
||||
dropdown.appendChild(header);
|
||||
|
||||
|
||||
const container = document.createElement('div');
|
||||
container.className = 'metadata-suggestions-container';
|
||||
|
||||
presetTags.forEach(tag => {
|
||||
// Check if tag is already added
|
||||
container.innerHTML = `<div class="metadata-suggestions-loading">${translate('settings.priorityTags.loadingSuggestions', 'Loading suggestions…')}</div>`;
|
||||
|
||||
const currentType = state.currentPageType;
|
||||
if (!currentType || currentType === 'recipes') {
|
||||
container.innerHTML = '';
|
||||
} else {
|
||||
getPriorityTagSuggestions(currentType).then((tags) => {
|
||||
if (!container.isConnected) {
|
||||
return;
|
||||
}
|
||||
this.renderBulkSuggestionItems(container, tags);
|
||||
this.updateBulkSuggestionsDropdown();
|
||||
}).catch(() => {
|
||||
if (container.isConnected) {
|
||||
container.innerHTML = '';
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
dropdown.appendChild(container);
|
||||
return dropdown;
|
||||
}
|
||||
|
||||
renderBulkSuggestionItems(container, tags) {
|
||||
container.innerHTML = '';
|
||||
|
||||
tags.forEach(tag => {
|
||||
const existingTags = this.getBulkExistingTags();
|
||||
const isAdded = existingTags.includes(tag);
|
||||
|
||||
|
||||
const item = document.createElement('div');
|
||||
item.className = `metadata-suggestion-item ${isAdded ? 'already-added' : ''}`;
|
||||
item.title = tag;
|
||||
@@ -613,7 +658,7 @@ export class BulkManager {
|
||||
<span class="metadata-suggestion-text">${tag}</span>
|
||||
${isAdded ? '<span class="added-indicator"><i class="fas fa-check"></i></span>' : ''}
|
||||
`;
|
||||
|
||||
|
||||
if (!isAdded) {
|
||||
item.addEventListener('click', () => {
|
||||
this.addBulkTag(tag);
|
||||
@@ -622,16 +667,12 @@ export class BulkManager {
|
||||
input.value = tag;
|
||||
input.focus();
|
||||
}
|
||||
// Update dropdown to show added indicator
|
||||
this.updateBulkSuggestionsDropdown();
|
||||
});
|
||||
}
|
||||
|
||||
|
||||
container.appendChild(item);
|
||||
});
|
||||
|
||||
dropdown.appendChild(container);
|
||||
return dropdown;
|
||||
}
|
||||
|
||||
addBulkTag(tag) {
|
||||
@@ -1105,10 +1146,7 @@ export class BulkManager {
|
||||
// Call the auto-organize method with selected file paths
|
||||
await apiClient.autoOrganizeModels(filePaths);
|
||||
|
||||
setTimeout(() => {
|
||||
resetAndReload(true);
|
||||
}, 1000);
|
||||
|
||||
resetAndReload(true);
|
||||
} catch (error) {
|
||||
console.error('Error during bulk auto-organize:', error);
|
||||
showToast('toast.loras.autoOrganizeFailed', { error: error.message }, 'error');
|
||||
|
||||
@@ -14,6 +14,7 @@ export class DownloadManager {
|
||||
this.modelInfo = null;
|
||||
this.modelVersionId = null;
|
||||
this.modelId = null;
|
||||
this.source = null;
|
||||
|
||||
this.initialized = false;
|
||||
this.selectedFolder = '';
|
||||
@@ -126,6 +127,7 @@ export class DownloadManager {
|
||||
this.modelInfo = null;
|
||||
this.modelId = null;
|
||||
this.modelVersionId = null;
|
||||
this.source = null;
|
||||
|
||||
this.selectedFolder = '';
|
||||
|
||||
@@ -150,7 +152,7 @@ export class DownloadManager {
|
||||
throw new Error(translate('modals.download.errors.invalidUrl'));
|
||||
}
|
||||
|
||||
this.versions = await this.apiClient.fetchCivitaiVersions(this.modelId);
|
||||
this.versions = await this.apiClient.fetchCivitaiVersions(this.modelId, this.source);
|
||||
|
||||
if (!this.versions.length) {
|
||||
throw new Error(translate('modals.download.errors.noVersions'));
|
||||
@@ -170,13 +172,22 @@ export class DownloadManager {
|
||||
}
|
||||
|
||||
extractModelId(url) {
|
||||
const modelMatch = url.match(/civitai\.com\/models\/(\d+)/);
|
||||
const versionMatch = url.match(/modelVersionId=(\d+)/);
|
||||
|
||||
if (modelMatch) {
|
||||
this.modelVersionId = versionMatch ? versionMatch[1] : null;
|
||||
return modelMatch[1];
|
||||
const versionMatch = url.match(/modelVersionId=(\d+)/i);
|
||||
this.modelVersionId = versionMatch ? versionMatch[1] : null;
|
||||
|
||||
const civarchiveMatch = url.match(/https?:\/\/(?:www\.)?(?:civitaiarchive|civarchive)\.com\/models\/(\d+)/i);
|
||||
if (civarchiveMatch) {
|
||||
this.source = 'civarchive';
|
||||
return civarchiveMatch[1];
|
||||
}
|
||||
|
||||
const civitaiMatch = url.match(/https?:\/\/(?:www\.)?civitai\.com\/models\/(\d+)/i);
|
||||
if (civitaiMatch) {
|
||||
this.source = null;
|
||||
return civitaiMatch[1];
|
||||
}
|
||||
|
||||
this.source = null;
|
||||
return null;
|
||||
}
|
||||
|
||||
@@ -453,7 +464,13 @@ export class DownloadManager {
|
||||
}
|
||||
|
||||
if (data.status === 'progress' && data.download_id === downloadId) {
|
||||
updateProgress(data.progress, 0, this.currentVersion.name);
|
||||
const metrics = {
|
||||
bytesDownloaded: data.bytes_downloaded,
|
||||
totalBytes: data.total_bytes,
|
||||
bytesPerSecond: data.bytes_per_second
|
||||
};
|
||||
|
||||
updateProgress(data.progress, 0, this.currentVersion.name, metrics);
|
||||
|
||||
if (data.progress < 3) {
|
||||
this.loadingManager.setStatus(translate('modals.download.status.preparing'));
|
||||
@@ -478,7 +495,8 @@ export class DownloadManager {
|
||||
modelRoot,
|
||||
targetFolder,
|
||||
useDefaultPaths,
|
||||
downloadId
|
||||
downloadId,
|
||||
this.source
|
||||
);
|
||||
|
||||
showToast('toast.loras.downloadCompleted', {}, 'success');
|
||||
|
||||
@@ -13,8 +13,10 @@ export class ExampleImagesManager {
|
||||
this.progressPanel = null;
|
||||
this.isProgressPanelCollapsed = false;
|
||||
this.pauseButton = null; // Store reference to the pause button
|
||||
this.stopButton = null;
|
||||
this.isMigrating = false; // Track migration state separately from downloading
|
||||
this.hasShownCompletionToast = false; // Flag to track if completion toast has been shown
|
||||
this.isStopping = false;
|
||||
|
||||
// Auto download properties
|
||||
this.autoDownloadInterval = null;
|
||||
@@ -52,11 +54,16 @@ export class ExampleImagesManager {
|
||||
|
||||
// Initialize progress panel button handlers
|
||||
this.pauseButton = document.getElementById('pauseExampleDownloadBtn');
|
||||
this.stopButton = document.getElementById('stopExampleDownloadBtn');
|
||||
const collapseBtn = document.getElementById('collapseProgressBtn');
|
||||
|
||||
|
||||
if (this.pauseButton) {
|
||||
this.pauseButton.onclick = () => this.pauseDownload();
|
||||
}
|
||||
|
||||
if (this.stopButton) {
|
||||
this.stopButton.onclick = () => this.stopDownload();
|
||||
}
|
||||
|
||||
if (collapseBtn) {
|
||||
collapseBtn.onclick = () => this.toggleProgressPanel();
|
||||
@@ -210,10 +217,14 @@ export class ExampleImagesManager {
|
||||
updateDownloadButtonText() {
|
||||
const btnTextElement = document.getElementById('exampleDownloadBtnText');
|
||||
if (btnTextElement) {
|
||||
if (this.isDownloading && this.isPaused) {
|
||||
if (this.isStopping) {
|
||||
btnTextElement.textContent = "Stopping...";
|
||||
} else if (this.isDownloading && this.isPaused) {
|
||||
btnTextElement.textContent = "Resume";
|
||||
} else if (!this.isDownloading) {
|
||||
btnTextElement.textContent = "Download";
|
||||
} else {
|
||||
btnTextElement.textContent = "Download";
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -239,18 +250,22 @@ export class ExampleImagesManager {
|
||||
});
|
||||
|
||||
const data = await response.json();
|
||||
|
||||
|
||||
if (data.success) {
|
||||
this.isDownloading = true;
|
||||
this.isPaused = false;
|
||||
this.isStopping = false;
|
||||
this.hasShownCompletionToast = false; // Reset toast flag when starting new download
|
||||
this.startTime = new Date();
|
||||
this.updateUI(data.status);
|
||||
this.showProgressPanel();
|
||||
this.startProgressUpdates();
|
||||
this.updateDownloadButtonText();
|
||||
if (this.stopButton) {
|
||||
this.stopButton.disabled = false;
|
||||
}
|
||||
showToast('toast.exampleImages.downloadStarted', {}, 'success');
|
||||
|
||||
|
||||
// Close settings modal
|
||||
modalManager.closeModal('settingsModal');
|
||||
} else {
|
||||
@@ -263,7 +278,7 @@ export class ExampleImagesManager {
|
||||
}
|
||||
|
||||
async pauseDownload() {
|
||||
if (!this.isDownloading || this.isPaused) {
|
||||
if (!this.isDownloading || this.isPaused || this.isStopping) {
|
||||
return;
|
||||
}
|
||||
|
||||
@@ -299,21 +314,21 @@ export class ExampleImagesManager {
|
||||
}
|
||||
|
||||
async resumeDownload() {
|
||||
if (!this.isDownloading || !this.isPaused) {
|
||||
if (!this.isDownloading || !this.isPaused || this.isStopping) {
|
||||
return;
|
||||
}
|
||||
|
||||
|
||||
try {
|
||||
const response = await fetch('/api/lm/resume-example-images', {
|
||||
method: 'POST'
|
||||
});
|
||||
|
||||
|
||||
const data = await response.json();
|
||||
|
||||
|
||||
if (data.success) {
|
||||
this.isPaused = false;
|
||||
document.getElementById('downloadStatusText').textContent = 'Downloading';
|
||||
|
||||
|
||||
// Only update the icon element, not the entire innerHTML
|
||||
if (this.pauseButton) {
|
||||
const iconElement = this.pauseButton.querySelector('i');
|
||||
@@ -322,7 +337,7 @@ export class ExampleImagesManager {
|
||||
}
|
||||
this.pauseButton.onclick = () => this.pauseDownload();
|
||||
}
|
||||
|
||||
|
||||
this.updateDownloadButtonText();
|
||||
showToast('toast.exampleImages.downloadResumed', {}, 'success');
|
||||
} else {
|
||||
@@ -333,6 +348,60 @@ export class ExampleImagesManager {
|
||||
showToast('toast.exampleImages.resumeFailed', {}, 'error');
|
||||
}
|
||||
}
|
||||
|
||||
async stopDownload() {
|
||||
if (this.isStopping) {
|
||||
return;
|
||||
}
|
||||
|
||||
if (!this.isDownloading) {
|
||||
this.hideProgressPanel();
|
||||
return;
|
||||
}
|
||||
|
||||
this.isStopping = true;
|
||||
this.isPaused = false;
|
||||
this.updateDownloadButtonText();
|
||||
|
||||
if (this.stopButton) {
|
||||
this.stopButton.disabled = true;
|
||||
}
|
||||
|
||||
try {
|
||||
const response = await fetch('/api/lm/stop-example-images', {
|
||||
method: 'POST'
|
||||
});
|
||||
|
||||
let data;
|
||||
try {
|
||||
data = await response.json();
|
||||
} catch (parseError) {
|
||||
data = { success: false, error: 'Invalid server response' };
|
||||
}
|
||||
|
||||
if (response.ok && data.success) {
|
||||
showToast('toast.exampleImages.downloadStopped', {}, 'info');
|
||||
this.hideProgressPanel();
|
||||
} else {
|
||||
this.isStopping = false;
|
||||
if (this.stopButton) {
|
||||
this.stopButton.disabled = false;
|
||||
}
|
||||
const errorMessage = data && data.error ? data.error : 'Unknown error';
|
||||
showToast('toast.exampleImages.stopFailed', { error: errorMessage }, 'error');
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('Failed to stop download:', error);
|
||||
this.isStopping = false;
|
||||
if (this.stopButton) {
|
||||
this.stopButton.disabled = false;
|
||||
}
|
||||
const errorMessage = error && error.message ? error.message : 'Unknown error';
|
||||
showToast('toast.exampleImages.stopFailed', { error: errorMessage }, 'error');
|
||||
} finally {
|
||||
this.updateDownloadButtonText();
|
||||
}
|
||||
}
|
||||
|
||||
startProgressUpdates() {
|
||||
// Clear any existing interval
|
||||
@@ -352,21 +421,36 @@ export class ExampleImagesManager {
|
||||
const data = await response.json();
|
||||
|
||||
if (data.success) {
|
||||
const currentStatus = data.status.status;
|
||||
this.isDownloading = data.is_downloading;
|
||||
this.isPaused = data.status.status === 'paused';
|
||||
this.isPaused = currentStatus === 'paused';
|
||||
this.isMigrating = data.is_migrating || false;
|
||||
|
||||
|
||||
if (currentStatus === 'stopping') {
|
||||
this.isStopping = true;
|
||||
} else if (
|
||||
!data.is_downloading ||
|
||||
currentStatus === 'stopped' ||
|
||||
currentStatus === 'completed' ||
|
||||
currentStatus === 'error'
|
||||
) {
|
||||
this.isStopping = false;
|
||||
}
|
||||
|
||||
// Update download button text
|
||||
this.updateDownloadButtonText();
|
||||
|
||||
|
||||
if (this.isDownloading) {
|
||||
this.updateUI(data.status);
|
||||
} else {
|
||||
// Download completed or failed
|
||||
clearInterval(this.progressUpdateInterval);
|
||||
this.progressUpdateInterval = null;
|
||||
|
||||
if (data.status.status === 'completed' && !this.hasShownCompletionToast) {
|
||||
if (this.stopButton) {
|
||||
this.stopButton.disabled = true;
|
||||
}
|
||||
|
||||
if (currentStatus === 'completed' && !this.hasShownCompletionToast) {
|
||||
const actionType = this.isMigrating ? 'migration' : 'download';
|
||||
showToast('toast.downloads.imagesCompleted', { action: actionType }, 'success');
|
||||
// Mark as shown to prevent duplicate toasts
|
||||
@@ -375,10 +459,13 @@ export class ExampleImagesManager {
|
||||
this.isMigrating = false;
|
||||
// Hide the panel after a delay
|
||||
setTimeout(() => this.hideProgressPanel(), 5000);
|
||||
} else if (data.status.status === 'error') {
|
||||
} else if (currentStatus === 'error') {
|
||||
const actionType = this.isMigrating ? 'migration' : 'download';
|
||||
showToast('toast.downloads.imagesFailed', { action: actionType }, 'error');
|
||||
this.isMigrating = false;
|
||||
} else if (currentStatus === 'stopped') {
|
||||
this.hideProgressPanel();
|
||||
this.isMigrating = false;
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -434,7 +521,11 @@ export class ExampleImagesManager {
|
||||
if (!this.pauseButton) {
|
||||
this.pauseButton = document.getElementById('pauseExampleDownloadBtn');
|
||||
}
|
||||
|
||||
|
||||
if (!this.stopButton) {
|
||||
this.stopButton = document.getElementById('stopExampleDownloadBtn');
|
||||
}
|
||||
|
||||
if (this.pauseButton) {
|
||||
// Check if the button already has the SVG elements
|
||||
let hasProgressElements = !!this.pauseButton.querySelector('.mini-progress-circle');
|
||||
@@ -456,12 +547,14 @@ export class ExampleImagesManager {
|
||||
iconElement.className = status.status === 'paused' ? 'fas fa-play' : 'fas fa-pause';
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
// Update click handler
|
||||
this.pauseButton.onclick = status.status === 'paused'
|
||||
? () => this.resumeDownload()
|
||||
this.pauseButton.onclick = status.status === 'paused'
|
||||
? () => this.resumeDownload()
|
||||
: () => this.pauseDownload();
|
||||
|
||||
|
||||
this.pauseButton.disabled = ['completed', 'error', 'stopped'].includes(status.status) || status.status === 'stopping';
|
||||
|
||||
// Update progress immediately
|
||||
const progressBar = document.getElementById('downloadProgressBar');
|
||||
if (progressBar) {
|
||||
@@ -469,6 +562,15 @@ export class ExampleImagesManager {
|
||||
this.updateMiniProgress(progressPercent);
|
||||
}
|
||||
}
|
||||
|
||||
if (this.stopButton) {
|
||||
if (status.status === 'stopping' || this.isStopping) {
|
||||
this.stopButton.disabled = true;
|
||||
} else {
|
||||
const canStop = ['running', 'paused'].includes(status.status);
|
||||
this.stopButton.disabled = !canStop;
|
||||
}
|
||||
}
|
||||
|
||||
// Update title text
|
||||
const titleElement = document.querySelector('.progress-panel-title');
|
||||
@@ -584,6 +686,8 @@ export class ExampleImagesManager {
|
||||
case 'paused': return 'Paused';
|
||||
case 'completed': return 'Completed';
|
||||
case 'error': return 'Error';
|
||||
case 'stopping': return 'Stopping';
|
||||
case 'stopped': return 'Stopped';
|
||||
default: return 'Initializing';
|
||||
}
|
||||
}
|
||||
|
||||
@@ -6,7 +6,7 @@ import { getStorageItem, setStorageItem } from '../utils/storageHelpers.js';
|
||||
export class HelpManager {
|
||||
constructor() {
|
||||
this.lastViewedTimestamp = getStorageItem('help_last_viewed', 0);
|
||||
this.latestContentTimestamp = new Date('2025-07-09').getTime(); // Will be updated from server or config
|
||||
this.latestContentTimestamp = new Date('2025-10-11').getTime(); // Will be updated from server or config
|
||||
this.isInitialized = false;
|
||||
|
||||
// Default latest content data - could be fetched from server
|
||||
|
||||
@@ -1,3 +1,6 @@
|
||||
import { translate } from '../utils/i18nHelpers.js';
|
||||
import { formatFileSize } from '../utils/formatters.js';
|
||||
|
||||
// Loading management
|
||||
export class LoadingManager {
|
||||
constructor() {
|
||||
@@ -65,7 +68,7 @@ export class LoadingManager {
|
||||
|
||||
// Show enhanced progress for downloads
|
||||
showDownloadProgress(totalItems = 1) {
|
||||
this.show('Preparing download...', 0);
|
||||
this.show(translate('modals.download.status.preparing', {}, 'Preparing download...'), 0);
|
||||
|
||||
// Create details container
|
||||
const detailsContainer = this.createDetailsContainer();
|
||||
@@ -76,7 +79,7 @@ export class LoadingManager {
|
||||
|
||||
const currentItemLabel = document.createElement('div');
|
||||
currentItemLabel.className = 'current-item-label';
|
||||
currentItemLabel.textContent = 'Current file:';
|
||||
currentItemLabel.textContent = translate('modals.download.progress.currentFile', {}, 'Current file:');
|
||||
|
||||
const currentItemBar = document.createElement('div');
|
||||
currentItemBar.className = 'current-item-bar-container';
|
||||
@@ -105,16 +108,96 @@ export class LoadingManager {
|
||||
|
||||
// Add current item progress to container
|
||||
detailsContainer.appendChild(currentItemContainer);
|
||||
|
||||
// Create transfer stats container
|
||||
const transferStats = document.createElement('div');
|
||||
transferStats.className = 'download-transfer-stats';
|
||||
|
||||
const bytesDetail = document.createElement('div');
|
||||
bytesDetail.className = 'download-transfer-bytes';
|
||||
bytesDetail.textContent = translate(
|
||||
'modals.download.progress.transferredUnknown',
|
||||
{},
|
||||
'Transferred: --'
|
||||
);
|
||||
|
||||
const speedDetail = document.createElement('div');
|
||||
speedDetail.className = 'download-transfer-speed';
|
||||
speedDetail.textContent = translate(
|
||||
'modals.download.progress.speed',
|
||||
{ speed: '--' },
|
||||
'Speed: --'
|
||||
);
|
||||
|
||||
transferStats.appendChild(bytesDetail);
|
||||
transferStats.appendChild(speedDetail);
|
||||
detailsContainer.appendChild(transferStats);
|
||||
|
||||
const formatMetricSize = (value) => {
|
||||
if (value === undefined || value === null || isNaN(value)) {
|
||||
return '--';
|
||||
}
|
||||
if (value < 1) {
|
||||
return '0 B';
|
||||
}
|
||||
return formatFileSize(value);
|
||||
};
|
||||
|
||||
const updateTransferStats = (metrics = {}) => {
|
||||
const { bytesDownloaded, totalBytes, bytesPerSecond } = metrics;
|
||||
|
||||
if (bytesDetail) {
|
||||
const formattedDownloaded = formatMetricSize(bytesDownloaded);
|
||||
const formattedTotal = formatMetricSize(totalBytes);
|
||||
|
||||
if (formattedDownloaded === '--' && formattedTotal === '--') {
|
||||
bytesDetail.textContent = translate(
|
||||
'modals.download.progress.transferredUnknown',
|
||||
{},
|
||||
'Transferred: --'
|
||||
);
|
||||
} else if (formattedTotal === '--') {
|
||||
bytesDetail.textContent = translate(
|
||||
'modals.download.progress.transferredSimple',
|
||||
{ downloaded: formattedDownloaded },
|
||||
`Transferred: ${formattedDownloaded}`
|
||||
);
|
||||
} else {
|
||||
bytesDetail.textContent = translate(
|
||||
'modals.download.progress.transferred',
|
||||
{ downloaded: formattedDownloaded, total: formattedTotal },
|
||||
`Transferred: ${formattedDownloaded} / ${formattedTotal}`
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
if (speedDetail) {
|
||||
const formattedSpeed = formatMetricSize(bytesPerSecond);
|
||||
const displaySpeed = formattedSpeed === '--' ? '--' : `${formattedSpeed}/s`;
|
||||
speedDetail.textContent = translate(
|
||||
'modals.download.progress.speed',
|
||||
{ speed: displaySpeed },
|
||||
`Speed: ${displaySpeed}`
|
||||
);
|
||||
}
|
||||
};
|
||||
|
||||
// Initialize transfer stats with empty data
|
||||
updateTransferStats();
|
||||
|
||||
// Return update function
|
||||
return (currentProgress, currentIndex = 0, currentName = '') => {
|
||||
return (currentProgress, currentIndex = 0, currentName = '', metrics = {}) => {
|
||||
// Update current item progress
|
||||
currentItemProgress.style.width = `${currentProgress}%`;
|
||||
currentItemPercent.textContent = `${Math.floor(currentProgress)}%`;
|
||||
|
||||
// Update current item label if name provided
|
||||
if (currentName) {
|
||||
currentItemLabel.textContent = `Downloading: ${currentName}`;
|
||||
currentItemLabel.textContent = translate(
|
||||
'modals.download.progress.downloading',
|
||||
{ name: currentName },
|
||||
`Downloading: ${currentName}`
|
||||
);
|
||||
}
|
||||
|
||||
// Update overall label if multiple items
|
||||
@@ -128,6 +211,8 @@ export class LoadingManager {
|
||||
// Single item, just update main progress
|
||||
this.setProgress(currentProgress);
|
||||
}
|
||||
|
||||
updateTransferStats(metrics);
|
||||
};
|
||||
}
|
||||
|
||||
@@ -176,4 +261,4 @@ export class LoadingManager {
|
||||
restoreProgressBar() {
|
||||
this.progressBar.style.display = 'block';
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -2,10 +2,11 @@ import { modalManager } from './ModalManager.js';
|
||||
import { showToast } from '../utils/uiHelpers.js';
|
||||
import { state, createDefaultSettings } from '../state/index.js';
|
||||
import { resetAndReload } from '../api/modelApiFactory.js';
|
||||
import { DOWNLOAD_PATH_TEMPLATES, MAPPABLE_BASE_MODELS, PATH_TEMPLATE_PLACEHOLDERS, DEFAULT_PATH_TEMPLATES } from '../utils/constants.js';
|
||||
import { DOWNLOAD_PATH_TEMPLATES, MAPPABLE_BASE_MODELS, PATH_TEMPLATE_PLACEHOLDERS, DEFAULT_PATH_TEMPLATES, DEFAULT_PRIORITY_TAG_CONFIG } from '../utils/constants.js';
|
||||
import { translate } from '../utils/i18nHelpers.js';
|
||||
import { i18n } from '../i18n/index.js';
|
||||
import { configureModelCardVideo } from '../components/shared/ModelCard.js';
|
||||
import { validatePriorityTagString, getPriorityTagSuggestionsMap, invalidatePriorityTagSuggestionsCache } from '../utils/priorityTagHelpers.js';
|
||||
|
||||
export class SettingsManager {
|
||||
constructor() {
|
||||
@@ -111,6 +112,17 @@ export class SettingsManager {
|
||||
|
||||
merged.download_path_templates = { ...DEFAULT_PATH_TEMPLATES, ...templates };
|
||||
|
||||
const priorityTags = backendSettings?.priority_tags;
|
||||
const normalizedPriority = { ...DEFAULT_PRIORITY_TAG_CONFIG };
|
||||
if (priorityTags && typeof priorityTags === 'object' && !Array.isArray(priorityTags)) {
|
||||
Object.entries(priorityTags).forEach(([modelType, configValue]) => {
|
||||
if (typeof configValue === 'string') {
|
||||
normalizedPriority[modelType] = configValue.trim();
|
||||
}
|
||||
});
|
||||
}
|
||||
merged.priority_tags = normalizedPriority;
|
||||
|
||||
Object.keys(merged).forEach(key => this.backendSettingKeys.add(key));
|
||||
|
||||
return merged;
|
||||
@@ -185,7 +197,7 @@ export class SettingsManager {
|
||||
button.addEventListener('click', () => this.toggleInputVisibility(button));
|
||||
});
|
||||
|
||||
const openSettingsLocationButton = document.querySelector('.settings-open-location-button');
|
||||
const openSettingsLocationButton = document.querySelector('.settings-open-location-trigger');
|
||||
if (openSettingsLocationButton) {
|
||||
openSettingsLocationButton.addEventListener('click', () => {
|
||||
const filePath = openSettingsLocationButton.dataset.settingsPath;
|
||||
@@ -201,14 +213,14 @@ export class SettingsManager {
|
||||
settingsManager.validateTemplate(modelType, template);
|
||||
settingsManager.updateTemplatePreview(modelType, template);
|
||||
});
|
||||
|
||||
|
||||
customInput.addEventListener('blur', (e) => {
|
||||
const template = e.target.value;
|
||||
if (settingsManager.validateTemplate(modelType, template)) {
|
||||
settingsManager.updateTemplate(modelType, template);
|
||||
}
|
||||
});
|
||||
|
||||
|
||||
customInput.addEventListener('keydown', (e) => {
|
||||
if (e.key === 'Enter') {
|
||||
e.target.blur();
|
||||
@@ -216,7 +228,9 @@ export class SettingsManager {
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
|
||||
this.setupPriorityTagInputs();
|
||||
|
||||
this.initialized = true;
|
||||
}
|
||||
|
||||
@@ -276,6 +290,12 @@ export class SettingsManager {
|
||||
cardInfoDisplaySelect.value = state.global.settings.card_info_display || 'always';
|
||||
}
|
||||
|
||||
// Set model name display setting
|
||||
const modelNameDisplaySelect = document.getElementById('modelNameDisplay');
|
||||
if (modelNameDisplaySelect) {
|
||||
modelNameDisplaySelect.value = state.global.settings.model_name_display || 'model_name';
|
||||
}
|
||||
|
||||
// Set optimize example images setting
|
||||
const optimizeExampleImagesCheckbox = document.getElementById('optimizeExampleImages');
|
||||
if (optimizeExampleImagesCheckbox) {
|
||||
@@ -291,6 +311,9 @@ export class SettingsManager {
|
||||
// Load download path templates
|
||||
this.loadDownloadPathTemplates();
|
||||
|
||||
// Load priority tag settings
|
||||
this.loadPriorityTagSettings();
|
||||
|
||||
// Set include trigger words setting
|
||||
const includeTriggerWordsCheckbox = document.getElementById('includeTriggerWords');
|
||||
if (includeTriggerWordsCheckbox) {
|
||||
@@ -325,6 +348,145 @@ export class SettingsManager {
|
||||
this.loadProxySettings();
|
||||
}
|
||||
|
||||
setupPriorityTagInputs() {
|
||||
['lora', 'checkpoint', 'embedding'].forEach((modelType) => {
|
||||
const textarea = document.getElementById(`${modelType}PriorityTagsInput`);
|
||||
if (!textarea) {
|
||||
return;
|
||||
}
|
||||
|
||||
textarea.addEventListener('input', () => this.handlePriorityTagInput(modelType));
|
||||
textarea.addEventListener('blur', () => this.handlePriorityTagSave(modelType));
|
||||
textarea.addEventListener('keydown', (event) => this.handlePriorityTagKeyDown(event, modelType));
|
||||
});
|
||||
}
|
||||
|
||||
loadPriorityTagSettings() {
|
||||
const priorityConfig = state.global.settings.priority_tags || {};
|
||||
['lora', 'checkpoint', 'embedding'].forEach((modelType) => {
|
||||
const textarea = document.getElementById(`${modelType}PriorityTagsInput`);
|
||||
if (!textarea) {
|
||||
return;
|
||||
}
|
||||
|
||||
const storedValue = priorityConfig[modelType] ?? DEFAULT_PRIORITY_TAG_CONFIG[modelType] ?? '';
|
||||
textarea.value = storedValue;
|
||||
this.displayPriorityTagValidation(modelType, true, []);
|
||||
});
|
||||
}
|
||||
|
||||
handlePriorityTagInput(modelType) {
|
||||
const textarea = document.getElementById(`${modelType}PriorityTagsInput`);
|
||||
if (!textarea) {
|
||||
return;
|
||||
}
|
||||
|
||||
const validation = validatePriorityTagString(textarea.value);
|
||||
this.displayPriorityTagValidation(modelType, validation.valid, validation.errors);
|
||||
}
|
||||
|
||||
handlePriorityTagKeyDown(event, modelType) {
|
||||
if (event.key !== 'Enter') {
|
||||
return;
|
||||
}
|
||||
|
||||
if (event.shiftKey) {
|
||||
return;
|
||||
}
|
||||
|
||||
event.preventDefault();
|
||||
this.handlePriorityTagSave(modelType);
|
||||
}
|
||||
|
||||
async handlePriorityTagSave(modelType) {
|
||||
const textarea = document.getElementById(`${modelType}PriorityTagsInput`);
|
||||
if (!textarea) {
|
||||
return;
|
||||
}
|
||||
|
||||
const validation = validatePriorityTagString(textarea.value);
|
||||
if (!validation.valid) {
|
||||
this.displayPriorityTagValidation(modelType, false, validation.errors);
|
||||
return;
|
||||
}
|
||||
|
||||
const sanitized = validation.formatted;
|
||||
const currentValue = state.global.settings.priority_tags?.[modelType] || '';
|
||||
this.displayPriorityTagValidation(modelType, true, []);
|
||||
|
||||
if (sanitized === currentValue) {
|
||||
textarea.value = sanitized;
|
||||
return;
|
||||
}
|
||||
|
||||
const updatedConfig = {
|
||||
...state.global.settings.priority_tags,
|
||||
            [modelType]: sanitized,
        };

        try {
            textarea.value = sanitized;
            await this.saveSetting('priority_tags', updatedConfig);
            showToast('settings.priorityTags.saveSuccess', {}, 'success');
            await this.refreshPriorityTagSuggestions();
        } catch (error) {
            console.error('Failed to save priority tag configuration:', error);
            showToast('settings.priorityTags.saveError', {}, 'error');
        }
    }

    displayPriorityTagValidation(modelType, isValid, errors = []) {
        const textarea = document.getElementById(`${modelType}PriorityTagsInput`);
        const errorElement = document.getElementById(`${modelType}PriorityTagsError`);
        if (!textarea) {
            return;
        }

        if (isValid || errors.length === 0) {
            textarea.classList.remove('settings-input-error');
            if (errorElement) {
                errorElement.textContent = '';
                errorElement.style.display = 'none';
            }
            return;
        }

        textarea.classList.add('settings-input-error');
        if (errorElement) {
            const message = this.getPriorityTagErrorMessage(errors[0]);
            errorElement.textContent = message;
            errorElement.style.display = 'block';
        }
    }

    getPriorityTagErrorMessage(error) {
        if (!error) {
            return '';
        }

        const entryIndex = error.index ?? 0;
        switch (error.type) {
            case 'missingClosingParen':
                return translate('settings.priorityTags.validation.missingClosingParen', { index: entryIndex }, `Entry ${entryIndex} is missing a closing parenthesis.`);
            case 'missingCanonical':
                return translate('settings.priorityTags.validation.missingCanonical', { index: entryIndex }, `Entry ${entryIndex} must include a canonical tag.`);
            case 'duplicateCanonical':
                return translate('settings.priorityTags.validation.duplicateCanonical', { tag: error.canonical }, `The canonical tag "${error.canonical}" is duplicated.`);
            default:
                return translate('settings.priorityTags.validation.unknown', {}, 'Invalid priority tag configuration.');
        }
    }

    async refreshPriorityTagSuggestions() {
        invalidatePriorityTagSuggestionsCache();
        try {
            await getPriorityTagSuggestionsMap();
            window.dispatchEvent(new CustomEvent('lm:priority-tags-updated'));
        } catch (error) {
            console.warn('Failed to refresh priority tag suggestions:', error);
        }
    }

    loadProxySettings() {
        // Load proxy enabled setting
        const proxyEnabledCheckbox = document.getElementById('proxyEnabled');
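The `getPriorityTagErrorMessage` switch above consumes error objects carrying a `type` plus an optional `index` or `canonical` field. The validator that produces them is not shown in this diff; the following is a hypothetical sketch of what such a validator could look like, assuming a `canonical(alias)` entry syntax inferred from the error names:

```javascript
// Hypothetical validator sketch (the repository's real validator is not part
// of this diff). It emits error objects shaped like the ones
// getPriorityTagErrorMessage() consumes: { type, index, canonical }.
function validatePriorityTags(raw) {
    const errors = [];
    const seen = new Set();
    raw.split(',').forEach((part, index) => {
        const entry = part.trim();
        if (!entry) return;
        // Assumed syntax: "canonical(alias)" — strip the parenthesized part.
        const canonical = entry.replace(/\(.*\)?$/, '').trim();
        if (entry.includes('(') && !entry.includes(')')) {
            errors.push({ type: 'missingClosingParen', index });
        }
        if (!canonical) {
            errors.push({ type: 'missingCanonical', index });
        } else if (seen.has(canonical)) {
            errors.push({ type: 'duplicateCanonical', canonical });
        } else {
            seen.add(canonical);
        }
    });
    return { isValid: errors.length === 0, errors };
}
```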
@@ -1055,6 +1217,10 @@ export class SettingsManager {
            }

            showToast('toast.settings.settingsUpdated', { setting: settingKey.replace(/_/g, ' ') }, 'success');

+           if (settingKey === 'model_name_display') {
+               this.reloadContent();
+           }
        } catch (error) {
            showToast('toast.settings.settingSaveFailed', { message: error.message }, 'error');
        }
@@ -164,7 +164,13 @@ export class DownloadManager {
        const loraName = currentLora ? currentLora.name : '';

        // Update progress display
-       updateProgress(currentLoraProgress, completedDownloads, loraName);
+       const metrics = {
+           bytesDownloaded: data.bytes_downloaded,
+           totalBytes: data.total_bytes,
+           bytesPerSecond: data.bytes_per_second
+       };
+
+       updateProgress(currentLoraProgress, completedDownloads, loraName, metrics);

        // Add more detailed status messages based on progress
        if (currentLoraProgress < 3) {
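The hunk above threads raw byte counts (`bytes_downloaded`, `total_bytes`, `bytes_per_second`) through to `updateProgress` as a `metrics` object. A progress UI would typically render those as human-readable sizes; `formatBytes` and `formatMetrics` below are illustrative helpers, not code from this repository:

```javascript
// Illustrative helpers (assumed, not repo code): turn the raw byte metrics
// passed to updateProgress() into a label like "1.0 MB / 10.0 MB @ 512.0 KB/s".
function formatBytes(bytes) {
    const units = ['B', 'KB', 'MB', 'GB'];
    let value = bytes;
    let unit = 0;
    while (value >= 1024 && unit < units.length - 1) {
        value /= 1024;
        unit += 1;
    }
    return `${value.toFixed(1)} ${units[unit]}`;
}

function formatMetrics({ bytesDownloaded, totalBytes, bytesPerSecond }) {
    return `${formatBytes(bytesDownloaded)} / ${formatBytes(totalBytes)} @ ${formatBytes(bytesPerSecond)}/s`;
}
```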
@@ -1,7 +1,7 @@
// Create the new hierarchical state structure
import { getStorageItem, getMapFromStorage } from '../utils/storageHelpers.js';
import { MODEL_TYPES } from '../api/apiConfig.js';
-import { DEFAULT_PATH_TEMPLATES } from '../utils/constants.js';
+import { DEFAULT_PATH_TEMPLATES, DEFAULT_PRIORITY_TAG_CONFIG } from '../utils/constants.js';

const DEFAULT_SETTINGS_BASE = Object.freeze({
    civitai_api_key: '',
@@ -26,8 +26,10 @@ const DEFAULT_SETTINGS_BASE = Object.freeze({
    autoplay_on_hover: false,
    display_density: 'default',
    card_info_display: 'always',
    model_name_display: 'model_name',
    include_trigger_words: false,
    compact_mode: false,
+   priority_tags: { ...DEFAULT_PRIORITY_TAG_CONFIG },
});

export function createDefaultSettings() {
@@ -35,6 +37,7 @@ export function createDefaultSettings() {
        ...DEFAULT_SETTINGS_BASE,
        base_model_path_mappings: {},
        download_path_templates: { ...DEFAULT_PATH_TEMPLATES },
+       priority_tags: { ...DEFAULT_PRIORITY_TAG_CONFIG },
    };
}

@@ -67,7 +70,7 @@ export const state = {
    modelname: true,
    tags: false,
    creator: false,
-   recursive: true,
+   recursive: getStorageItem(`${MODEL_TYPES.LORA}_recursiveSearch`, true),
},
filters: {
    baseModel: [],
@@ -116,7 +119,7 @@ export const state = {
    filename: true,
    modelname: true,
    creator: false,
-   recursive: true,
+   recursive: getStorageItem(`${MODEL_TYPES.CHECKPOINT}_recursiveSearch`, true),
},
filters: {
    baseModel: [],
@@ -144,7 +147,7 @@ export const state = {
    modelname: true,
    tags: false,
    creator: false,
-   recursive: true,
+   recursive: getStorageItem(`${MODEL_TYPES.EMBEDDING}_recursiveSearch`, true),
},
filters: {
    baseModel: [],
@@ -261,4 +264,4 @@ export function initPageState(pageType) {
        return getCurrentPageState();
    }
    return null;
}
}
@@ -194,10 +194,16 @@ export const BASE_MODEL_CATEGORIES = {
    ]
};

-// Preset tag suggestions
-export const PRESET_TAGS = [
+// Default priority tag entries for fallback suggestions and initial settings
+export const DEFAULT_PRIORITY_TAG_ENTRIES = [
    'character', 'concept', 'clothing',
    'realistic', 'anime', 'toon', 'furry', 'style',
-   'poses', 'background', 'vehicle', 'buildings',
-   'objects', 'animal'
+   'poses', 'background', 'tool', 'vehicle', 'buildings',
+   'objects', 'assets', 'animal', 'action'
];
+
+export const DEFAULT_PRIORITY_TAG_CONFIG = {
+   lora: DEFAULT_PRIORITY_TAG_ENTRIES.join(', '),
+   checkpoint: DEFAULT_PRIORITY_TAG_ENTRIES.join(', '),
+   embedding: DEFAULT_PRIORITY_TAG_ENTRIES.join(', ')
+};
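`DEFAULT_PRIORITY_TAG_CONFIG` stores one comma-separated string per model type. A consumer would split such a string back into an ordered tag list; `parsePriorityTags` below is a sketch under that assumption (the repository's actual parser is not shown in this diff):

```javascript
// Sketch only: parse a comma-separated priority tag string, such as the ones
// built in DEFAULT_PRIORITY_TAG_CONFIG, into an ordered, de-duplicated list.
const DEFAULT_PRIORITY_TAG_ENTRIES = [
    'character', 'concept', 'clothing',
    'realistic', 'anime', 'toon', 'furry', 'style',
    'poses', 'background', 'tool', 'vehicle', 'buildings',
    'objects', 'assets', 'animal', 'action'
];

const config = DEFAULT_PRIORITY_TAG_ENTRIES.join(', ');

function parsePriorityTags(raw) {
    const seen = new Set();
    const tags = [];
    for (const part of raw.split(',')) {
        const tag = part.trim().toLowerCase();
        if (tag && !seen.has(tag)) {
            seen.add(tag);
            tags.push(tag);
        }
    }
    return tags;
}
```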
@@ -1,8 +1,6 @@
import { modalManager } from '../managers/ModalManager.js';
import { getModelApiClient } from '../api/modelApiFactory.js';

-const apiClient = getModelApiClient();
-
let pendingDeletePath = null;
let pendingExcludePath = null;

@@ -27,7 +25,7 @@ export async function confirmDelete() {
    if (!pendingDeletePath) return;

    try {
-       await apiClient.deleteModel(pendingDeletePath);
+       await getModelApiClient().deleteModel(pendingDeletePath);

        closeDeleteModal();

@@ -72,7 +70,7 @@ export async function confirmExclude() {
    if (!pendingExcludePath) return;

    try {
-       await apiClient.excludeModel(pendingExcludePath);
+       await getModelApiClient().excludeModel(pendingExcludePath);

        closeExcludeModal();

@@ -82,4 +80,4 @@ export async function confirmExclude() {
    } catch (error) {
        console.error('Error excluding model:', error);
    }
}
}
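The last hunks replace a module-level `const apiClient = getModelApiClient()` with a fresh `getModelApiClient()` call at each use site. A plausible motivation, sketched below with stand-in objects (not repo code): if the factory's result depends on mutable state such as the current model type, a value cached at import time goes stale.

```javascript
// Illustrative sketch of the stale-singleton pitfall. The names and objects
// here are stand-ins; only getModelApiClient() mirrors the real code.
let currentType = 'lora';
const clients = {
    lora: { deleteModel: (p) => `lora:${p}` },
    checkpoint: { deleteModel: (p) => `checkpoint:${p}` },
};

function getModelApiClient() {
    return clients[currentType];
}

const cachedAtImport = getModelApiClient(); // frozen to the 'lora' client
currentType = 'checkpoint';                 // state changes after import

const staleResult = cachedAtImport.deleteModel('a.safetensors');      // still routed to 'lora'
const freshResult = getModelApiClient().deleteModel('a.safetensors'); // routed to 'checkpoint'
```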