Mirror of https://github.com/willmiao/ComfyUI-Lora-Manager.git (synced 2026-05-06 16:36:45 -03:00)

Compare commits: 154 commits, de3d0571f8...main

69 .agents/skills/lora-manager-runtime-context/SKILL.md (new file)

@@ -0,0 +1,69 @@

---
name: lora-manager-runtime-context
description: Inspect ComfyUI LoRA Manager runtime configuration and local diagnostic state. Use when debugging LoRA Manager issues that require locating or reading settings.json, active library paths, model metadata JSON sidecars, recipe metadata JSON files, example image folders, SQLite caches, symlink maps, download history, aria2 state, or other cache files under the LoRA Manager user config directory.
---

# LoRA Manager Runtime Context

## Core Rules

- Treat runtime state as local user data. Prefer read-only inspection unless the user explicitly asks for mutation.
- Never print secret-like settings values. Redact keys containing `key`, `token`, `secret`, `password`, `auth`, or `credential`, including `civitai_api_key` (see the sketch after this list).
- Resolve paths from the runtime configuration before guessing. In this environment the settings file is normally `/home/miao/.config/ComfyUI-LoRA-Manager/settings.json`, but portable settings can override this through the repository `settings.json`.
- Use the active library when selecting per-library caches and paths. Read `active_library` from settings; fall back to `default` if missing.
- Normalize and expand `~` before comparing paths. Symlinks are common in this repo.
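
A minimal redaction sketch for quick reference; it mirrors the `redact` helper in the bundled `inspect_runtime_context.py`:

```python
import re

SECRET_PATTERN = re.compile(r"(key|token|secret|password|auth|credential)", re.IGNORECASE)

def redact(value, key=""):
    # Replace any value whose key looks secret-like (e.g. civitai_api_key).
    if key and SECRET_PATTERN.search(key):
        return "<redacted>"
    if isinstance(value, dict):
        return {str(k): redact(v, str(k)) for k, v in value.items()}
    if isinstance(value, list):
        return [redact(item) for item in value]
    return value
```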

## Quick Start

Use the bundled helper for a safe first pass:

```bash
python .agents/skills/lora-manager-runtime-context/scripts/inspect_runtime_context.py summary
python .agents/skills/lora-manager-runtime-context/scripts/inspect_runtime_context.py caches
```

The script redacts sensitive settings, opens SQLite databases read-only, and reports inaccessible or locked databases as warnings.

For focused checks:

```bash
python .agents/skills/lora-manager-runtime-context/scripts/inspect_runtime_context.py recipes
python .agents/skills/lora-manager-runtime-context/scripts/inspect_runtime_context.py model --path /path/to/model.safetensors
python .agents/skills/lora-manager-runtime-context/scripts/inspect_runtime_context.py sqlite --db /path/to/cache.sqlite --limit 3
```

## Runtime Path Rules

- Settings directory: use `py/utils/settings_paths.py`. The default platform path is `platformdirs.user_config_dir("ComfyUI-LoRA-Manager", appauthor=False)`.
- Settings file: `<settings_dir>/settings.json`.
- Cache root: `<settings_dir>/cache`.
- Canonical cache files:
  - Model cache: `cache/model/<active_library>.sqlite`.
  - Recipe cache: `cache/recipe/<active_library>.sqlite`.
  - Model update cache: `cache/model_update/<active_library>.sqlite`.
  - Recipe FTS: `cache/fts/recipe_fts.sqlite`.
  - Tag FTS: `cache/fts/tag_fts.sqlite`.
  - Symlink map: `cache/symlink/symlink_map.json`.
  - Download history: `cache/download_history/downloaded_versions.sqlite`.
  - aria2 state: `cache/aria2/downloads.json`.
- Legacy cache locations may exist; prefer the canonical paths unless diagnosing migrations.
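
A short sketch of resolving these canonical paths on Linux, assuming the default (non-portable) settings location; library names are sanitized for file names as in the bundled script:

```python
import json
import os
import re
from pathlib import Path

# Default (non-portable) settings location on Linux; portable settings can override this.
settings_dir = Path(os.environ.get("XDG_CONFIG_HOME", Path.home() / ".config")) / "ComfyUI-LoRA-Manager"
settings = json.loads((settings_dir / "settings.json").read_text(encoding="utf-8"))

# Active library with "default" fallback, sanitized for use in file names.
library = re.sub(r"[^A-Za-z0-9_.-]", "_", settings.get("active_library") or "default")

cache = settings_dir / "cache"
cache_paths = {
    "model": cache / "model" / f"{library}.sqlite",
    "recipe": cache / "recipe" / f"{library}.sqlite",
    "model_update": cache / "model_update" / f"{library}.sqlite",
    "recipe_fts": cache / "fts" / "recipe_fts.sqlite",
    "tag_fts": cache / "fts" / "tag_fts.sqlite",
    "symlink_map": cache / "symlink" / "symlink_map.json",
    "download_history": cache / "download_history" / "downloaded_versions.sqlite",
    "aria2": cache / "aria2" / "downloads.json",
}
for name, path in cache_paths.items():
    print(name, path, path.exists())
```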

## Data Location Rules

- Model roots come from `settings.folder_paths` and the active library payload under `settings.libraries[active_library]`.
- Model metadata JSON sidecars live next to the model file as `<model basename>.metadata.json`.
- The recipes root is `settings.recipes_path` when it is a non-empty string. If it is empty, use the first configured LoRA root plus `/recipes`.
- Recipe JSON files are named `*.recipe.json` under the recipes root and may be nested in folders.
- The example image root is `settings.example_images_path`.
- If multiple libraries are configured, example images are stored under `<example_images_path>/<sanitized_library>/<sha256>/`; otherwise they are under `<example_images_path>/<sha256>/`.
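
A small illustration of the sidecar and recipes-root rules; all paths below are placeholders:

```python
from pathlib import Path

# Metadata sidecar sits next to the model file and shares its basename.
model = Path("/path/to/loras/example.safetensors")         # placeholder model path
sidecar = model.with_suffix(".metadata.json")               # -> /path/to/loras/example.metadata.json

# Recipes root: settings.recipes_path if non-empty, else first LoRA root plus /recipes.
recipes_path = ""                                           # value of settings.recipes_path
first_lora_root = Path("/path/to/loras")                    # first configured LoRA root (placeholder)
recipes_root = Path(recipes_path) if recipes_path.strip() else first_lora_root / "recipes"
recipe_files = sorted(recipes_root.rglob("*.recipe.json")) if recipes_root.exists() else []
```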

## Useful Cache Tables

- Model cache: `models`, `model_tags`, `hash_index`, `excluded_models`.
- Recipe cache: `recipes`, `cache_metadata`.
- Model update cache: `model_update_status`, `model_update_versions`.
- Tag FTS cache: `tags`, `fts_metadata`, plus FTS internal tables.
- Recipe FTS cache: `recipe_rowid`, `fts_metadata`, plus FTS internal tables.
- Download history: `downloaded_model_versions`.

Prefer querying only counts, schema, and a few sample rows unless the user asks for full output.
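
For manual checks outside the helper, a read-only query sketch consistent with that rule; the database path is only an example for this environment:

```python
import sqlite3

db = "/home/miao/.config/ComfyUI-LoRA-Manager/cache/model/default.sqlite"
conn = sqlite3.connect(f"file:{db}?mode=ro", uri=True)      # read-only, never mutates the cache
try:
    for (table,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
    ):
        count = conn.execute(f'SELECT COUNT(*) FROM "{table}"').fetchone()[0]
        print(f"{table}: {count} rows")
finally:
    conn.close()
```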

@@ -0,0 +1,4 @@

interface:
  display_name: "LoRA Manager Runtime Context"
  short_description: "Inspect LoRA Manager runtime state"
  default_prompt: "Use $lora-manager-runtime-context to inspect LoRA Manager settings, metadata paths, and caches for debugging."

381 .agents/skills/lora-manager-runtime-context/scripts/inspect_runtime_context.py (new executable file)

@@ -0,0 +1,381 @@
|
||||
#!/usr/bin/env python3
|
||||
from __future__ import annotations
|
||||
|
||||
import argparse
|
||||
import json
|
||||
import os
|
||||
import re
|
||||
import shutil
|
||||
import sqlite3
|
||||
import sys
|
||||
import tempfile
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
SECRET_PATTERN = re.compile(r"(key|token|secret|password|auth|credential)", re.IGNORECASE)
|
||||
APP_NAME = "ComfyUI-LoRA-Manager"
|
||||
CACHE_SQLITE = {
|
||||
"model": ("model", "{library}.sqlite"),
|
||||
"recipe": ("recipe", "{library}.sqlite"),
|
||||
"model_update": ("model_update", "{library}.sqlite"),
|
||||
"recipe_fts": ("fts", "recipe_fts.sqlite"),
|
||||
"tag_fts": ("fts", "tag_fts.sqlite"),
|
||||
"download_history": ("download_history", "downloaded_versions.sqlite"),
|
||||
}
|
||||
CACHE_JSON = {
|
||||
"symlink": ("symlink", "symlink_map.json"),
|
||||
"aria2": ("aria2", "downloads.json"),
|
||||
}
|
||||
|
||||
|
||||
def main() -> int:
|
||||
parser = argparse.ArgumentParser(description="Inspect LoRA Manager runtime state read-only.")
|
||||
subparsers = parser.add_subparsers(dest="command", required=True)
|
||||
|
||||
subparsers.add_parser("summary", help="Print redacted settings and resolved paths.")
|
||||
subparsers.add_parser("caches", help="Print cache paths and SQLite table summaries.")
|
||||
subparsers.add_parser("recipes", help="Print resolved recipes root and recipe JSON count.")
|
||||
|
||||
model_parser = subparsers.add_parser("model", help="Inspect a model metadata sidecar path.")
|
||||
model_parser.add_argument("--path", required=True, help="Path to a model file or metadata JSON file.")
|
||||
|
||||
sqlite_parser = subparsers.add_parser("sqlite", help="Inspect a SQLite database read-only.")
|
||||
sqlite_parser.add_argument("--db", required=True, help="Path to the SQLite database.")
|
||||
sqlite_parser.add_argument("--limit", type=int, default=3, help="Rows to sample from each user table.")
|
||||
|
||||
args = parser.parse_args()
|
||||
context = build_context()
|
||||
|
||||
if args.command == "summary":
|
||||
print_json(summary_payload(context))
|
||||
elif args.command == "caches":
|
||||
print_json(caches_payload(context))
|
||||
elif args.command == "recipes":
|
||||
print_json(recipes_payload(context))
|
||||
elif args.command == "model":
|
||||
print_json(model_payload(args.path))
|
||||
elif args.command == "sqlite":
|
||||
print_json(sqlite_payload(Path(args.db).expanduser(), args.limit))
|
||||
return 0
|
||||
|
||||
|
||||
def build_context() -> dict[str, Any]:
|
||||
settings_path = resolve_settings_path()
|
||||
settings = load_json(settings_path)
|
||||
settings_dir = settings_path.parent
|
||||
active_library = settings.get("active_library") or "default"
|
||||
safe_library = sanitize_library_name(str(active_library))
|
||||
cache_root = settings_dir / "cache"
|
||||
return {
|
||||
"settings_path": str(settings_path),
|
||||
"settings_dir": str(settings_dir),
|
||||
"settings": settings,
|
||||
"active_library": active_library,
|
||||
"safe_library": safe_library,
|
||||
"cache_root": str(cache_root),
|
||||
"cache_paths": resolve_cache_paths(cache_root, safe_library),
|
||||
}
|
||||
|
||||
|
||||
def resolve_settings_path() -> Path:
|
||||
repo_root = find_repo_root()
|
||||
portable = repo_root / "settings.json"
|
||||
if portable.exists():
|
||||
payload = load_json(portable)
|
||||
if isinstance(payload, dict) and payload.get("use_portable_settings") is True:
|
||||
return portable
|
||||
|
||||
config_home = os.environ.get("XDG_CONFIG_HOME")
|
||||
if config_home:
|
||||
return Path(config_home).expanduser() / APP_NAME / "settings.json"
|
||||
return Path.home() / ".config" / APP_NAME / "settings.json"
|
||||
|
||||
|
||||
def find_repo_root() -> Path:
|
||||
current = Path(__file__).resolve()
|
||||
for parent in current.parents:
|
||||
if (parent / "py").is_dir() and (parent / "standalone.py").exists():
|
||||
return parent
|
||||
return Path.cwd()
|
||||
|
||||
|
||||
def load_json(path: Path) -> dict[str, Any]:
|
||||
try:
|
||||
with path.open("r", encoding="utf-8") as handle:
|
||||
payload = json.load(handle)
|
||||
except FileNotFoundError:
|
||||
return {}
|
||||
except json.JSONDecodeError as exc:
|
||||
return {"_error": f"invalid JSON: {exc}"}
|
||||
except OSError as exc:
|
||||
return {"_error": f"unreadable: {exc}"}
|
||||
return payload if isinstance(payload, dict) else {"_error": "JSON root is not an object"}
|
||||
|
||||
|
||||
def resolve_cache_paths(cache_root: Path, library: str) -> dict[str, str]:
|
||||
paths: dict[str, str] = {}
|
||||
for name, (subdir, filename) in CACHE_SQLITE.items():
|
||||
paths[name] = str(cache_root / subdir / filename.format(library=library))
|
||||
for name, (subdir, filename) in CACHE_JSON.items():
|
||||
paths[name] = str(cache_root / subdir / filename)
|
||||
return paths
|
||||
|
||||
|
||||
def summary_payload(context: dict[str, Any]) -> dict[str, Any]:
|
||||
settings = context["settings"]
|
||||
return {
|
||||
"settings_path": context["settings_path"],
|
||||
"settings_dir": context["settings_dir"],
|
||||
"active_library": context["active_library"],
|
||||
"settings": redact(settings),
|
||||
"model_roots": model_roots(settings, context["active_library"]),
|
||||
"recipes_root": str(resolve_recipes_root(settings, context["active_library"]) or ""),
|
||||
"example_images": example_images_payload(settings, context["active_library"]),
|
||||
"cache_root": context["cache_root"],
|
||||
"cache_paths": context["cache_paths"],
|
||||
}
|
||||
|
||||
|
||||
def caches_payload(context: dict[str, Any]) -> dict[str, Any]:
|
||||
caches: dict[str, Any] = {}
|
||||
for name, path_string in context["cache_paths"].items():
|
||||
path = Path(path_string)
|
||||
item: dict[str, Any] = {
|
||||
"path": str(path),
|
||||
"exists": path.exists(),
|
||||
"size": path.stat().st_size if path.exists() else None,
|
||||
}
|
||||
if path.suffix == ".sqlite":
|
||||
item["sqlite"] = sqlite_payload(path, limit=0)
|
||||
elif path.suffix == ".json":
|
||||
item["json"] = json_file_summary(path)
|
||||
caches[name] = item
|
||||
return {"active_library": context["active_library"], "caches": caches}
|
||||
|
||||
|
||||
def recipes_payload(context: dict[str, Any]) -> dict[str, Any]:
|
||||
root = resolve_recipes_root(context["settings"], context["active_library"])
|
||||
files: list[str] = []
|
||||
if root and root.exists():
|
||||
files = [str(path) for path in sorted(root.rglob("*.recipe.json"))[:20]]
|
||||
return {
|
||||
"recipes_root": str(root or ""),
|
||||
"exists": bool(root and root.exists()),
|
||||
"recipe_json_count": count_recipe_files(root),
|
||||
"sample_recipe_json": files,
|
||||
"recipe_cache": context["cache_paths"].get("recipe"),
|
||||
}
|
||||
|
||||
|
||||
def model_payload(raw_path: str) -> dict[str, Any]:
|
||||
path = Path(raw_path).expanduser()
|
||||
metadata_path = path if path.name.endswith(".metadata.json") else path.with_suffix(".metadata.json")
|
||||
payload = {
|
||||
"input_path": str(path),
|
||||
"metadata_path": str(metadata_path),
|
||||
"model_exists": path.exists(),
|
||||
"metadata_exists": metadata_path.exists(),
|
||||
}
|
||||
if metadata_path.exists():
|
||||
data = load_json(metadata_path)
|
||||
payload["metadata_summary"] = redact(summarize_value(data))
|
||||
return payload
|
||||
|
||||
|
||||
def sqlite_payload(path: Path, limit: int = 3, allow_copy: bool = True) -> dict[str, Any]:
|
||||
result: dict[str, Any] = {"path": str(path), "exists": path.exists(), "tables": {}}
|
||||
if not path.exists():
|
||||
return result
|
||||
try:
|
||||
conn = connect_sqlite_readonly(path)
|
||||
except sqlite3.Error as exc:
|
||||
result["error"] = str(exc)
|
||||
return result
|
||||
try:
|
||||
table_rows = conn.execute(
|
||||
"SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
|
||||
).fetchall()
|
||||
for table_row in table_rows:
|
||||
table = table_row["name"]
|
||||
columns = [
|
||||
row["name"]
|
||||
for row in conn.execute(f"PRAGMA table_info({quote_identifier(table)})").fetchall()
|
||||
]
|
||||
table_info: dict[str, Any] = {"columns": columns}
|
||||
try:
|
||||
table_info["count"] = conn.execute(
|
||||
f"SELECT COUNT(*) FROM {quote_identifier(table)}"
|
||||
).fetchone()[0]
|
||||
except sqlite3.Error as exc:
|
||||
table_info["count_error"] = str(exc)
|
||||
if limit > 0 and columns and not is_internal_sqlite_table(table):
|
||||
try:
|
||||
rows = conn.execute(
|
||||
f"SELECT * FROM {quote_identifier(table)} LIMIT ?", (limit,)
|
||||
).fetchall()
|
||||
table_info["sample"] = [redact(dict(row)) for row in rows]
|
||||
except sqlite3.Error as exc:
|
||||
table_info["sample_error"] = str(exc)
|
||||
result["tables"][table] = table_info
|
||||
except sqlite3.Error as exc:
|
||||
fallback = sqlite_copy_payload(path, limit, str(exc)) if allow_copy else None
|
||||
if fallback is not None:
|
||||
result.update(fallback)
|
||||
else:
|
||||
result["error"] = str(exc)
|
||||
finally:
|
||||
conn.close()
|
||||
return result
|
||||
|
||||
|
||||
def connect_sqlite_readonly(path: Path) -> sqlite3.Connection:
|
||||
errors: list[str] = []
|
||||
for query in ("mode=ro", "mode=ro&immutable=1"):
|
||||
try:
|
||||
conn = sqlite3.connect(f"file:{path}?{query}", uri=True)
|
||||
conn.row_factory = sqlite3.Row
|
||||
return conn
|
||||
except sqlite3.Error as exc:
|
||||
errors.append(f"{query}: {exc}")
|
||||
raise sqlite3.OperationalError("; ".join(errors))
|
||||
|
||||
|
||||
def sqlite_copy_payload(path: Path, limit: int, original_error: str) -> dict[str, Any] | None:
|
||||
try:
|
||||
with tempfile.TemporaryDirectory(prefix="lm-cache-inspect-") as temp_dir:
|
||||
copy_path = Path(temp_dir) / path.name
|
||||
shutil.copy2(path, copy_path)
|
||||
payload = sqlite_payload(copy_path, limit, allow_copy=False)
|
||||
payload["path"] = str(path)
|
||||
payload["inspected_copy"] = True
|
||||
payload["original_error"] = original_error
|
||||
return payload
|
||||
except Exception:
|
||||
return None
|
||||
|
||||
|
||||
def json_file_summary(path: Path) -> dict[str, Any]:
|
||||
if not path.exists():
|
||||
return {"exists": False}
|
||||
data = load_json(path)
|
||||
return {"exists": True, "summary": redact(summarize_value(data))}
|
||||
|
||||
|
||||
def model_roots(settings: dict[str, Any], active_library: str) -> dict[str, list[str]]:
|
||||
roots: dict[str, list[str]] = {}
|
||||
sources = [settings]
|
||||
library = settings.get("libraries", {}).get(active_library)
|
||||
if isinstance(library, dict):
|
||||
sources.insert(0, library)
|
||||
for source in sources:
|
||||
folder_paths = source.get("folder_paths")
|
||||
if isinstance(folder_paths, dict):
|
||||
for key, value in folder_paths.items():
|
||||
roots.setdefault(key, []).extend(normalize_path_list(value))
|
||||
for default_key, folder_key in (
|
||||
("default_lora_root", "loras"),
|
||||
("default_checkpoint_root", "checkpoints"),
|
||||
("default_embedding_root", "embeddings"),
|
||||
("default_unet_root", "unet"),
|
||||
):
|
||||
value = settings.get(default_key)
|
||||
if isinstance(value, str) and value:
|
||||
roots.setdefault(folder_key, []).append(expand_path(value))
|
||||
return {key: dedupe(values) for key, values in roots.items()}
|
||||
|
||||
|
||||
def resolve_recipes_root(settings: dict[str, Any], active_library: str) -> Path | None:
|
||||
recipes_path = settings.get("recipes_path")
|
||||
library = settings.get("libraries", {}).get(active_library)
|
||||
if isinstance(library, dict) and isinstance(library.get("recipes_path"), str):
|
||||
recipes_path = library["recipes_path"] or recipes_path
|
||||
if isinstance(recipes_path, str) and recipes_path.strip():
|
||||
return Path(expand_path(recipes_path.strip()))
|
||||
lora_roots = model_roots(settings, active_library).get("loras") or []
|
||||
return Path(lora_roots[0]) / "recipes" if lora_roots else None
|
||||
|
||||
|
||||
def example_images_payload(settings: dict[str, Any], active_library: str) -> dict[str, Any]:
|
||||
root = settings.get("example_images_path") or ""
|
||||
libraries = settings.get("libraries")
|
||||
library_count = len(libraries) if isinstance(libraries, dict) else 0
|
||||
scoped = library_count > 1
|
||||
root_path = Path(expand_path(root)) if isinstance(root, str) and root else None
|
||||
library_root = root_path / sanitize_library_name(active_library) if root_path and scoped else root_path
|
||||
return {
|
||||
"root": str(root_path or ""),
|
||||
"uses_library_scoped_folders": scoped,
|
||||
"library_root": str(library_root or ""),
|
||||
}
|
||||
|
||||
|
||||
def count_recipe_files(root: Path | None) -> int:
|
||||
if not root or not root.exists():
|
||||
return 0
|
||||
return sum(1 for _ in root.rglob("*.recipe.json"))
|
||||
|
||||
|
||||
def normalize_path_list(value: Any) -> list[str]:
|
||||
if isinstance(value, str):
|
||||
return [expand_path(value)] if value else []
|
||||
if isinstance(value, list):
|
||||
return [expand_path(item) for item in value if isinstance(item, str) and item]
|
||||
return []
|
||||
|
||||
|
||||
def expand_path(value: str) -> str:
|
||||
return str(Path(value).expanduser().resolve(strict=False))
|
||||
|
||||
|
||||
def sanitize_library_name(name: str) -> str:
|
||||
safe = re.sub(r"[^A-Za-z0-9_.-]", "_", name or "default")
|
||||
return safe or "default"
|
||||
|
||||
|
||||
def dedupe(values: list[str]) -> list[str]:
|
||||
seen: set[str] = set()
|
||||
result: list[str] = []
|
||||
for value in values:
|
||||
if value not in seen:
|
||||
result.append(value)
|
||||
seen.add(value)
|
||||
return result
|
||||
|
||||
|
||||
def redact(value: Any, key: str = "") -> Any:
|
||||
if key and SECRET_PATTERN.search(key):
|
||||
return "<redacted>"
|
||||
if isinstance(value, dict):
|
||||
return {str(k): redact(v, str(k)) for k, v in value.items()}
|
||||
if isinstance(value, list):
|
||||
return [redact(item) for item in value]
|
||||
return value
|
||||
|
||||
|
||||
def summarize_value(value: Any) -> Any:
|
||||
if isinstance(value, dict):
|
||||
return {key: summarize_value(item) for key, item in value.items()}
|
||||
if isinstance(value, list):
|
||||
return {
|
||||
"type": "array",
|
||||
"length": len(value),
|
||||
"first": summarize_value(value[0]) if value else None,
|
||||
}
|
||||
return value
|
||||
|
||||
|
||||
def quote_identifier(identifier: str) -> str:
|
||||
return '"' + identifier.replace('"', '""') + '"'
|
||||
|
||||
|
||||
def is_internal_sqlite_table(table: str) -> bool:
|
||||
return table.startswith("sqlite_") or table.endswith(("_data", "_idx", "_docsize", "_config", "_content"))
|
||||
|
||||
|
||||
def print_json(payload: Any) -> None:
|
||||
json.dump(payload, sys.stdout, indent=2, ensure_ascii=False)
|
||||
sys.stdout.write("\n")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
raise SystemExit(main())
|
||||

1 .gitignore (vendored)

@@ -15,6 +15,7 @@ model_cache/

# agent
.opencode/
.claude/
.codex

# Vue widgets development cache (but keep build output)
vue-widgets/node_modules/

@@ -138,6 +138,13 @@ npm run test:coverage # Generate coverage report

- Run `python scripts/sync_translation_keys.py` after adding UI strings to `locales/en.json`
- Symlinks require normalized paths

## Git / Commit Messages

- Follow the style of recent repository commits when writing commit messages
- Prefer the repo's existing `feat(...)`, `fix(...)`, `chore:` style where applicable
- If the user has provided a GitHub issue link or issue ID for the task, mention that issue in the commit message, for example `(#871)`
- When unrelated local changes exist, stage and commit only the files relevant to the requested task

## Frontend UI Architecture

### 1. Standalone Web UI

@@ -7,6 +7,7 @@ try: # pragma: no cover - import fallback for pytest collection
    from .py.nodes.prompt import PromptLM
    from .py.nodes.text import TextLM
    from .py.nodes.lora_stacker import LoraStackerLM
    from .py.nodes.lora_stack_combiner import LoraStackCombinerLM
    from .py.nodes.save_image import SaveImageLM
    from .py.nodes.debug_metadata import DebugMetadataLM
    from .py.nodes.wanvideo_lora_select import WanVideoLoraSelectLM

@@ -39,6 +40,9 @@ except (
        "py.nodes.trigger_word_toggle"
    ).TriggerWordToggleLM
    LoraStackerLM = importlib.import_module("py.nodes.lora_stacker").LoraStackerLM
    LoraStackCombinerLM = importlib.import_module(
        "py.nodes.lora_stack_combiner"
    ).LoraStackCombinerLM
    SaveImageLM = importlib.import_module("py.nodes.save_image").SaveImageLM
    DebugMetadataLM = importlib.import_module("py.nodes.debug_metadata").DebugMetadataLM
    WanVideoLoraSelectLM = importlib.import_module(

@@ -63,6 +67,7 @@ NODE_CLASS_MAPPINGS = {
    UNETLoaderLM.NAME: UNETLoaderLM,
    TriggerWordToggleLM.NAME: TriggerWordToggleLM,
    LoraStackerLM.NAME: LoraStackerLM,
    LoraStackCombinerLM.NAME: LoraStackCombinerLM,
    SaveImageLM.NAME: SaveImageLM,
    DebugMetadataLM.NAME: DebugMetadataLM,
    WanVideoLoraSelectLM.NAME: WanVideoLoraSelectLM,
@@ -9,38 +9,42 @@
|
||||
"Insomnia Art Designs",
|
||||
"megakirbs",
|
||||
"Brennok",
|
||||
"wackop",
|
||||
"2018cfh",
|
||||
"Takkan",
|
||||
"W+K+White",
|
||||
"wackop",
|
||||
"Phil",
|
||||
"Carl G.",
|
||||
"Arlecchino Shion",
|
||||
"stone9k",
|
||||
"$MetaSamsara",
|
||||
"itismyelement",
|
||||
"Gingko Biloba",
|
||||
"onesecondinosaur",
|
||||
"Carl G.",
|
||||
"Takkan",
|
||||
"Charles Blakemore",
|
||||
"Rob Williams",
|
||||
"Rosenthal",
|
||||
"Francisco Tatis",
|
||||
"Tobi_Swagg",
|
||||
"Andrew Wilson",
|
||||
"Greybush",
|
||||
"Gooohokrbe",
|
||||
"Ricky Carter",
|
||||
"JongWon Han",
|
||||
"OldBones",
|
||||
"VantAI",
|
||||
"runte3221",
|
||||
"Illrigger",
|
||||
"FreelancerZ",
|
||||
"Julian V",
|
||||
"Edgar Tejeda",
|
||||
"Birdy",
|
||||
"Jorge Hussni",
|
||||
"Liam MacDougal",
|
||||
"Fraser Cross",
|
||||
"Polymorphic Indeterminate",
|
||||
"Marc Whiffen",
|
||||
"Kiba",
|
||||
"Jorge Hussni",
|
||||
"Reno Lam",
|
||||
"Birdy",
|
||||
"Skalabananen",
|
||||
"esthe",
|
||||
"Kiba",
|
||||
"Reno Lam",
|
||||
"Mozzel",
|
||||
"sig",
|
||||
"Christian Byrne",
|
||||
"DM",
|
||||
@@ -48,13 +52,14 @@
|
||||
"Estragon",
|
||||
"J\\B/ 8r0wns0n",
|
||||
"Snaggwort",
|
||||
"Arlecchino Shion",
|
||||
"ClockDaemon",
|
||||
"Jonathan Ross",
|
||||
"KD",
|
||||
"Omnidex",
|
||||
"Nazono_hito",
|
||||
"Tyler Trebuchon",
|
||||
"Release Cabrakan",
|
||||
"confiscated Zyra",
|
||||
"contrite831",
|
||||
"SG",
|
||||
"carozzz",
|
||||
"James Dooley",
|
||||
@@ -62,77 +67,61 @@
|
||||
"Buzzard",
|
||||
"jmack",
|
||||
"Adam Shaw",
|
||||
"Tee Gee",
|
||||
"Mark Corneglio",
|
||||
"SarcasticHashtag",
|
||||
"Anthony Rizzo",
|
||||
"tarek helmi",
|
||||
"Cosmosis",
|
||||
"iamresist",
|
||||
"Gooohokrbe",
|
||||
"RedrockVP",
|
||||
"Wolffen",
|
||||
"FloPro4Sho",
|
||||
"James Todd",
|
||||
"OldBones",
|
||||
"Steven Pfeiffer",
|
||||
"Tim",
|
||||
"Timmy",
|
||||
"Johnny",
|
||||
"Lisster",
|
||||
"Michael Wong",
|
||||
"Illrigger",
|
||||
"whudunit",
|
||||
"Tom Corrigan",
|
||||
"dl0901dm",
|
||||
"JackieWang",
|
||||
"fnkylove",
|
||||
"Steven Owens",
|
||||
"Yushio",
|
||||
"Vik71it",
|
||||
"lh qwe",
|
||||
"Echo",
|
||||
"Lilleman",
|
||||
"Robert Stacey",
|
||||
"PM",
|
||||
"Todd Keck",
|
||||
"Briton Heilbrun",
|
||||
"Mozzel",
|
||||
"Gingko Biloba",
|
||||
"Felipe dos Santos",
|
||||
"Penfore",
|
||||
"Aleksander Wujczyk",
|
||||
"BadassArabianMofo",
|
||||
"Sterilized",
|
||||
"Pascal Dahle",
|
||||
"Markus",
|
||||
"quarz",
|
||||
"Penfore",
|
||||
"Greg",
|
||||
"Douglas Gaspar",
|
||||
"JSST",
|
||||
"AlexDuKaNa",
|
||||
"George",
|
||||
"lmsupporter",
|
||||
"Phil",
|
||||
"Charles Blakemore",
|
||||
"IamAyam",
|
||||
"zounic",
|
||||
"wfpearl",
|
||||
"Rob Williams",
|
||||
"Baekdoosixt",
|
||||
"Jonathan Ross",
|
||||
"Jack B Nimble",
|
||||
"Nazono_hito",
|
||||
"Melville Parrish",
|
||||
"daniel dove",
|
||||
"Lustre",
|
||||
"JW Sin",
|
||||
"contrite831",
|
||||
"Alex",
|
||||
"bh",
|
||||
"Marlon Daniels",
|
||||
"Starkselle",
|
||||
"Aaron Bleuer",
|
||||
"LacesOut!",
|
||||
"Graham Colehour",
|
||||
"greebles",
|
||||
"Cosmosis",
|
||||
"M Postkasse",
|
||||
"Tomohiro Baba",
|
||||
"David Ortega",
|
||||
"FloPro4Sho",
|
||||
"ASLPro3D",
|
||||
"Jacob Hoehler",
|
||||
"FinalyFree",
|
||||
@@ -145,112 +134,97 @@
|
||||
"Big Red",
|
||||
"Jimmy Ledbetter",
|
||||
"Luc Job",
|
||||
"dl0901dm",
|
||||
"Philip Hempel",
|
||||
"corde",
|
||||
"Nick Walker",
|
||||
"Julian V",
|
||||
"Steven Owens",
|
||||
"Bishoujoker",
|
||||
"conner",
|
||||
"aai",
|
||||
"Yaboi",
|
||||
"Tori",
|
||||
"wildnut",
|
||||
"Princess Bright Eyes",
|
||||
"Damon Cunliffe",
|
||||
"CryptoTraderJK",
|
||||
"Davaitamin",
|
||||
"AbstractAss",
|
||||
"ViperC",
|
||||
"Aleksander Wujczyk",
|
||||
"AM Kuro",
|
||||
"jean jahren",
|
||||
"AM Kuro",
|
||||
"ViperC",
|
||||
"Ran C",
|
||||
"tedcor",
|
||||
"S Sang",
|
||||
"Sangheili460",
|
||||
"MagnaInsomnia",
|
||||
"Akira_HentAI",
|
||||
"Karl P.",
|
||||
"Akira_HentAI",
|
||||
"Gordon Cole",
|
||||
"yuxz69",
|
||||
"MadSpin",
|
||||
"esthe",
|
||||
"andrew.tappan",
|
||||
"dw",
|
||||
"N/A",
|
||||
"The Spawn",
|
||||
"graysock",
|
||||
"Pozadine1",
|
||||
"Greenmoustache",
|
||||
"zounic",
|
||||
"Gamalonia",
|
||||
"fancypants",
|
||||
"Vir",
|
||||
"IamAyam",
|
||||
"Eldithor",
|
||||
"Joboshy",
|
||||
"Digital",
|
||||
"JaxMax",
|
||||
"takyamtom",
|
||||
"Bohemian Corporal",
|
||||
"奚明 刘",
|
||||
"Dan",
|
||||
"Seth Christensen",
|
||||
"confiscated Zyra",
|
||||
"Jwk0205",
|
||||
"Bro Xie",
|
||||
"Draven T",
|
||||
"yer fey",
|
||||
"batblue",
|
||||
"carey6409",
|
||||
"Olive",
|
||||
"太郎 ゲーム",
|
||||
"Tee Gee",
|
||||
"Some Guy Named Barry",
|
||||
"jinxedx",
|
||||
"Aquatic Coffee",
|
||||
"tarek helmi",
|
||||
"Max Marklund",
|
||||
"AELOX",
|
||||
"Dankin",
|
||||
"Nicfit23",
|
||||
"Noora",
|
||||
"ethanfel",
|
||||
"wamekukyouzin",
|
||||
"drum matthieu",
|
||||
"Dogmaster",
|
||||
"Matt Wenzel",
|
||||
"Mattssn",
|
||||
"Frank Nitty",
|
||||
"John Saveas",
|
||||
"Focuschannel",
|
||||
"Pronredn",
|
||||
"Christopher Michel",
|
||||
"Serge Bekenkamp",
|
||||
"DougPeterson",
|
||||
"LeoZero",
|
||||
"Antonio Pontes",
|
||||
"ApathyJones",
|
||||
"nahinahi9",
|
||||
"Anthony Faxlandez",
|
||||
"lh qwe",
|
||||
"Kevin John Duck",
|
||||
"conner",
|
||||
"Dustin Chen",
|
||||
"dan",
|
||||
"Blackfish95",
|
||||
"Mouthlessman",
|
||||
"Steam Steam",
|
||||
"Princess Bright Eyes",
|
||||
"Paul Kroll",
|
||||
"AbstractAss",
|
||||
"otaku fra",
|
||||
"semicolon drainpipe",
|
||||
"Thesharingbrother",
|
||||
"Fotek Design",
|
||||
"Felipe dos Santos",
|
||||
"Bas Imagineer",
|
||||
"Pat Hen",
|
||||
"ResidentDeviant",
|
||||
"Markus",
|
||||
"MiraiKuriyamaSy",
|
||||
"Adam Taylor",
|
||||
"JC",
|
||||
"Douglas Gaspar",
|
||||
"Weird_With_A_Beard",
|
||||
"Prompt Pirate",
|
||||
"Pozadine1",
|
||||
"uwutismxd",
|
||||
"AlexDuKaNa",
|
||||
"George",
|
||||
"dw",
|
||||
"Qarob",
|
||||
"AIGooner",
|
||||
"inbijiburu",
|
||||
"decoy",
|
||||
"Luc",
|
||||
"ProtonPrince",
|
||||
"DiffDuck",
|
||||
"elu3199",
|
||||
"Nick “Loadstone” D",
|
||||
"Hasturkun",
|
||||
"Jon Sandman",
|
||||
"Ubivis",
|
||||
@@ -260,51 +234,43 @@
|
||||
"mr_dinosaur",
|
||||
"Tyrswood",
|
||||
"linnfrey",
|
||||
"zenobeus",
|
||||
"Jackthemind",
|
||||
"Stryker",
|
||||
"Pkrsky",
|
||||
"raf8osz",
|
||||
"blikkies",
|
||||
"奚明 刘",
|
||||
"Josef Lanzl",
|
||||
"Nerezza",
|
||||
"Griffin Dahlberg",
|
||||
"준희 김",
|
||||
"Error_Rule34_Not_found",
|
||||
"Gerald Welly",
|
||||
"Shock Shockor",
|
||||
"Roslynd",
|
||||
"Geolog",
|
||||
"Goldwaters",
|
||||
"Neco28",
|
||||
"Zude",
|
||||
"Tomohiro Baba",
|
||||
"David Ortega",
|
||||
"Noora",
|
||||
"Cristian Vazquez",
|
||||
"Kyler",
|
||||
"Mattssn",
|
||||
"Magic Noob",
|
||||
"aRtFuL_DodGeR",
|
||||
"X",
|
||||
"DougPeterson",
|
||||
"Jeff",
|
||||
"Bruce",
|
||||
"CrimsonDX",
|
||||
"Kevin John Duck",
|
||||
"Kevin Christopher",
|
||||
"Ouro Boros",
|
||||
"DarkSunset",
|
||||
"Chad Idk",
|
||||
"Yaboi",
|
||||
"dd",
|
||||
"Billy Gladky",
|
||||
"Probis",
|
||||
"shrshpp",
|
||||
"Steam Steam",
|
||||
"CryptoTraderJK",
|
||||
"Davaitamin",
|
||||
"Dušan Ryban",
|
||||
"ItsGeneralButtNaked",
|
||||
"tedcor",
|
||||
"Fotek Design",
|
||||
"sjon kreutz",
|
||||
"Nimess",
|
||||
"John Statham",
|
||||
"Youguang",
|
||||
"Nihongasuki",
|
||||
"MadSpin",
|
||||
"Metryman55",
|
||||
"andrewzpong",
|
||||
"FrxzenSnxw",
|
||||
"BossGame",
|
||||
"inbijiburu",
|
||||
"decoy",
|
||||
"Nick “Loadstone” D",
|
||||
"Ray Wing",
|
||||
"Ranzitho",
|
||||
"Gus",
|
||||
@@ -313,10 +279,10 @@
|
||||
"David LaVallee",
|
||||
"ae",
|
||||
"Tr4shP4nda",
|
||||
"Gamalonia",
|
||||
"WRL_SPR",
|
||||
"capn",
|
||||
"Joseph",
|
||||
"lrdchs",
|
||||
"Mirko Katzula",
|
||||
"dan",
|
||||
"Piccio08",
|
||||
@@ -327,13 +293,14 @@
|
||||
"몽타주",
|
||||
"Kland",
|
||||
"Hailshem",
|
||||
"ryoma",
|
||||
"John Martin",
|
||||
"Chris",
|
||||
"kudari",
|
||||
"Naomi Hale Danchi",
|
||||
"dc7431",
|
||||
"Vir",
|
||||
"Brian M",
|
||||
"Nerezza",
|
||||
"sanborondon",
|
||||
"moranqianlong",
|
||||
"Seth Christensen",
|
||||
"Draven T",
|
||||
"Taylor Funk",
|
||||
"aezin",
|
||||
"Thought2Form",
|
||||
@@ -341,35 +308,192 @@
|
||||
"Kevin Picco",
|
||||
"Erik Lopez",
|
||||
"Mateo Curić",
|
||||
"Haru Yotu",
|
||||
"Aquatic Coffee",
|
||||
"Eris3D",
|
||||
"m",
|
||||
"ethanfel",
|
||||
"Pierce McBride",
|
||||
"Joshua Gray",
|
||||
"Focuschannel",
|
||||
"Mikko Hemilä",
|
||||
"Matura Arbeit",
|
||||
"Jamie Ogletree",
|
||||
"TBitz33",
|
||||
"Emil Bernhoff",
|
||||
"a _",
|
||||
"SendingRavens",
|
||||
"James Coleman",
|
||||
"Martial",
|
||||
"Anthony Faxlandez",
|
||||
"battu",
|
||||
"Emil Andersson",
|
||||
"Chad Idk",
|
||||
"Michael Docherty",
|
||||
"Yuji Kaneko",
|
||||
"elitassj",
|
||||
"Jacob Winter",
|
||||
"Pat Hen",
|
||||
"semicolon drainpipe",
|
||||
"Jordan Shaw",
|
||||
"Sam",
|
||||
"Rops Alot",
|
||||
"Thesharingbrother",
|
||||
"Sam",
|
||||
"Ace Ventura",
|
||||
"ResidentDeviant",
|
||||
"Nihongasuki",
|
||||
"JC",
|
||||
"Prompt Pirate",
|
||||
"uwutismxd",
|
||||
"momokai",
|
||||
"zenobeus",
|
||||
"ken",
|
||||
"epicgamer0020690",
|
||||
"Joshua Porrata",
|
||||
"keemun",
|
||||
"SuBu",
|
||||
"RedPIXel",
|
||||
"Wind",
|
||||
"Jackthemind",
|
||||
"Nexus",
|
||||
"Ramneek“Guy”Ashok",
|
||||
"squid_actually",
|
||||
"Nat_20",
|
||||
"Edward Weeks",
|
||||
"kyoumei",
|
||||
"RadStorm04",
|
||||
"JohnDoe42054",
|
||||
"BillyHill",
|
||||
"emyth",
|
||||
"chriphost",
|
||||
"KitKatM",
|
||||
"ryoma",
|
||||
"socrasteeze",
|
||||
"OrganicArtifact",
|
||||
"Stryker",
|
||||
"MudkipMedkitz",
|
||||
"gzmzmvp",
|
||||
"raf8osz",
|
||||
"ElitaSSJ4",
|
||||
"Richard",
|
||||
"blikkies",
|
||||
"Andrew",
|
||||
"Chris",
|
||||
"Robert Wegemund",
|
||||
"Littlehuggy",
|
||||
"Gregory Kozhemiak",
|
||||
"mrjuan",
|
||||
"Brian Buie",
|
||||
"Shock Shockor",
|
||||
"Sadlip",
|
||||
"Goldwaters",
|
||||
"Eric Whitney",
|
||||
"Joey Callahan",
|
||||
"Zude",
|
||||
"Ivan Tadic",
|
||||
"Mike Simone",
|
||||
"John J Linehan",
|
||||
"Kyler",
|
||||
"Elliot E",
|
||||
"Morgandel",
|
||||
"Theerat Jiramate",
|
||||
"aRtFuL_DodGeR",
|
||||
"Noah",
|
||||
"Jacob McDaniel",
|
||||
"X",
|
||||
"Sloan Steddy",
|
||||
"Temikus",
|
||||
"Artokun",
|
||||
"Michael Taylor",
|
||||
"Derek Baker",
|
||||
"CrimsonDX",
|
||||
"Michael Anthony Scott",
|
||||
"DarkSunset",
|
||||
"Atilla Berke Pekduyar",
|
||||
"Nathan",
|
||||
"Billy Gladky",
|
||||
"NICHOLAS BAXLEY",
|
||||
"Decx _",
|
||||
"Probis",
|
||||
"Ed Wang",
|
||||
"ItsGeneralButtNaked",
|
||||
"Nimess",
|
||||
"SRDB",
|
||||
"g unit",
|
||||
"Ace Ventura",
|
||||
"Distortik",
|
||||
"Youguang",
|
||||
"四糸凜音",
|
||||
"Saya",
|
||||
"andrewzpong",
|
||||
"FrxzenSnxw",
|
||||
"BossGame",
|
||||
"lrdchs",
|
||||
"Tree Tagger",
|
||||
"Inversity",
|
||||
"Crocket",
|
||||
"AIVORY3D",
|
||||
"Kevinj",
|
||||
"Mitchell Robson",
|
||||
"Whitepinetrader",
|
||||
"ResidentDeviant",
|
||||
"deanbrian",
|
||||
"POPPIN",
|
||||
"Alex Wortman",
|
||||
"Cody",
|
||||
"Raku",
|
||||
"smart.edge5178",
|
||||
"InformedViewz",
|
||||
"CHKeeho80",
|
||||
"Bubbafett",
|
||||
"leaf",
|
||||
"Menard",
|
||||
"Skyfire83",
|
||||
"Adam Rinehart",
|
||||
"Pitpe11",
|
||||
"TheD1rtyD03",
|
||||
"moonpetal",
|
||||
"SomeDude",
|
||||
"g9p0o",
|
||||
"TheHolySheep",
|
||||
"Monte Won",
|
||||
"SpringBootisTrash",
|
||||
"carsten",
|
||||
"ikok",
|
||||
"Nathen+Choi",
|
||||
"T",
|
||||
"LarsesFPC",
|
||||
"cocona",
|
||||
"sfasdfasfdsa",
|
||||
"Buecyb99",
|
||||
"Welkor",
|
||||
"David Schenck",
|
||||
"John Martin",
|
||||
"Wolfe7D1",
|
||||
"Ink Temptation",
|
||||
"moranqianlong",
|
||||
"Kalli Core",
|
||||
"elleshar666",
|
||||
"ACTUALLY_the_Real_Willem_Dafoe",
|
||||
"Haru Yotu",
|
||||
"Kauffy",
|
||||
"EpicElric",
|
||||
"Kyron Mahan",
|
||||
"Edward Kennedy",
|
||||
"Justin Blaylock",
|
||||
"Matura Arbeit",
|
||||
"Nick Kage",
|
||||
"TBitz33",
|
||||
"Anonym dkjglfleeoeldldldlkf",
|
||||
"Vane Holzer",
|
||||
"psytrax",
|
||||
"Cyrus Fett",
|
||||
"Ezokewn",
|
||||
"SendingRavens",
|
||||
"hexxish",
|
||||
"notedfakes",
|
||||
"Michael Docherty",
|
||||
"Michael Scott",
|
||||
"Paul Hartsuyker",
|
||||
"elitassj",
|
||||
"Jacob Winter",
|
||||
"Ryan Presley Ng",
|
||||
"Wes Sims",
|
||||
"Donor4115",
|
||||
"Lyavph",
|
||||
"David",
|
||||
"Meilo",
|
||||
"Filippo Ferrari",
|
||||
"Pen Bouryoung",
|
||||
"shinonomeiro",
|
||||
"Snille",
|
||||
@@ -378,97 +502,86 @@
|
||||
"xybrightsummer",
|
||||
"jreedatchison",
|
||||
"PhilW",
|
||||
"momokai",
|
||||
"Janik",
|
||||
"kudari",
|
||||
"Naomi Hale Danchi",
|
||||
"dc7431",
|
||||
"ken",
|
||||
"Inversity",
|
||||
"Crocket",
|
||||
"AIVORY3D",
|
||||
"epicgamer0020690",
|
||||
"Joshua Porrata",
|
||||
"Cruel",
|
||||
"keemun",
|
||||
"SuBu",
|
||||
"RedPIXel",
|
||||
"MRBlack",
|
||||
"Kevinj",
|
||||
"Wind",
|
||||
"Nexus",
|
||||
"Mitchell Robson",
|
||||
"Ramneek“Guy”Ashok",
|
||||
"squid_actually",
|
||||
"Nat_20",
|
||||
"Kiyoe",
|
||||
"Edward Weeks",
|
||||
"kyoumei",
|
||||
"RadStorm04",
|
||||
"JohnDoe42054",
|
||||
"BillyHill",
|
||||
"humptynutz",
|
||||
"emyth",
|
||||
"michael.isaza",
|
||||
"Kalnei",
|
||||
"chriphost",
|
||||
"KitKatM",
|
||||
"socrasteeze",
|
||||
"ResidentDeviant",
|
||||
"Scott",
|
||||
"gzmzmvp",
|
||||
"Welkor",
|
||||
"Muratoraccio",
|
||||
"Ginnie",
|
||||
"emadsultan",
|
||||
"D",
|
||||
"nanana",
|
||||
"Fthehappy",
|
||||
"rsamerica",
|
||||
"Alan+Cano",
|
||||
"FeralOpticsAI",
|
||||
"Pavlaki",
|
||||
"generic404",
|
||||
"Doug+Rintoul",
|
||||
"Noor",
|
||||
"Yorunai",
|
||||
"quantenmecha",
|
||||
"abattoirblues",
|
||||
"Jason+Nash",
|
||||
"BillyBoy84",
|
||||
"zounik",
|
||||
"DarkRoast",
|
||||
"letzte",
|
||||
"Nasty+Hobbit",
|
||||
"Sora+Yori",
|
||||
"lrdchs2",
|
||||
"Duk3+Rand0m",
|
||||
"4IXplr0r3r",
|
||||
"hayden",
|
||||
"Richard",
|
||||
"ahoystan",
|
||||
"Leland Saunders",
|
||||
"Andrew",
|
||||
"Bob Barker",
|
||||
"Robert Wegemund",
|
||||
"Littlehuggy",
|
||||
"Gregory Kozhemiak",
|
||||
"mrjuan",
|
||||
"edk",
|
||||
"JBsuede",
|
||||
"Time Valentine",
|
||||
"Aeternyx",
|
||||
"Brian Buie",
|
||||
"YOU SINWOO",
|
||||
"Sadlip",
|
||||
"りん あめ",
|
||||
"ja s",
|
||||
"Eric Whitney",
|
||||
"Михал Михалыч",
|
||||
"Matt",
|
||||
"Doug Mason",
|
||||
"Joey Callahan",
|
||||
"Ivan Tadic",
|
||||
"y2Rxy7FdXzWo",
|
||||
"Jeremy Townsend",
|
||||
"Mike Simone",
|
||||
"Frogmilk",
|
||||
"Sean voets",
|
||||
"Owen Gwosdz",
|
||||
"Morgandel",
|
||||
"SPJ",
|
||||
"Thomas Wanner",
|
||||
"Kyron Mahan",
|
||||
"Theerat Jiramate",
|
||||
"Noah",
|
||||
"Jacob McDaniel",
|
||||
"Bryan Rutkowski",
|
||||
"Devil Lude",
|
||||
"David Murcko",
|
||||
"kevin stoddard",
|
||||
"Sloan Steddy",
|
||||
"Jack Dole",
|
||||
"Ezokewn",
|
||||
"Temikus",
|
||||
"Artokun",
|
||||
"Michael Taylor",
|
||||
"Derek Baker",
|
||||
"Michael Anthony Scott",
|
||||
"Atilla Berke Pekduyar",
|
||||
"max blo",
|
||||
"Xenon Xue",
|
||||
"CptNeo",
|
||||
"JackJohnnyJim",
|
||||
"Dmitry Ryzhov",
|
||||
"Maso",
|
||||
"Nathan",
|
||||
"Decx _",
|
||||
"Edward Ten Eyck",
|
||||
"Eric Ketchum",
|
||||
"Kevin Wallace",
|
||||
"Matheus Couto",
|
||||
"Paul Hartsuyker",
|
||||
"ChicRic",
|
||||
"Henrique Faiolli",
|
||||
"mercur",
|
||||
"Solixer",
|
||||
"J C",
|
||||
"Distortik",
|
||||
"jinksta187",
|
||||
"Andrew Wilkinson",
|
||||
"Manu Thetug",
|
||||
"Karlanx",
|
||||
"Yves Poezevara",
|
||||
"operationancut",
|
||||
"Teriak47",
|
||||
"Just me",
|
||||
"Raf Stahelin",
|
||||
@@ -505,123 +618,129 @@
|
||||
"RevyHiep",
|
||||
"Captain_Swag",
|
||||
"obkircher",
|
||||
"Tree Tagger",
|
||||
"gwyar",
|
||||
"D",
|
||||
"edgecase",
|
||||
"Neoxena",
|
||||
"mrmhalo",
|
||||
"dg",
|
||||
"Whitepinetrader",
|
||||
"Maarten Harms",
|
||||
"OrganicArtifact",
|
||||
"四糸凜音",
|
||||
"MudkipMedkitz",
|
||||
"Israel",
|
||||
"deanbrian",
|
||||
"POPPIN",
|
||||
"Muratoraccio",
|
||||
"SelfishMedic",
|
||||
"Ginnie",
|
||||
"Alex Wortman",
|
||||
"Cody",
|
||||
"adderleighn",
|
||||
"Raku",
|
||||
"smart.edge5178",
|
||||
"emadsultan",
|
||||
"InformedViewz",
|
||||
"CHKeeho80",
|
||||
"Bubbafett",
|
||||
"leaf",
|
||||
"Menard",
|
||||
"Skyfire83",
|
||||
"Adam Rinehart",
|
||||
"D",
|
||||
"Pitpe11",
|
||||
"TheD1rtyD03",
|
||||
"EnragedAntelope",
|
||||
"moonpetal",
|
||||
"SomeDude",
|
||||
"g9p0o",
|
||||
"nanana",
|
||||
"TheHolySheep",
|
||||
"Monte Won",
|
||||
"SpringBootisTrash",
|
||||
"carsten",
|
||||
"ikok",
|
||||
"Buecyb99",
|
||||
"4IXplr0r3r",
|
||||
"lighthawke",
|
||||
"Terraformer",
|
||||
"GDS+DEV",
|
||||
"4rt+r3d",
|
||||
"low9",
|
||||
"Winged",
|
||||
"you+halo9",
|
||||
"YassineKhaled",
|
||||
"YK12",
|
||||
"MatteKey",
|
||||
"Flob",
|
||||
"ShiroSenpai",
|
||||
"Somebody",
|
||||
"Inkognito",
|
||||
"Somebody",
|
||||
"Gramer+Gumbyte",
|
||||
"Crescent~San",
|
||||
"Tan+Huynh",
|
||||
"AiGirlTS",
|
||||
"D",
|
||||
"datasl4ve",
|
||||
"Somebody",
|
||||
"Dark_Pest",
|
||||
"Aza",
|
||||
"Jacky+Ho",
|
||||
"koopa990",
|
||||
"Karru",
|
||||
"ChaChanoKo",
|
||||
"null",
|
||||
"bo",
|
||||
"The+Forgetful+Dev",
|
||||
"redcarrot",
|
||||
"powerbot99",
|
||||
"Mateusz+Kosela",
|
||||
"Bula",
|
||||
"KUJYAKU",
|
||||
"Coeur+de+cochon",
|
||||
"David Schenck",
|
||||
"han b",
|
||||
"Nico",
|
||||
"Wolfe7D1",
|
||||
"Banana Joe",
|
||||
"_ G3n",
|
||||
"Donovan Jenkins",
|
||||
"Ink Temptation",
|
||||
"edk",
|
||||
"Tú Nguyễn Lý Hoàng",
|
||||
"Michael Eid",
|
||||
"beersandbacon",
|
||||
"Maximilian Pyko",
|
||||
"Invis",
|
||||
"Kalli Core",
|
||||
"Justin Houston",
|
||||
"Bob barker",
|
||||
"Ben D",
|
||||
"Garrett Wood",
|
||||
"Ronan Delevacq",
|
||||
"james",
|
||||
"elleshar666",
|
||||
"Christian Schäfer",
|
||||
"OrochiNights",
|
||||
"Michael Zhu",
|
||||
"ACTUALLY_the_Real_Willem_Dafoe",
|
||||
"gonzalo",
|
||||
"Seraphy",
|
||||
"雨の心 落",
|
||||
"AllTimeNoobie",
|
||||
"jumpd",
|
||||
"John C",
|
||||
"Kauffy",
|
||||
"Rim",
|
||||
"Dave Abraham",
|
||||
"Joaquin Hierrezuelo",
|
||||
"Dismem",
|
||||
"EpicElric",
|
||||
"John J Linehan",
|
||||
"Locrospiel",
|
||||
"Jairus Knudsen",
|
||||
"Jarrid Lee",
|
||||
"Xan Dionysus",
|
||||
"Nathan lee",
|
||||
"Kor",
|
||||
"Joseph Hanson",
|
||||
"Mewtora",
|
||||
"Elliot E",
|
||||
"Middo",
|
||||
"Forbidden Atelier",
|
||||
"Edward Kennedy",
|
||||
"Justin Blaylock",
|
||||
"John Rednoulf",
|
||||
"Spire",
|
||||
"Adictedtohumping",
|
||||
"Devil Lude",
|
||||
"Nick Kage",
|
||||
"Boba Smith",
|
||||
"Towelie",
|
||||
"Vane Holzer",
|
||||
"psytrax",
|
||||
"Cyrus Fett",
|
||||
"MR.Bear",
|
||||
"dsffsdfsdfsdfsdfsdf",
|
||||
"Jean-françois SEMA",
|
||||
"Kurt",
|
||||
"hexxish",
|
||||
"giani kidd",
|
||||
"CptNeo",
|
||||
"notedfakes",
|
||||
"ivistorm",
|
||||
"Sauv",
|
||||
"Steven",
|
||||
"TenaciousD",
|
||||
"Khánh Đặng",
|
||||
"Chase Kwon",
|
||||
"Ted Cart",
|
||||
"Inyoshu",
|
||||
"Goober719",
|
||||
"Eric Ketchum",
|
||||
"Chad Barnes",
|
||||
"NICHOLAS BAXLEY",
|
||||
"Michael Scott",
|
||||
"Person Y",
|
||||
"David Spearing",
|
||||
"James Ming",
|
||||
"vanditking",
|
||||
"kripitonga",
|
||||
"Rizzi",
|
||||
"nimin",
|
||||
"OMAR LUCIANO",
|
||||
"Ken+Suzuki",
|
||||
"hannibal",
|
||||
"Jo+Example",
|
||||
"BrentBertram",
|
||||
"Tigon",
|
||||
"eumelzocker",
|
||||
"dxjaymz",
|
||||
"L C",
|
||||
"Dude"
|
||||
"Dude",
|
||||
"CK"
|
||||
],
|
||||
"totalCount": 620
|
||||
"totalCount": 739
|
||||
}
|
||||

@@ -1,183 +0,0 @@

## Overview

The **LoRA Manager Civitai Extension** is a browser extension designed to work seamlessly with [LoRA Manager](https://github.com/willmiao/ComfyUI-Lora-Manager) to significantly enhance your browsing experience on [Civitai](https://civitai.com). With this extension, you can:

✅ Instantly see which models are already present in your local library
✅ Download new models with a single click
✅ Manage downloads efficiently with queue and parallel download support
✅ Keep your downloaded models automatically organized according to your custom settings

**Update:** It now also supports browsing on [CivArchive](https://civarchive.com/) (formerly CivitaiArchive).

---

## Why Supporter Access?

LoRA Manager is built with love for the Stable Diffusion and ComfyUI communities. Your support makes it possible for me to keep improving and maintaining the tool full-time.

Supporter-exclusive features help ensure the long-term sustainability of LoRA Manager, allowing continuous updates, new features, and better performance for everyone.

Every contribution directly fuels development and keeps the core LoRA Manager free and open-source. In addition to monthly supporters, one-time donors also receive a license key, with a duration that scales with the contribution amount. Thank you for helping keep this project alive and growing. ❤️

---

## Installation

### Supported Browsers & Installation Methods

| Browser | Installation Method |
|--------------------|-------------------------------------------------------------------------------------|
| **Google Chrome** | [Chrome Web Store link](https://chromewebstore.google.com/detail/capigligggeijgmocnaflanlbghnamgm?utm_source=item-share-cb) |
| **Microsoft Edge** | Install via Chrome Web Store (compatible) |
| **Brave Browser** | Install via Chrome Web Store (compatible) |
| **Opera** | Install via Chrome Web Store (compatible) |
| **Firefox** | [📦 Install Firefox Extension (reviewed and verified by Mozilla)](https://github.com/willmiao/lm-civitai-extension-firefox/releases/latest/download/extension.xpi) |

For non-Chrome browsers (e.g., Microsoft Edge), you can typically install extensions from the Chrome Web Store as follows: open the extension's Chrome Web Store page, click 'Get extension', click 'Allow' when prompted to enable installations from other stores, then click 'Add extension' to finish.

---

## Privacy & Security

I understand concerns around browser extensions and privacy, and I want to be fully transparent about how the **LM Civitai Extension** works:

- **Reviewed and Verified**
  This extension has been **manually reviewed and approved by the Chrome Web Store**. The Firefox version uses the **exact same code** (only the packaging format differs) and has passed **Mozilla's Add-on review**.

- **Minimal Network Access**
  The only external server this extension connects to is:
  **`https://willmiao.shop`** — used solely for **license validation**.

  It does **not collect, transmit, or store any personal or usage data**.
  No browsing history, no user IDs, no analytics, no hidden trackers.

- **Local-Only Model Detection**
  Model detection and LoRA Manager communication all happen **locally** within your browser, directly interacting with your local LoRA Manager backend.

I value your trust and am committed to keeping your local setup private and secure. If you have any questions, feel free to reach out!

---

## How to Use

After installing the extension, you'll automatically receive a **7-day trial** to explore all features.

When the extension is correctly installed and your license is valid:

- Open **Civitai**, and you'll see visual indicators added by the extension on model cards, showing:
  - ✅ Models already present in your local library
  - ⬇️ A download button for models not in your library

Clicking the download button adds the corresponding model version to the download queue. You can set up to **5 models to download simultaneously**.

### Visual Indicators Appear On:

- **Home Page** — Featured models
- **Models Page**
- **Creator Profiles** — If the creator has set their models to be visible
- **Recommended Resources** — On individual model pages

### Version Buttons on Model Pages

On a specific model page, visual indicators also appear on version buttons, showing which versions are already in your local library.

**Starting from v0.4.8**, model pages use a dedicated download button for better compatibility. When switching to a specific version by clicking a version button:

- The new **dedicated download button** directly triggers the download via **LoRA Manager**
- The **original download button** remains unchanged for standard browser downloads

### Hide Models Already in Library (Beta)

**New in v0.4.8**: The **Hide models already in library (Beta)** option makes it easier to focus on models you haven't added yet. It can be enabled from Settings, or toggled quickly using **Ctrl + Shift + H** (macOS: **Command + Shift + H**).

### Resources on Image Pages — now shows in-library indicators for image resources plus one-click recipe import

- **One-Click Import Civitai Image as Recipe** — Import any Civitai image as a recipe with a single click in the Resources Used panel.
- **Auto-Queue Missing Assets** — In Settings you can decide whether LoRAs or checkpoints referenced by that image should automatically be added to your download queue.
- **More Accurate Metadata** — Importing directly from the page is faster than copying inside LM and keeps on-site tags and other metadata perfectly aligned.

[](https://github.com/user-attachments/assets/41fd4240-c949-4f83-bde7-8f3124c09494)

---

## Model Download Location & LoRA Manager Settings

To use the **one-click download function**, you must first set:

- Your **Default LoRAs Root**
- Your **Default Checkpoints Root**

These are set within LoRA Manager's settings.

When everything is configured, downloaded model files will be placed in:

`<Default_Models_Root>/<Base_Model_of_the_Model>/<First_Tag_of_the_Model>`

### Update: Default Path Customization (2025-07-21)

A new setting to customize the default download path has been added in the nightly version. You can now personalize where models are saved when downloading via the LM Civitai Extension.

The previous YAML path-mapping file will be deprecated; settings will now be unified in `settings.json` to simplify configuration.

---

## Backend Port Configuration

If your **ComfyUI** or **LoRA Manager** backend is running on a port **other than the default 8188**, you must configure the backend port in the extension's settings.

After correctly setting and saving the port, you'll see in the extension's header area:

- A **Healthy** status with the tooltip: `Connected to LoRA Manager on port xxxx`

---

## Advanced Usage

### Connecting to a Remote LoRA Manager

If your LoRA Manager is running on another computer, you can still connect from your browser using port forwarding.

> **Why can't you set a remote IP directly?**
>
> For privacy and security, the extension only requests access to `http://127.0.0.1/*`. Supporting remote IPs would require much broader permissions, which may be rejected by browser stores and could raise user concerns.

**Solution: Port Forwarding with `socat`**

On your browser computer, run:

`socat TCP-LISTEN:8188,bind=127.0.0.1,fork TCP:REMOTE.IP.ADDRESS.HERE:8188`

- Replace `REMOTE.IP.ADDRESS.HERE` with the IP of the machine running LoRA Manager.
- Adjust the port if needed.

This lets the extension connect to `127.0.0.1:8188` as usual, with traffic forwarded to your remote server.

_Thanks to user **Temikus** for sharing this solution!_

---

## Roadmap

The extension will evolve alongside **LoRA Manager** improvements. Planned features include:

- [x] Support for **additional model types** (e.g., embeddings)
- [x] One-click **Recipe Import**
- [x] Display of in-library status for all resources in the **Resources Used** section of the image page
- [x] One-click **Auto-organize Models**
- [x] **Hide models already in library (Beta)** - Focus on models you haven't added yet

**Stay tuned — and thank you for your support!**

---
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long

230 locales/de.json
@@ -15,7 +15,8 @@
|
||||
"settings": "Einstellungen",
|
||||
"help": "Hilfe",
|
||||
"add": "Hinzufügen",
|
||||
"close": "Schließen"
|
||||
"close": "Schließen",
|
||||
"menu": "Menü"
|
||||
},
|
||||
"status": {
|
||||
"loading": "Wird geladen...",
|
||||
@@ -175,6 +176,9 @@
|
||||
"success": "{count} Rezepte erfolgreich repariert.",
|
||||
"cancelled": "Reparatur abgebrochen. {count} Rezepte wurden repariert.",
|
||||
"error": "Recipe-Reparatur fehlgeschlagen: {message}"
|
||||
},
|
||||
"manageExcludedModels": {
|
||||
"label": "Ausgeschlossene Modelle verwalten"
|
||||
}
|
||||
},
|
||||
"header": {
|
||||
@@ -222,12 +226,14 @@
|
||||
"presetOverwriteConfirm": "Voreinstellung \"{name}\" existiert bereits. Überschreiben?",
|
||||
"presetNamePlaceholder": "Voreinstellungsname...",
|
||||
"baseModel": "Basis-Modell",
|
||||
"baseModelSearchPlaceholder": "Basismodelle durchsuchen...",
|
||||
"modelTags": "Tags (Top 20)",
|
||||
"modelTypes": "Modelltypen",
|
||||
"license": "Lizenz",
|
||||
"noCreditRequired": "Kein Credit erforderlich",
|
||||
"allowSellingGeneratedContent": "Verkauf erlaubt",
|
||||
"noTags": "Keine Tags",
|
||||
"noBaseModelMatches": "Keine Basismodelle entsprechen der aktuellen Suche.",
|
||||
"clearAll": "Alle Filter löschen",
|
||||
"any": "Beliebig",
|
||||
"all": "Alle",
|
||||
@@ -250,6 +256,33 @@
|
||||
"civitaiApiKey": "Civitai API Key",
|
||||
"civitaiApiKeyPlaceholder": "Geben Sie Ihren Civitai API Key ein",
|
||||
"civitaiApiKeyHelp": "Wird für die Authentifizierung beim Herunterladen von Modellen von Civitai verwendet",
|
||||
"civitaiHost": {
|
||||
"label": "Civitai-Host",
|
||||
"help": "Wählen Sie aus, welche Civitai-Seite geöffnet wird, wenn Sie „View on Civitai“-Links verwenden.",
|
||||
"options": {
|
||||
"com": "civitai.com (nur SFW)",
|
||||
"red": "civitai.red (uneingeschränkt)"
|
||||
}
|
||||
},
|
||||
"downloadBackend": {
|
||||
"label": "Download-Backend",
|
||||
"help": "Wähle aus, wie Modelldateien heruntergeladen werden. Python verwendet den eingebauten Downloader. aria2 verwendet den experimentellen externen Downloader-Prozess.",
|
||||
"options": {
|
||||
"python": "Python (integriert)",
|
||||
"aria2": "aria2 (experimentell)"
|
||||
}
|
||||
},
|
||||
"aria2cPath": {
|
||||
"label": "aria2c-Pfad",
|
||||
"help": "Optionaler Pfad zur ausführbaren aria2c-Datei. Leer lassen, um aria2c aus dem System-PATH zu verwenden.",
|
||||
"placeholder": "Leer lassen, um aria2c aus dem PATH zu verwenden"
|
||||
},
|
||||
"aria2HelpLink": "Erfahren Sie, wie Sie das aria2-Download-Backend einrichten",
|
||||
"civitaiHostBanner": {
|
||||
"title": "Civitai-Host-Einstellung verfügbar",
|
||||
"content": "Civitai verwendet jetzt civitai.com für SFW-Inhalte und civitai.red für uneingeschränkte Inhalte. In den Einstellungen können Sie ändern, welche Seite standardmäßig geöffnet wird.",
|
||||
"openSettings": "Einstellungen öffnen"
|
||||
},
|
||||
"openSettingsFileLocation": {
|
||||
"label": "Einstellungsordner öffnen",
|
||||
"tooltip": "Den Ordner mit der settings.json öffnen",
|
||||
@@ -260,10 +293,13 @@
|
||||
},
|
||||
"sections": {
|
||||
"contentFiltering": "Inhaltsfilterung",
|
||||
"downloads": "Downloads",
|
||||
"videoSettings": "Video-Einstellungen",
|
||||
"layoutSettings": "Layout-Einstellungen",
|
||||
"misc": "Verschiedenes",
|
||||
"backup": "Backups",
|
||||
"folderSettings": "Standard-Roots",
|
||||
"recipeSettings": "Rezepte",
|
||||
"extraFolderPaths": "Zusätzliche Ordnerpfade",
|
||||
"downloadPathTemplates": "Download-Pfad-Vorlagen",
|
||||
"priorityTags": "Prioritäts-Tags",
|
||||
@@ -291,7 +327,15 @@
|
||||
"blurNsfwContent": "NSFW-Inhalte unscharf stellen",
|
||||
"blurNsfwContentHelp": "Nicht jugendfreie (NSFW) Vorschaubilder unscharf stellen",
|
||||
"showOnlySfw": "Nur SFW-Ergebnisse anzeigen",
|
||||
"showOnlySfwHelp": "Alle NSFW-Inhalte beim Durchsuchen und Suchen herausfiltern"
|
||||
"showOnlySfwHelp": "Alle NSFW-Inhalte beim Durchsuchen und Suchen herausfiltern",
|
||||
"matureBlurThreshold": "Schwelle für Unschärfe bei jugendgefährdenden Inhalten",
|
||||
"matureBlurThresholdHelp": "Legen Sie fest, ab welcher Altersfreigabe die Unschärfe beginnt, wenn NSFW-Unschärfe aktiviert ist.",
|
||||
"matureBlurThresholdOptions": {
|
||||
"pg13": "PG13 und höher",
|
||||
"r": "R und höher (Standard)",
|
||||
"x": "X und höher",
|
||||
"xxx": "Nur XXX"
|
||||
}
|
||||
},
|
||||
"videoSettings": {
|
||||
"autoplayOnHover": "Videos bei Hover automatisch abspielen",
|
||||
@@ -315,6 +359,54 @@
|
||||
"saveFailed": "Übersprungene Pfade konnten nicht gespeichert werden: {message}"
|
||||
}
|
||||
},
|
||||
"backup": {
|
||||
"autoEnabled": "Automatische Backups",
|
||||
"autoEnabledHelp": "Erstellt einmal täglich einen lokalen Schnappschuss und behält die neuesten Schnappschüsse gemäß der Aufbewahrungsrichtlinie.",
|
||||
"retention": "Aufbewahrungsanzahl",
|
||||
"retentionHelp": "Wie viele automatische Schnappschüsse behalten werden, bevor ältere entfernt werden.",
|
||||
"management": "Backup-Verwaltung",
|
||||
"managementHelp": "Exportiere deinen aktuellen Benutzerstatus oder stelle ihn aus einem Backup-Archiv wieder her.",
|
||||
"scopeHelp": "Sichert deine Einstellungen, den Downloadverlauf und den Status der Modellaktualisierung. Modelldateien und neu erzeugbare Caches sind nicht enthalten.",
|
||||
"locationSummary": "Aktueller Backup-Speicherort",
|
||||
"openFolderButton": "Backup-Ordner öffnen",
|
||||
"openFolderSuccess": "Backup-Ordner geöffnet",
|
||||
"openFolderFailed": "Backup-Ordner konnte nicht geöffnet werden",
|
||||
"locationCopied": "Backup-Pfad in die Zwischenablage kopiert: {{path}}",
|
||||
"locationClipboardFallback": "Backup-Pfad: {{path}}",
|
||||
"exportButton": "Backup exportieren",
|
||||
"exportSuccess": "Backup erfolgreich exportiert.",
|
||||
"exportFailed": "Backup konnte nicht exportiert werden: {message}",
|
||||
"importButton": "Backup importieren",
|
||||
"importConfirm": "Dieses Backup importieren und den lokalen Benutzerstatus überschreiben?",
|
||||
"importSuccess": "Backup erfolgreich importiert.",
|
||||
"importFailed": "Backup konnte nicht importiert werden: {message}",
|
||||
"latestSnapshot": "Neuester Schnappschuss",
|
||||
"latestAutoSnapshot": "Neuester automatischer Schnappschuss",
|
||||
"snapshotCount": "Gespeicherte Schnappschüsse",
|
||||
"noneAvailable": "Noch keine Schnappschüsse vorhanden"
|
||||
},
|
||||
"downloadSkipBaseModels": {
|
||||
"label": "Downloads für Basismodelle überspringen",
|
||||
"help": "Gilt für alle Download-Abläufe. Hier können nur unterstützte Basismodelle ausgewählt werden.",
|
||||
"searchPlaceholder": "Basismodelle filtern...",
|
||||
"empty": "Keine Basismodelle entsprechen der aktuellen Suche.",
|
||||
"summary": {
|
||||
"none": "Nichts ausgewählt",
|
||||
"count": "{count} ausgewählt"
|
||||
},
|
||||
"actions": {
|
||||
"edit": "Bearbeiten",
|
||||
"collapse": "Einklappen",
|
||||
"clear": "Löschen"
|
||||
},
|
||||
"validation": {
|
||||
"saveFailed": "Ausgeschlossene Basismodelle konnten nicht gespeichert werden: {message}"
|
||||
}
|
||||
},
|
||||
"skipPreviouslyDownloadedModelVersions": {
|
||||
"label": "Bereits heruntergeladene Modellversionen überspringen",
|
||||
"help": "Wenn aktiviert, überspringt LoRA Manager den Download einer Modellversion, wenn der Download-Verlaufsdienst diese spezifische Version als bereits heruntergeladen erfasst hat. Gilt für alle Download-Abläufe."
|
||||
},
|
||||
"layoutSettings": {
|
||||
"displayDensity": "Anzeige-Dichte",
|
||||
"displayDensityOptions": {
|
||||
@@ -337,6 +429,8 @@
|
||||
"hover": "Bei Hover anzeigen"
|
||||
},
|
||||
"cardInfoDisplayHelp": "Wählen Sie, wann Modellinformationen und Aktionsschaltflächen angezeigt werden sollen",
|
||||
"showVersionOnCard": "Version auf Karte anzeigen",
|
||||
"showVersionOnCardHelp": "Den Versionsnamen auf Modellkarten ein- oder ausblenden",
|
||||
"modelCardFooterAction": "Aktion der Modellkarten-Schaltfläche",
|
||||
"modelCardFooterActionOptions": {
|
||||
"exampleImages": "Beispielbilder öffnen",
|
||||
@@ -363,12 +457,16 @@
|
||||
"defaultUnetRootHelp": "Legen Sie den Standard-Diffusion-Modell-(UNET)-Stammordner für Downloads, Importe und Verschiebungen fest",
|
||||
"defaultEmbeddingRoot": "Embedding-Stammordner",
|
||||
"defaultEmbeddingRootHelp": "Legen Sie den Standard-Embedding-Stammordner für Downloads, Importe und Verschiebungen fest",
|
||||
"recipesPath": "Rezepte-Speicherpfad",
|
||||
"recipesPathHelp": "Optionales benutzerdefiniertes Verzeichnis für gespeicherte Rezepte. Leer lassen, um den recipes-Ordner im ersten LoRA-Stammverzeichnis zu verwenden.",
|
||||
"recipesPathPlaceholder": "/path/to/recipes",
|
||||
"recipesPathMigrating": "Rezepte-Speicher wird verschoben...",
|
||||
"noDefault": "Kein Standard"
|
||||
},
|
||||
"extraFolderPaths": {
|
||||
"title": "Zusätzliche Ordnerpfade",
|
||||
"help": "Fügen Sie zusätzliche Modellordner außerhalb der Standardpfade von ComfyUI hinzu. Diese Pfade werden separat gespeichert und zusammen mit den Standardordnern gescannt.",
|
||||
"description": "Konfigurieren Sie zusätzliche Ordner zum Scannen von Modellen. Diese Pfade sind spezifisch für LoRA Manager und werden mit den Standardpfaden von ComfyUI zusammengeführt.",
|
||||
"description": "Zusätzliche Modellstammverzeichnisse, die ausschließlich für LoRA Manager gelten. Laden Sie Modelle von Speicherorten außerhalb der Standardordner von ComfyUI – ideal für große Bibliotheken, die ComfyUI sonst verlangsamen würden.",
|
||||
"restartRequired": "Requires restart to take effect",
|
||||
"modelTypes": {
|
||||
"lora": "LoRA-Pfade",
|
||||
"checkpoint": "Checkpoint-Pfade",
|
||||
@@ -376,7 +474,7 @@
|
||||
"embedding": "Embedding-Pfade"
|
||||
},
|
||||
"pathPlaceholder": "/pfad/zu/extra/modellen",
|
||||
"saveSuccess": "Zusätzliche Ordnerpfade aktualisiert.",
|
||||
"saveSuccess": "Zusätzliche Ordnerpfade aktualisiert. Neustart erforderlich, um Änderungen anzuwenden.",
|
||||
"saveError": "Fehler beim Aktualisieren der zusätzlichen Ordnerpfade: {message}",
|
||||
"validation": {
|
||||
"duplicatePath": "Dieser Pfad ist bereits konfiguriert"
|
||||
@@ -444,6 +542,21 @@
|
||||
"downloadLocationHelp": "Geben Sie den Ordnerpfad ein, wo Beispielbilder von Civitai gespeichert werden",
|
||||
"autoDownload": "Beispielbilder automatisch herunterladen",
|
||||
"autoDownloadHelp": "Beispielbilder automatisch für Modelle herunterladen, die keine haben (erfordert gesetzten Download-Speicherort)",
|
||||
"openMode": "Aktion für Beispielbilder öffnen",
|
||||
"openModeHelp": "Wählen Sie, ob die Aktion auf dem Server geöffnet, ein zugeordneter lokaler Pfad kopiert oder eine benutzerdefinierte URI gestartet werden soll.",
|
||||
"openModeOptions": {
|
||||
"system": "Auf Server öffnen",
|
||||
"clipboard": "Lokalen Pfad kopieren",
|
||||
"uriTemplate": "Benutzerdefinierte URI öffnen"
|
||||
},
|
||||
"localRoot": "Lokales Stammverzeichnis für Beispielbilder",
|
||||
"localRootHelp": "Optionales lokales oder eingebundenes Stammverzeichnis, das das Beispielbild-Verzeichnis des Servers widerspiegelt. Wenn leer, wird der Serverpfad wiederverwendet.",
|
||||
"localRootPlaceholder": "Beispiel: /Volumes/ComfyUI/example_images",
|
||||
"uriTemplate": "URI-Vorlage öffnen",
|
||||
"uriTemplateHelp": "Verwenden Sie einen benutzerdefinierten Deeplink wie eine Datei-URI oder einen Shortcuts-Link.",
|
||||
"uriTemplatePlaceholder": "Beispiel: shortcuts://run-shortcut?name=Open%20Finder&input=text&text={{encoded_local_path}}",
|
||||
"uriTemplatePlaceholders": "Verfügbare Platzhalter: {{local_path}}, {{encoded_local_path}}, {{relative_path}}, {{encoded_relative_path}}, {{file_uri}}, {{encoded_file_uri}}",
|
||||
"openModeWikiLink": "Mehr über Remote-Open-Modi erfahren",
|
||||
"optimizeImages": "Heruntergeladene Bilder optimieren",
|
||||
"optimizeImagesHelp": "Beispielbilder optimieren, um Dateigröße zu reduzieren und Ladegeschwindigkeit zu verbessern (Metadaten bleiben erhalten)",
|
||||
"download": "Herunterladen",
|
||||
@@ -574,7 +687,8 @@
|
||||
"autoOrganize": "Automatisch organisieren",
|
||||
"skipMetadataRefresh": "Metadaten-Aktualisierung für ausgewählte Modelle überspringen",
|
||||
"resumeMetadataRefresh": "Metadaten-Aktualisierung für ausgewählte Modelle fortsetzen",
|
||||
"deleteAll": "Alle Modelle löschen",
|
||||
"deleteAll": "Ausgewählte löschen",
|
||||
"downloadMissingLoras": "Fehlende LoRAs herunterladen",
|
||||
"clear": "Auswahl löschen",
|
||||
"skipMetadataRefreshCount": "Überspringen({count} Modelle)",
|
||||
"resumeMetadataRefreshCount": "Fortsetzen({count} Modelle)",
|
||||
@@ -604,6 +718,7 @@
|
||||
"moveToFolder": "In Ordner verschieben",
|
||||
"repairMetadata": "Metadaten reparieren",
|
||||
"excludeModel": "Modell ausschließen",
|
||||
"restoreModel": "Modell wiederherstellen",
|
||||
"deleteModel": "Modell löschen",
|
||||
"shareRecipe": "Rezept teilen",
|
||||
"viewAllLoras": "Alle LoRAs anzeigen",
|
||||
@@ -799,7 +914,8 @@
|
||||
"diffusion_model": "Diffusion Model"
|
||||
},
|
||||
"contextMenu": {
|
||||
"moveToOtherTypeFolder": "In {otherType}-Ordner verschieben"
|
||||
"moveToOtherTypeFolder": "In {otherType}-Ordner verschieben",
|
||||
"sendToWorkflow": "An Workflow senden"
|
||||
}
|
||||
},
|
||||
"embeddings": {
|
||||
@@ -812,8 +928,8 @@
|
||||
"unpinSidebar": "Sidebar lösen",
|
||||
"switchToListView": "Zur Listenansicht wechseln",
|
||||
"switchToTreeView": "Zur Baumansicht wechseln",
|
||||
"recursiveOn": "Unterordner durchsuchen",
|
||||
"recursiveOff": "Nur aktuellen Ordner durchsuchen",
|
||||
"recursiveOn": "Unterordner einbeziehen",
|
||||
"recursiveOff": "Nur aktueller Ordner",
|
||||
"recursiveUnavailable": "Rekursive Suche ist nur in der Baumansicht verfügbar",
|
||||
"collapseAllDisabled": "Im Listenmodus nicht verfügbar",
|
||||
"dragDrop": {
|
||||
@@ -893,6 +1009,8 @@
|
||||
"earlyAccess": "Early Access",
|
||||
"earlyAccessTooltip": "Early Access erforderlich",
|
||||
"inLibrary": "In Bibliothek",
|
||||
"downloaded": "Heruntergeladen",
|
||||
"downloadedTooltip": "Zuvor heruntergeladen, aber derzeit nicht in Ihrer Bibliothek.",
|
||||
"alreadyInLibrary": "Bereits in Bibliothek",
|
||||
"autoOrganizedPath": "[Automatisch organisiert durch Pfadvorlage]",
|
||||
"errors": {
|
||||
@@ -983,6 +1101,14 @@
|
||||
"save": "Basis-Modell aktualisieren",
|
||||
"cancel": "Abbrechen"
|
||||
},
|
||||
"bulkDownloadMissingLoras": {
|
||||
"title": "Fehlende LoRAs herunterladen",
|
||||
"message": "{uniqueCount} einzigartige fehlende LoRAs gefunden (von insgesamt {totalCount} in ausgewählten Rezepten).",
|
||||
"previewTitle": "Zu herunterladende LoRAs:",
|
||||
"moreItems": "...und {count} weitere",
|
||||
"note": "Dateien werden mit Standard-Pfad-Vorlagen heruntergeladen. Dies kann je nach Anzahl der LoRAs eine Weile dauern.",
|
||||
"downloadButton": "{count} LoRA(s) herunterladen"
|
||||
},
|
||||
"exampleAccess": {
|
||||
"title": "Lokale Beispielbilder",
|
||||
"message": "Keine lokalen Beispielbilder für dieses Modell gefunden. Ansichtsoptionen:",
|
||||
@@ -1034,7 +1160,9 @@
|
||||
"viewOnCivitai": "Auf Civitai anzeigen",
|
||||
"viewOnCivitaiText": "Auf Civitai anzeigen",
|
||||
"viewCreatorProfile": "Ersteller-Profil anzeigen",
|
||||
"openFileLocation": "Dateispeicherort öffnen"
|
||||
"openFileLocation": "Dateispeicherort öffnen",
|
||||
"sendToWorkflow": "An ComfyUI senden",
|
||||
"sendToWorkflowText": "An ComfyUI senden"
|
||||
},
|
||||
"openFileLocation": {
|
||||
"success": "Dateispeicherort erfolgreich geöffnet",
|
||||
@@ -1042,6 +1170,9 @@
|
||||
"copied": "Pfad in die Zwischenablage kopiert: {{path}}",
|
||||
"clipboardFallback": "Pfad: {{path}}"
|
||||
},
|
||||
"sendToWorkflow": {
|
||||
"noFilePath": "Kann nicht an ComfyUI senden: Kein Dateipfad verfügbar"
|
||||
},
|
||||
"metadata": {
|
||||
"version": "Version",
|
||||
"fileName": "Dateiname",
|
||||
@@ -1078,6 +1209,8 @@
|
||||
"cancel": "Bearbeitung abbrechen",
|
||||
"save": "Änderungen speichern",
|
||||
"addPlaceholder": "Tippen zum Hinzufügen oder klicken Sie auf Vorschläge unten",
|
||||
"editWord": "Trigger Word bearbeiten",
|
||||
"editPlaceholder": "Trigger Word bearbeiten",
|
||||
"copyWord": "Trigger Word kopieren",
|
||||
"deleteWord": "Trigger Word löschen",
|
||||
"suggestions": {
|
||||
@@ -1149,17 +1282,33 @@
|
||||
"days": "in {count}d"
|
||||
},
|
||||
"badges": {
|
||||
"current": "Aktuelle Version",
|
||||
"current": "Geöffnete Version",
|
||||
"currentTooltip": "Das ist die Version, mit der dieses Modal geöffnet wurde",
|
||||
"inLibrary": "In der Bibliothek",
|
||||
"inLibraryTooltip": "Diese Version befindet sich in Ihrer lokalen Bibliothek",
|
||||
"downloaded": "Heruntergeladen",
|
||||
"downloadedTooltip": "Diese Version wurde bereits heruntergeladen, befindet sich aber derzeit nicht in Ihrer Bibliothek",
|
||||
"newer": "Neuere Version",
|
||||
"newerTooltip": "Diese Version ist neuer als Ihre neueste lokale Version",
|
||||
"earlyAccess": "Früher Zugriff",
|
||||
"ignored": "Ignoriert"
|
||||
"earlyAccessTooltip": "Für diese Version ist derzeit Civitai Early Access erforderlich",
|
||||
"ignored": "Ignoriert",
|
||||
"ignoredTooltip": "Für diese Version sind Update-Benachrichtigungen deaktiviert",
|
||||
"onSiteOnly": "Nur On-Site",
|
||||
"onSiteOnlyTooltip": "Diese Version ist nur für die On-Site-Generierung auf Civitai verfügbar"
|
||||
},
|
||||
"actions": {
|
||||
"download": "Herunterladen",
|
||||
"downloadTooltip": "Diese Version herunterladen",
|
||||
"downloadEarlyAccessTooltip": "Diese Early-Access-Version von Civitai herunterladen",
|
||||
"downloadNotAllowedTooltip": "Diese Version ist nur für die On-Site-Generierung auf Civitai verfügbar",
|
||||
"delete": "Löschen",
|
||||
"deleteTooltip": "Diese lokale Version löschen",
|
||||
"ignore": "Ignorieren",
|
||||
"unignore": "Ignorierung aufheben",
|
||||
"ignoreTooltip": "Update-Benachrichtigungen für diese Version ignorieren",
|
||||
"unignoreTooltip": "Update-Benachrichtigungen für diese Version fortsetzen",
|
||||
"viewVersionOnCivitai": "Version auf Civitai anzeigen",
|
||||
"earlyAccessTooltip": "Erfordert Early-Access-Kauf",
|
||||
"resumeModelUpdates": "Aktualisierungen für dieses Modell fortsetzen",
|
||||
"ignoreModelUpdates": "Aktualisierungen für dieses Modell ignorieren",
|
||||
@@ -1299,7 +1448,9 @@
|
||||
"recipeReplaced": "Rezept im Workflow ersetzt",
|
||||
"recipeFailedToSend": "Fehler beim Senden des Rezepts an den Workflow",
|
||||
"noMatchingNodes": "Keine kompatiblen Knoten im aktuellen Workflow verfügbar",
|
||||
"noTargetNodeSelected": "Kein Zielknoten ausgewählt"
|
||||
"noTargetNodeSelected": "Kein Zielknoten ausgewählt",
|
||||
"modelUpdated": "Modell im Workflow aktualisiert",
|
||||
"modelFailed": "Fehler beim Aktualisieren des Modellknotens"
|
||||
},
|
||||
"nodeSelector": {
|
||||
"recipe": "Rezept",
|
||||
@@ -1313,6 +1464,10 @@
|
||||
"opened": "Beispielbilder-Ordner geöffnet",
|
||||
"openingFolder": "Beispielbilder-Ordner wird geöffnet",
|
||||
"failedToOpen": "Fehler beim Öffnen des Beispielbilder-Ordners",
|
||||
"copiedPath": "Pfad in Zwischenablage kopiert: {{path}}",
|
||||
"clipboardFallback": "Pfad: {{path}}",
|
||||
"copiedUri": "Link in Zwischenablage kopiert: {{uri}}",
|
||||
"uriClipboardFallback": "Link: {{uri}}",
|
||||
"setupRequired": "Beispielbilder-Speicher",
|
||||
"setupDescription": "Um benutzerdefinierte Beispielbilder hinzuzufügen, müssen Sie zuerst einen Download-Speicherort festlegen.",
|
||||
"setupUsage": "Dieser Pfad wird sowohl für heruntergeladene als auch für benutzerdefinierte Beispielbilder verwendet.",
|
||||
@@ -1450,6 +1605,7 @@
|
||||
"pleaseSelectVersion": "Bitte wählen Sie eine Version aus",
|
||||
"versionExists": "Diese Version existiert bereits in Ihrer Bibliothek",
|
||||
"downloadCompleted": "Download erfolgreich abgeschlossen",
|
||||
"downloadSkippedByBaseModel": "Download übersprungen, weil das Basismodell {baseModel} ausgeschlossen ist",
|
||||
"autoOrganizeSuccess": "Automatische Organisation für {count} {type} erfolgreich abgeschlossen",
|
||||
"autoOrganizePartialSuccess": "Automatische Organisation abgeschlossen: {success} verschoben, {failures} fehlgeschlagen von insgesamt {total} Modellen",
|
||||
"autoOrganizeFailed": "Automatische Organisation fehlgeschlagen: {error}",
|
||||
@@ -1469,7 +1625,11 @@
|
||||
"nameUpdated": "Rezeptname erfolgreich aktualisiert",
|
||||
"tagsUpdated": "Rezept-Tags erfolgreich aktualisiert",
|
||||
"sourceUrlUpdated": "Quell-URL erfolgreich aktualisiert",
|
||||
"promptUpdated": "Prompt erfolgreich aktualisiert",
|
||||
"negativePromptUpdated": "Negativer Prompt erfolgreich aktualisiert",
|
||||
"promptEditorHint": "Drücken Sie Enter zum Speichern, Shift+Enter für neue Zeile",
|
||||
"noRecipeId": "Keine Rezept-ID verfügbar",
|
||||
"sendToWorkflowFailed": "Fehler beim Senden des Rezepts an den Workflow: {message}",
|
||||
"copyFailed": "Fehler beim Kopieren der Rezept-Syntax: {message}",
|
||||
"noMissingLoras": "Keine fehlenden LoRAs zum Herunterladen",
|
||||
"missingLorasInfoFailed": "Fehler beim Abrufen der Informationen für fehlende LoRAs",
|
||||
@@ -1507,7 +1667,10 @@
|
||||
"batchImportNoUrls": "Please enter at least one URL or file path",
|
||||
"batchImportNoDirectory": "Please enter a directory path",
|
||||
"batchImportBrowseFailed": "Failed to browse directory: {message}",
|
||||
"batchImportDirectorySelected": "Directory selected: {path}"
|
||||
"batchImportDirectorySelected": "Directory selected: {path}",
|
||||
"noRecipesSelected": "Keine Rezepte ausgewählt",
|
||||
"noMissingLorasInSelection": "Keine fehlenden LoRAs in ausgewählten Rezepten gefunden",
|
||||
"noLoraRootConfigured": "Kein LoRA-Stammverzeichnis konfiguriert. Bitte legen Sie ein Standard-LoRA-Stammverzeichnis in den Einstellungen fest."
|
||||
},
|
||||
"models": {
|
||||
"noModelsSelected": "Keine Modelle ausgewählt",
|
||||
@@ -1574,6 +1737,8 @@
|
||||
"mappingSaveFailed": "Fehler beim Speichern der Basis-Modell-Zuordnungen: {message}",
|
||||
"downloadTemplatesUpdated": "Download-Pfad-Vorlagen aktualisiert",
|
||||
"downloadTemplatesFailed": "Fehler beim Speichern der Download-Pfad-Vorlagen: {message}",
|
||||
"recipesPathUpdated": "Rezepte-Speicherpfad aktualisiert",
|
||||
"recipesPathSaveFailed": "Fehler beim Aktualisieren des Rezepte-Speicherpfads: {message}",
|
||||
"settingsUpdated": "Einstellungen aktualisiert: {setting}",
|
||||
"compactModeToggled": "Kompakt-Modus {state}",
|
||||
"settingSaveFailed": "Fehler beim Speichern der Einstellung: {message}",
|
||||
@@ -1624,8 +1789,8 @@
|
||||
},
|
||||
"triggerWords": {
|
||||
"loadFailed": "Konnte trainierte Wörter nicht laden",
|
||||
"tooLong": "Trigger Word sollte 100 Wörter nicht überschreiten",
|
||||
"tooMany": "Maximal 30 Trigger Words erlaubt",
|
||||
"tooLong": "Trigger Word sollte 500 Wörter nicht überschreiten",
|
||||
"tooMany": "Maximal 100 Trigger Words erlaubt",
|
||||
"alreadyExists": "Dieses Trigger Word existiert bereits",
|
||||
"updateSuccess": "Trigger Words erfolgreich aktualisiert",
|
||||
"updateFailed": "Fehler beim Aktualisieren der Trigger Words",
|
||||
@@ -1686,6 +1851,8 @@
|
||||
"deleteFailed": "Fehler beim Löschen von {type}: {message}",
|
||||
"excludeSuccess": "{type} erfolgreich ausgeschlossen",
|
||||
"excludeFailed": "Fehler beim Ausschließen von {type}: {message}",
|
||||
"restoreSuccess": "{type} erfolgreich wiederhergestellt",
|
||||
"restoreFailed": "{type} konnte nicht wiederhergestellt werden: {message}",
|
||||
"fileNameUpdated": "Dateiname erfolgreich aktualisiert",
|
||||
"fileRenameFailed": "Fehler beim Umbenennen der Datei: {error}",
|
||||
"previewUpdated": "Vorschau erfolgreich aktualisiert",
|
||||
@@ -1717,6 +1884,37 @@
|
||||
"moveFailed": "Failed to move item: {message}"
|
||||
}
|
||||
},
|
||||
"doctor": {
|
||||
"kicker": "Systemdiagnose",
|
||||
"title": "Doktor",
|
||||
"buttonTitle": "Diagnose und häufige Fehlerbehebungen ausführen",
|
||||
"loading": "Umgebung wird geprüft...",
|
||||
"footer": "Exportiere ein Diagnosepaket, falls das Problem nach der Reparatur weiterhin besteht.",
|
||||
"summary": {
|
||||
"idle": "Führe eine Überprüfung von Einstellungen, Cache-Integrität und UI-Konsistenz durch.",
|
||||
"ok": "Keine aktiven Probleme wurden in der aktuellen Umgebung gefunden.",
|
||||
"warning": "{count} Problem(e) wurden gefunden. Die meisten lassen sich direkt über dieses Panel beheben.",
|
||||
"error": "Bevor die App vollständig fehlerfrei ist, müssen {count} Problem(e) behoben werden."
|
||||
},
|
||||
"status": {
|
||||
"ok": "Gesund",
|
||||
"warning": "Handlungsbedarf",
|
||||
"error": "Aktion erforderlich"
|
||||
},
|
||||
"actions": {
|
||||
"runAgain": "Erneut ausführen",
|
||||
"exportBundle": "Paket exportieren"
|
||||
},
|
||||
"toast": {
|
||||
"loadFailed": "Diagnose konnte nicht geladen werden: {message}",
|
||||
"repairSuccess": "Cache-Neuaufbau abgeschlossen.",
|
||||
"repairFailed": "Cache-Neuaufbau fehlgeschlagen: {message}",
|
||||
"exportSuccess": "Diagnosepaket exportiert.",
|
||||
"exportFailed": "Export des Diagnosepakets fehlgeschlagen: {message}",
|
||||
"conflictsResolved": "{count} Dateinamenskonflikt(e) gelöst.",
|
||||
"conflictsResolveFailed": "Auflösung der Dateinamenskonflikte fehlgeschlagen: {message}"
|
||||
}
|
||||
},
|
||||
"banners": {
|
||||
"versionMismatch": {
|
||||
"title": "Anwendungs-Update erkannt",
|
||||
|
||||
240
locales/en.json
@@ -15,7 +15,8 @@
|
||||
"settings": "Settings",
|
||||
"help": "Help",
|
||||
"add": "Add",
|
||||
"close": "Close"
|
||||
"close": "Close",
|
||||
"menu": "Menu"
|
||||
},
|
||||
"status": {
|
||||
"loading": "Loading...",
|
||||
@@ -175,6 +176,9 @@
|
||||
"success": "Successfully repaired {count} recipes.",
|
||||
"cancelled": "Repair cancelled. {count} recipes were repaired.",
|
||||
"error": "Recipe repair failed: {message}"
|
||||
},
|
||||
"manageExcludedModels": {
|
||||
"label": "Manage Excluded Models"
|
||||
}
|
||||
},
|
||||
"header": {
|
||||
@@ -222,12 +226,14 @@
|
||||
"presetOverwriteConfirm": "Preset \"{name}\" already exists. Overwrite?",
|
||||
"presetNamePlaceholder": "Preset name...",
|
||||
"baseModel": "Base Model",
|
||||
"baseModelSearchPlaceholder": "Search base models...",
|
||||
"modelTags": "Tags (Top 20)",
|
||||
"modelTypes": "Model Types",
|
||||
"license": "License",
|
||||
"noCreditRequired": "No Credit Required",
|
||||
"allowSellingGeneratedContent": "Allow Selling",
|
||||
"noTags": "No tags",
|
||||
"noBaseModelMatches": "No base models match the current search.",
|
||||
"clearAll": "Clear All Filters",
|
||||
"any": "Any",
|
||||
"all": "All",
|
||||
@@ -250,6 +256,33 @@
|
||||
"civitaiApiKey": "Civitai API Key",
|
||||
"civitaiApiKeyPlaceholder": "Enter your Civitai API key",
|
||||
"civitaiApiKeyHelp": "Used for authentication when downloading models from Civitai",
|
||||
"civitaiHost": {
|
||||
"label": "Civitai host",
|
||||
"help": "Choose which Civitai site opens when using View on Civitai links.",
|
||||
"options": {
|
||||
"com": "civitai.com (SFW)",
|
||||
"red": "civitai.red (unrestricted)"
|
||||
}
|
||||
},
|
||||
"downloadBackend": {
|
||||
"label": "Download backend",
|
||||
"help": "Choose how model files are downloaded. Python uses the built-in downloader. aria2 uses the experimental external downloader process.",
|
||||
"options": {
|
||||
"python": "Python (built-in)",
|
||||
"aria2": "aria2 (experimental)"
|
||||
}
|
||||
},
|
||||
"aria2cPath": {
|
||||
"label": "aria2c path",
|
||||
"help": "Optional path to the aria2c executable. Leave empty to use aria2c from your system PATH.",
|
||||
"placeholder": "Leave empty to use aria2c from PATH"
|
||||
},
|
||||
"aria2HelpLink": "Learn how to set up the aria2 download backend",
|
||||
"civitaiHostBanner": {
|
||||
"title": "Civitai host preference available",
|
||||
"content": "Civitai now uses civitai.com for SFW content and civitai.red for unrestricted content. You can change which site opens by default in Settings.",
|
||||
"openSettings": "Open Settings"
|
||||
},
|
||||
"openSettingsFileLocation": {
|
||||
"label": "Open settings folder",
|
||||
"tooltip": "Open folder containing settings.json",
|
||||
@@ -260,10 +293,13 @@
|
||||
},
|
||||
"sections": {
|
||||
"contentFiltering": "Content Filtering",
|
||||
"downloads": "Downloads",
|
||||
"videoSettings": "Video Settings",
|
||||
"layoutSettings": "Layout Settings",
|
||||
"misc": "Miscellaneous",
|
||||
"backup": "Backups",
|
||||
"folderSettings": "Default Roots",
|
||||
"recipeSettings": "Recipes",
|
||||
"extraFolderPaths": "Extra Folder Paths",
|
||||
"downloadPathTemplates": "Download Path Templates",
|
||||
"priorityTags": "Priority Tags",
|
||||
@@ -291,7 +327,15 @@
|
||||
"blurNsfwContent": "Blur NSFW Content",
|
||||
"blurNsfwContentHelp": "Blur mature (NSFW) content preview images",
|
||||
"showOnlySfw": "Show Only SFW Results",
|
||||
"showOnlySfwHelp": "Filter out all NSFW content when browsing and searching"
|
||||
"showOnlySfwHelp": "Filter out all NSFW content when browsing and searching",
|
||||
"matureBlurThreshold": "Mature Blur Threshold",
|
||||
"matureBlurThresholdHelp": "Set which rating level starts blur filtering when NSFW blur is enabled.",
|
||||
"matureBlurThresholdOptions": {
|
||||
"pg13": "PG13 and above",
|
||||
"r": "R and above (default)",
|
||||
"x": "X and above",
|
||||
"xxx": "XXX only"
|
||||
}
|
||||
},
|
||||
"videoSettings": {
|
||||
"autoplayOnHover": "Autoplay Videos on Hover",
|
||||
@@ -315,6 +359,54 @@
|
||||
"saveFailed": "Unable to save skip paths: {message}"
|
||||
}
|
||||
},
|
||||
"backup": {
|
||||
"autoEnabled": "Automatic backups",
|
||||
"autoEnabledHelp": "Create a local snapshot once per day and keep the latest snapshots according to the retention policy.",
|
||||
"retention": "Retention count",
|
||||
"retentionHelp": "How many automatic snapshots to keep before older ones are pruned.",
|
||||
"management": "Backup management",
|
||||
"managementHelp": "Export your current user state or restore it from a backup archive.",
|
||||
"scopeHelp": "Backs up your settings, download history, and model update state. It does not include model files or rebuildable caches.",
|
||||
"locationSummary": "Current backup location",
|
||||
"openFolderButton": "Open backup folder",
|
||||
"openFolderSuccess": "Opened backup folder",
|
||||
"openFolderFailed": "Failed to open backup folder",
|
||||
"locationCopied": "Backup path copied to clipboard: {{path}}",
|
||||
"locationClipboardFallback": "Backup path: {{path}}",
|
||||
"exportButton": "Export backup",
|
||||
"exportSuccess": "Backup exported successfully.",
|
||||
"exportFailed": "Failed to export backup: {message}",
|
||||
"importButton": "Import backup",
|
||||
"importConfirm": "Import this backup and overwrite local user state?",
|
||||
"importSuccess": "Backup imported successfully.",
|
||||
"importFailed": "Failed to import backup: {message}",
|
||||
"latestSnapshot": "Latest snapshot",
|
||||
"latestAutoSnapshot": "Latest automatic snapshot",
|
||||
"snapshotCount": "Saved snapshots",
|
||||
"noneAvailable": "No snapshots yet"
|
||||
},
|
||||
"downloadSkipBaseModels": {
|
||||
"label": "Skip downloads for base models",
|
||||
"help": "When enabled, versions using the selected base models will be skipped.",
|
||||
"searchPlaceholder": "Filter base models...",
|
||||
"empty": "No base models match the current search.",
|
||||
"summary": {
|
||||
"none": "None selected",
|
||||
"count": "{count} selected"
|
||||
},
|
||||
"actions": {
|
||||
"edit": "Edit",
|
||||
"collapse": "Collapse",
|
||||
"clear": "Clear"
|
||||
},
|
||||
"validation": {
|
||||
"saveFailed": "Unable to save excluded base models: {message}"
|
||||
}
|
||||
},
|
||||
"skipPreviouslyDownloadedModelVersions": {
|
||||
"label": "Skip previously downloaded model versions",
|
||||
"help": "When enabled, versions downloaded before will be skipped."
|
||||
},
|
||||
"layoutSettings": {
|
||||
"displayDensity": "Display Density",
|
||||
"displayDensityOptions": {
|
||||
@@ -337,6 +429,8 @@
|
||||
"hover": "Reveal on Hover"
|
||||
},
|
||||
"cardInfoDisplayHelp": "Choose when to display model information and action buttons",
|
||||
"showVersionOnCard": "Show Version on Card",
|
||||
"showVersionOnCardHelp": "Show or hide the version name on model cards",
|
||||
"modelCardFooterAction": "Model Card Button Action",
|
||||
"modelCardFooterActionOptions": {
|
||||
"exampleImages": "Open Example Images",
|
||||
@@ -363,12 +457,16 @@
|
||||
"defaultUnetRootHelp": "Set default diffusion model (UNET) root directory for downloads, imports and moves",
|
||||
"defaultEmbeddingRoot": "Embedding Root",
|
||||
"defaultEmbeddingRootHelp": "Set default embedding root directory for downloads, imports and moves",
|
||||
"recipesPath": "Recipes Storage Path",
|
||||
"recipesPathHelp": "Optional custom directory for stored recipes. Leave empty to use the first LoRA root's recipes folder.",
|
||||
"recipesPathPlaceholder": "/path/to/recipes",
|
||||
"recipesPathMigrating": "Migrating recipes storage...",
|
||||
"noDefault": "No Default"
|
||||
},
|
||||
"extraFolderPaths": {
|
||||
"title": "Extra Folder Paths",
|
||||
"help": "Add additional model folders outside of ComfyUI's standard paths. These paths are stored separately and scanned alongside the default folders.",
|
||||
"description": "Configure additional folders to scan for models. These paths are specific to LoRA Manager and will be merged with ComfyUI's default paths.",
|
||||
"description": "Additional model root paths exclusive to LoRA Manager. Load models from locations outside ComfyUI's standard folders—ideal for large libraries that would otherwise slow down ComfyUI.",
|
||||
"restartRequired": "Requires restart to take effect",
|
||||
"modelTypes": {
|
||||
"lora": "LoRA Paths",
|
||||
"checkpoint": "Checkpoint Paths",
|
||||
@@ -376,7 +474,7 @@
|
||||
"embedding": "Embedding Paths"
|
||||
},
|
||||
"pathPlaceholder": "/path/to/extra/models",
|
||||
"saveSuccess": "Extra folder paths updated.",
|
||||
"saveSuccess": "Extra folder paths updated. Restart required to apply changes.",
|
||||
"saveError": "Failed to update extra folder paths: {message}",
|
||||
"validation": {
|
||||
"duplicatePath": "This path is already configured"
|
||||
@@ -444,6 +542,21 @@
|
||||
"downloadLocationHelp": "Enter the folder path where example images from Civitai will be saved",
|
||||
"autoDownload": "Auto Download Example Images",
|
||||
"autoDownloadHelp": "Automatically download example images for models that don't have them (requires download location to be set)",
|
||||
"openMode": "Open Example Images Action",
|
||||
"openModeHelp": "Choose whether the action opens on the server, copies a mapped local path, or launches a custom URI.",
|
||||
"openModeOptions": {
|
||||
"system": "Open on server",
|
||||
"clipboard": "Copy local path",
|
||||
"uriTemplate": "Open custom URI"
|
||||
},
|
||||
"localRoot": "Local Example Images Root",
|
||||
"localRootHelp": "Optional local or mounted root that mirrors the server example images directory. If blank, the server path is reused.",
|
||||
"localRootPlaceholder": "Example: /Volumes/ComfyUI/example_images",
|
||||
"uriTemplate": "Open URI Template",
|
||||
"uriTemplateHelp": "Use a custom deep link such as a file URI or a Shortcuts link.",
|
||||
"uriTemplatePlaceholder": "Example: shortcuts://run-shortcut?name=Open%20Finder&input=text&text={{encoded_local_path}}",
|
||||
"uriTemplatePlaceholders": "Available placeholders: {{local_path}}, {{encoded_local_path}}, {{relative_path}}, {{encoded_relative_path}}, {{file_uri}}, {{encoded_file_uri}}",
|
||||
"openModeWikiLink": "Learn more about remote open modes",
|
||||
"optimizeImages": "Optimize Downloaded Images",
|
||||
"optimizeImagesHelp": "Optimize example images to reduce file size and improve loading speed (metadata will be preserved)",
|
||||
"download": "Download",
|
||||
@@ -574,7 +687,8 @@
|
||||
"autoOrganize": "Auto-Organize Selected",
|
||||
"skipMetadataRefresh": "Skip Metadata Refresh for Selected",
|
||||
"resumeMetadataRefresh": "Resume Metadata Refresh for Selected",
|
||||
"deleteAll": "Delete Selected Models",
|
||||
"deleteAll": "Delete Selected",
|
||||
"downloadMissingLoras": "Download Missing LoRAs",
|
||||
"clear": "Clear Selection",
|
||||
"skipMetadataRefreshCount": "Skip ({count} models)",
|
||||
"resumeMetadataRefreshCount": "Resume ({count} models)",
|
||||
@@ -604,6 +718,7 @@
|
||||
"moveToFolder": "Move to Folder",
|
||||
"repairMetadata": "Repair metadata",
|
||||
"excludeModel": "Exclude Model",
|
||||
"restoreModel": "Restore Model",
|
||||
"deleteModel": "Delete Model",
|
||||
"shareRecipe": "Share Recipe",
|
||||
"viewAllLoras": "View All LoRAs",
|
||||
@@ -622,9 +737,9 @@
|
||||
"title": "Import a recipe from image or URL",
|
||||
"urlLocalPath": "URL / Local Path",
|
||||
"uploadImage": "Upload Image",
|
||||
"urlSectionDescription": "Input a Civitai image URL or local file path to import as a recipe.",
|
||||
"urlSectionDescription": "Input a Civitai image URL from civitai.com or civitai.red, or a local file path, to import as a recipe.",
|
||||
"imageUrlOrPath": "Image URL or File Path:",
|
||||
"urlPlaceholder": "https://civitai.com/images/... or C:/path/to/image.png",
|
||||
"urlPlaceholder": "https://civitai.com/images/... or https://civitai.red/images/... or C:/path/to/image.png",
|
||||
"fetchImage": "Fetch Image",
|
||||
"uploadSectionDescription": "Upload an image with LoRA metadata to import as a recipe.",
|
||||
"selectImage": "Select Image",
|
||||
@@ -799,7 +914,8 @@
|
||||
"diffusion_model": "Diffusion Model"
|
||||
},
|
||||
"contextMenu": {
|
||||
"moveToOtherTypeFolder": "Move to {otherType} Folder"
|
||||
"moveToOtherTypeFolder": "Move to {otherType} Folder",
|
||||
"sendToWorkflow": "Send to Workflow"
|
||||
}
|
||||
},
|
||||
"embeddings": {
|
||||
@@ -812,8 +928,8 @@
|
||||
"unpinSidebar": "Unpin Sidebar",
|
||||
"switchToListView": "Switch to List View",
|
||||
"switchToTreeView": "Switch to Tree View",
|
||||
"recursiveOn": "Search subfolders",
|
||||
"recursiveOff": "Search current folder only",
|
||||
"recursiveOn": "Include subfolders",
|
||||
"recursiveOff": "Current folder only",
|
||||
"recursiveUnavailable": "Recursive search is available in tree view only",
|
||||
"collapseAllDisabled": "Not available in list view",
|
||||
"dragDrop": {
|
||||
@@ -893,6 +1009,8 @@
|
||||
"earlyAccess": "Early Access",
|
||||
"earlyAccessTooltip": "Early access required",
|
||||
"inLibrary": "In Library",
|
||||
"downloaded": "Downloaded",
|
||||
"downloadedTooltip": "Previously downloaded, but it is not currently in your library.",
|
||||
"alreadyInLibrary": "Already in Library",
|
||||
"autoOrganizedPath": "[Auto-organized by path template]",
|
||||
"errors": {
|
||||
@@ -983,6 +1101,14 @@
|
||||
"save": "Update Base Model",
|
||||
"cancel": "Cancel"
|
||||
},
|
||||
"bulkDownloadMissingLoras": {
|
||||
"title": "Download Missing LoRAs",
|
||||
"message": "Found {uniqueCount} unique missing LoRAs (from {totalCount} total across selected recipes).",
|
||||
"previewTitle": "LoRAs to download:",
|
||||
"moreItems": "...and {count} more",
|
||||
"note": "Files will be downloaded using default path templates. This may take a while depending on the number of LoRAs.",
|
||||
"downloadButton": "Download {count} LoRA(s)"
|
||||
},
|
||||
"exampleAccess": {
|
||||
"title": "Local Example Images",
|
||||
"message": "No local example images found for this model. View options:",
|
||||
@@ -1016,9 +1142,9 @@
|
||||
},
|
||||
"proceedText": "Only proceed if you're sure this is what you want.",
|
||||
"urlLabel": "Civitai Model URL:",
|
||||
"urlPlaceholder": "https://civitai.com/models/649516/model-name?modelVersionId=726676",
|
||||
"urlPlaceholder": "https://civitai.com/models/649516/model-name?modelVersionId=726676 or https://civitai.red/models/649516/model-name?modelVersionId=726676",
|
||||
"helpText": {
|
||||
"title": "Paste any Civitai model URL. Supported formats:",
|
||||
"title": "Paste any Civitai model URL from civitai.com or civitai.red. Supported formats:",
|
||||
"format1": "https://civitai.com/models/649516",
|
||||
"format2": "https://civitai.com/models/649516?modelVersionId=726676",
|
||||
"format3": "https://civitai.com/models/649516/model-name?modelVersionId=726676",
|
||||
@@ -1034,7 +1160,9 @@
|
||||
"viewOnCivitai": "View on Civitai",
|
||||
"viewOnCivitaiText": "View on Civitai",
|
||||
"viewCreatorProfile": "View Creator Profile",
|
||||
"openFileLocation": "Open File Location"
|
||||
"openFileLocation": "Open File Location",
|
||||
"sendToWorkflow": "Send to ComfyUI",
|
||||
"sendToWorkflowText": "Send to ComfyUI"
|
||||
},
|
||||
"openFileLocation": {
|
||||
"success": "File location opened successfully",
|
||||
@@ -1042,6 +1170,9 @@
|
||||
"copied": "Path copied to clipboard: {{path}}",
|
||||
"clipboardFallback": "Path: {{path}}"
|
||||
},
|
||||
"sendToWorkflow": {
|
||||
"noFilePath": "Unable to send to ComfyUI: No file path available"
|
||||
},
|
||||
"metadata": {
|
||||
"version": "Version",
|
||||
"fileName": "File Name",
|
||||
@@ -1078,6 +1209,8 @@
|
||||
"cancel": "Cancel editing",
|
||||
"save": "Save changes",
|
||||
"addPlaceholder": "Type to add or click suggestions below",
|
||||
"editWord": "Edit trigger word",
|
||||
"editPlaceholder": "Edit trigger word",
|
||||
"copyWord": "Copy trigger word",
|
||||
"deleteWord": "Delete trigger word",
|
||||
"suggestions": {
|
||||
@@ -1149,17 +1282,33 @@
|
||||
"days": "in {count}d"
|
||||
},
|
||||
"badges": {
|
||||
"current": "Current Version",
|
||||
"current": "Opened Version",
|
||||
"currentTooltip": "This is the version you opened this modal from",
|
||||
"inLibrary": "In Library",
|
||||
"inLibraryTooltip": "This version exists in your local library",
|
||||
"downloaded": "Downloaded",
|
||||
"downloadedTooltip": "This version was downloaded before, but is not currently in your library",
|
||||
"newer": "Newer Version",
|
||||
"newerTooltip": "This version is newer than your latest local version",
|
||||
"earlyAccess": "Early Access",
|
||||
"ignored": "Ignored"
|
||||
"earlyAccessTooltip": "This version currently requires Civitai early access",
|
||||
"ignored": "Ignored",
|
||||
"ignoredTooltip": "Update notifications are disabled for this version",
|
||||
"onSiteOnly": "On-Site Only",
|
||||
"onSiteOnlyTooltip": "This version is only available for on-site generation on Civitai"
|
||||
},
|
||||
"actions": {
|
||||
"download": "Download",
|
||||
"downloadTooltip": "Download this version",
|
||||
"downloadEarlyAccessTooltip": "Download this early access version from Civitai",
|
||||
"downloadNotAllowedTooltip": "This version is only available for on-site generation on Civitai",
|
||||
"delete": "Delete",
|
||||
"deleteTooltip": "Delete this local version",
|
||||
"ignore": "Ignore",
|
||||
"unignore": "Unignore",
|
||||
"ignoreTooltip": "Ignore update notifications for this version",
|
||||
"unignoreTooltip": "Resume update notifications for this version",
|
||||
"viewVersionOnCivitai": "View version on Civitai",
|
||||
"earlyAccessTooltip": "Requires early access purchase",
|
||||
"resumeModelUpdates": "Resume updates for this model",
|
||||
"ignoreModelUpdates": "Ignore updates for this model",
|
||||
@@ -1299,7 +1448,9 @@
|
||||
"recipeReplaced": "Recipe replaced in workflow",
|
||||
"recipeFailedToSend": "Failed to send recipe to workflow",
|
||||
"noMatchingNodes": "No compatible nodes available in the current workflow",
|
||||
"noTargetNodeSelected": "No target node selected"
|
||||
"noTargetNodeSelected": "No target node selected",
|
||||
"modelUpdated": "Model updated in workflow",
|
||||
"modelFailed": "Failed to update model node"
|
||||
},
|
||||
"nodeSelector": {
|
||||
"recipe": "Recipe",
|
||||
@@ -1313,6 +1464,10 @@
|
||||
"opened": "Example images folder opened",
|
||||
"openingFolder": "Opening example images folder",
|
||||
"failedToOpen": "Failed to open example images folder",
|
||||
"copiedPath": "Path copied to clipboard: {{path}}",
|
||||
"clipboardFallback": "Path: {{path}}",
|
||||
"copiedUri": "Link copied to clipboard: {{uri}}",
|
||||
"uriClipboardFallback": "Link: {{uri}}",
|
||||
"setupRequired": "Example Images Storage",
|
||||
"setupDescription": "To add custom example images, you need to set a download location first.",
|
||||
"setupUsage": "This path is used for both downloaded and custom example images.",
|
||||
@@ -1450,6 +1605,7 @@
|
||||
"pleaseSelectVersion": "Please select a version",
|
||||
"versionExists": "This version already exists in your library",
|
||||
"downloadCompleted": "Download completed successfully",
|
||||
"downloadSkippedByBaseModel": "Skipped download because base model {baseModel} is excluded",
|
||||
"autoOrganizeSuccess": "Auto-organize completed successfully for {count} {type}",
|
||||
"autoOrganizePartialSuccess": "Auto-organize completed with {success} moved, {failures} failed out of {total} models",
|
||||
"autoOrganizeFailed": "Auto-organize failed: {error}",
|
||||
@@ -1469,7 +1625,11 @@
|
||||
"nameUpdated": "Recipe name updated successfully",
|
||||
"tagsUpdated": "Recipe tags updated successfully",
|
||||
"sourceUrlUpdated": "Source URL updated successfully",
|
||||
"promptUpdated": "Prompt updated successfully",
|
||||
"negativePromptUpdated": "Negative prompt updated successfully",
|
||||
"promptEditorHint": "Press Enter to save, Shift+Enter for new line",
|
||||
"noRecipeId": "No recipe ID available",
|
||||
"sendToWorkflowFailed": "Failed to send recipe to workflow: {message}",
|
||||
"copyFailed": "Error copying recipe syntax: {message}",
|
||||
"noMissingLoras": "No missing LoRAs to download",
|
||||
"missingLorasInfoFailed": "Failed to get information for missing LoRAs",
|
||||
@@ -1507,7 +1667,10 @@
|
||||
"batchImportNoUrls": "Please enter at least one URL or file path",
|
||||
"batchImportNoDirectory": "Please enter a directory path",
|
||||
"batchImportBrowseFailed": "Failed to browse directory: {message}",
|
||||
"batchImportDirectorySelected": "Directory selected: {path}"
|
||||
"batchImportDirectorySelected": "Directory selected: {path}",
|
||||
"noRecipesSelected": "No recipes selected",
|
||||
"noMissingLorasInSelection": "No missing LoRAs found in selected recipes",
|
||||
"noLoraRootConfigured": "No LoRA root directory configured. Please set a default LoRA root in settings."
|
||||
},
|
||||
"models": {
|
||||
"noModelsSelected": "No models selected",
|
||||
@@ -1574,6 +1737,8 @@
|
||||
"mappingSaveFailed": "Failed to save base model mappings: {message}",
|
||||
"downloadTemplatesUpdated": "Download path templates updated",
|
||||
"downloadTemplatesFailed": "Failed to save download path templates: {message}",
|
||||
"recipesPathUpdated": "Recipes storage path updated",
|
||||
"recipesPathSaveFailed": "Failed to update recipes storage path: {message}",
|
||||
"settingsUpdated": "Settings updated: {setting}",
|
||||
"compactModeToggled": "Compact Mode {state}",
|
||||
"settingSaveFailed": "Failed to save setting: {message}",
|
||||
@@ -1624,8 +1789,8 @@
|
||||
},
|
||||
"triggerWords": {
|
||||
"loadFailed": "Could not load trained words",
|
||||
"tooLong": "Trigger word should not exceed 100 words",
|
||||
"tooMany": "Maximum 30 trigger words allowed",
|
||||
"tooLong": "Trigger word should not exceed 500 words",
|
||||
"tooMany": "Maximum 100 trigger words allowed",
|
||||
"alreadyExists": "This trigger word already exists",
|
||||
"updateSuccess": "Trigger words updated successfully",
|
||||
"updateFailed": "Failed to update trigger words",
|
||||
@@ -1686,6 +1851,8 @@
|
||||
"deleteFailed": "Failed to delete {type}: {message}",
|
||||
"excludeSuccess": "{type} excluded successfully",
|
||||
"excludeFailed": "Failed to exclude {type}: {message}",
|
||||
"restoreSuccess": "{type} restored successfully",
|
||||
"restoreFailed": "Failed to restore {type}: {message}",
|
||||
"fileNameUpdated": "File name updated successfully",
|
||||
"fileRenameFailed": "Failed to rename file: {error}",
|
||||
"previewUpdated": "Preview updated successfully",
|
||||
@@ -1717,6 +1884,37 @@
|
||||
"moveFailed": "Failed to move item: {message}"
|
||||
}
|
||||
},
|
||||
"doctor": {
|
||||
"kicker": "System diagnostics",
|
||||
"title": "Doctor",
|
||||
"buttonTitle": "Run diagnostics and common fixes",
|
||||
"loading": "Checking environment...",
|
||||
"footer": "Export a diagnostics bundle if the issue still persists after repair.",
|
||||
"summary": {
|
||||
"idle": "Run a health check for settings, cache integrity, and UI consistency.",
|
||||
"ok": "No active issues were found in the current environment.",
|
||||
"warning": "{count} issue(s) were found. Most can be fixed directly from this panel.",
|
||||
"error": "{count} issue(s) need attention before the app is fully healthy."
|
||||
},
|
||||
"status": {
|
||||
"ok": "Healthy",
|
||||
"warning": "Needs Attention",
|
||||
"error": "Action Required"
|
||||
},
|
||||
"actions": {
|
||||
"runAgain": "Run Again",
|
||||
"exportBundle": "Export Bundle"
|
||||
},
|
||||
"toast": {
|
||||
"loadFailed": "Failed to load diagnostics: {message}",
|
||||
"repairSuccess": "Cache rebuild completed.",
|
||||
"repairFailed": "Cache rebuild failed: {message}",
|
||||
"exportSuccess": "Diagnostics bundle exported.",
|
||||
"exportFailed": "Failed to export diagnostics bundle: {message}",
|
||||
"conflictsResolved": "{count} filename conflict(s) resolved.",
|
||||
"conflictsResolveFailed": "Failed to resolve filename conflicts: {message}"
|
||||
}
|
||||
},
|
||||
"banners": {
|
||||
"versionMismatch": {
|
||||
"title": "Application Update Detected",
|
||||
@@ -1746,4 +1944,4 @@
|
||||
"retry": "Retry"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
230
locales/es.json
@@ -15,7 +15,8 @@
|
||||
"settings": "Configuración",
|
||||
"help": "Ayuda",
|
||||
"add": "Añadir",
|
||||
"close": "Cerrar"
|
||||
"close": "Cerrar",
|
||||
"menu": "Menú"
|
||||
},
|
||||
"status": {
|
||||
"loading": "Cargando...",
|
||||
@@ -175,6 +176,9 @@
|
||||
"success": "Se repararon con éxito {count} recetas.",
|
||||
"cancelled": "Reparación cancelada. {count} recetas fueron reparadas.",
|
||||
"error": "Error al reparar recetas: {message}"
|
||||
},
|
||||
"manageExcludedModels": {
|
||||
"label": "Gestionar modelos excluidos"
|
||||
}
|
||||
},
|
||||
"header": {
|
||||
@@ -222,12 +226,14 @@
|
||||
"presetOverwriteConfirm": "El preset \"{name}\" ya existe. ¿Sobrescribir?",
|
||||
"presetNamePlaceholder": "Nombre del preajuste...",
|
||||
"baseModel": "Modelo base",
|
||||
"baseModelSearchPlaceholder": "Buscar modelos base...",
|
||||
"modelTags": "Etiquetas (Top 20)",
|
||||
"modelTypes": "Tipos de modelos",
|
||||
"license": "Licencia",
|
||||
"noCreditRequired": "Sin crédito requerido",
|
||||
"allowSellingGeneratedContent": "Venta permitida",
|
||||
"noTags": "Sin etiquetas",
|
||||
"noBaseModelMatches": "Ningún modelo base coincide con la búsqueda actual.",
|
||||
"clearAll": "Limpiar todos los filtros",
|
||||
"any": "Cualquiera",
|
||||
"all": "Todos",
|
||||
@@ -250,6 +256,33 @@
|
||||
"civitaiApiKey": "Clave API de Civitai",
|
||||
"civitaiApiKeyPlaceholder": "Introduce tu clave API de Civitai",
|
||||
"civitaiApiKeyHelp": "Utilizada para autenticación al descargar modelos de Civitai",
|
||||
"civitaiHost": {
|
||||
"label": "Host de Civitai",
|
||||
"help": "Elige qué sitio de Civitai se abre al usar los enlaces de \"View on Civitai\".",
|
||||
"options": {
|
||||
"com": "civitai.com (solo SFW)",
|
||||
"red": "civitai.red (sin restricciones)"
|
||||
}
|
||||
},
|
||||
"downloadBackend": {
|
||||
"label": "Backend de descarga",
|
||||
"help": "Elige cómo se descargan los archivos del modelo. Python usa el descargador integrado. aria2 usa el proceso externo experimental de descarga.",
|
||||
"options": {
|
||||
"python": "Python (integrado)",
|
||||
"aria2": "aria2 (experimental)"
|
||||
}
|
||||
},
|
||||
"aria2cPath": {
|
||||
"label": "Ruta de aria2c",
|
||||
"help": "Ruta opcional al ejecutable aria2c. Déjalo vacío para usar aria2c desde el PATH del sistema.",
|
||||
"placeholder": "Déjalo vacío para usar aria2c desde el PATH"
|
||||
},
|
||||
"aria2HelpLink": "Aprende a configurar el backend de descarga aria2",
|
||||
"civitaiHostBanner": {
|
||||
"title": "Preferencia de host de Civitai disponible",
|
||||
"content": "Civitai ahora usa civitai.com para contenido SFW y civitai.red para contenido sin restricciones. Puedes cambiar en Ajustes qué sitio se abre por defecto.",
|
||||
"openSettings": "Abrir ajustes"
|
||||
},
|
||||
"openSettingsFileLocation": {
|
||||
"label": "Abrir carpeta de ajustes",
|
||||
"tooltip": "Abrir la carpeta que contiene settings.json",
|
||||
@@ -260,10 +293,13 @@
|
||||
},
|
||||
"sections": {
|
||||
"contentFiltering": "Filtrado de contenido",
|
||||
"downloads": "Descargas",
|
||||
"videoSettings": "Configuración de video",
|
||||
"layoutSettings": "Configuración de diseño",
|
||||
"misc": "Varios",
|
||||
"backup": "Copias de seguridad",
|
||||
"folderSettings": "Raíces predeterminadas",
|
||||
"recipeSettings": "Recetas",
|
||||
"extraFolderPaths": "Rutas de carpetas adicionales",
|
||||
"downloadPathTemplates": "Plantillas de rutas de descarga",
|
||||
"priorityTags": "Etiquetas prioritarias",
|
||||
@@ -291,7 +327,15 @@
|
||||
"blurNsfwContent": "Difuminar contenido NSFW",
|
||||
"blurNsfwContentHelp": "Difuminar imágenes de vista previa de contenido para adultos (NSFW)",
|
||||
"showOnlySfw": "Mostrar solo resultados SFW",
|
||||
"showOnlySfwHelp": "Filtrar todo el contenido NSFW al navegar y buscar"
|
||||
"showOnlySfwHelp": "Filtrar todo el contenido NSFW al navegar y buscar",
|
||||
"matureBlurThreshold": "Umbral de difuminado para contenido adulto",
|
||||
"matureBlurThresholdHelp": "Establecer a partir de qué nivel de clasificación comienza el filtrado por difuminado cuando el difuminado NSFW está habilitado.",
|
||||
"matureBlurThresholdOptions": {
|
||||
"pg13": "PG13 y superior",
|
||||
"r": "R y superior (predeterminado)",
|
||||
"x": "X y superior",
|
||||
"xxx": "Solo XXX"
|
||||
}
|
||||
},
|
||||
"videoSettings": {
|
||||
"autoplayOnHover": "Reproducir videos automáticamente al pasar el ratón",
|
||||
@@ -315,6 +359,54 @@
|
||||
"saveFailed": "No se pudieron guardar las rutas a omitir: {message}"
|
||||
}
|
||||
},
|
||||
"backup": {
|
||||
"autoEnabled": "Copias de seguridad automáticas",
|
||||
"autoEnabledHelp": "Crea una instantánea local una vez al día y conserva las más recientes según la política de retención.",
|
||||
"retention": "Cantidad de retención",
|
||||
"retentionHelp": "Cuántas instantáneas automáticas conservar antes de eliminar las antiguas.",
|
||||
"management": "Gestión de copias",
|
||||
"managementHelp": "Exporta tu estado de usuario actual o restáuralo desde un archivo de copia de seguridad.",
|
||||
"scopeHelp": "Incluye tu configuración, el historial de descargas y el estado de actualización de los modelos. No incluye los archivos de modelo ni las cachés que se pueden regenerar.",
|
||||
"locationSummary": "Ubicación actual de la copia",
|
||||
"openFolderButton": "Abrir carpeta de copias",
|
||||
"openFolderSuccess": "Carpeta de copias abierta",
|
||||
"openFolderFailed": "No se pudo abrir la carpeta de copias",
|
||||
"locationCopied": "Ruta de la copia copiada al portapapeles: {{path}}",
|
||||
"locationClipboardFallback": "Ruta de la copia: {{path}}",
|
||||
"exportButton": "Exportar copia",
|
||||
"exportSuccess": "Copia exportada correctamente.",
|
||||
"exportFailed": "No se pudo exportar la copia: {message}",
|
||||
"importButton": "Importar copia",
|
||||
"importConfirm": "¿Importar esta copia y sobrescribir el estado local del usuario?",
|
||||
"importSuccess": "Copia importada correctamente.",
|
||||
"importFailed": "No se pudo importar la copia: {message}",
|
||||
"latestSnapshot": "Última instantánea",
|
||||
"latestAutoSnapshot": "Última instantánea automática",
|
||||
"snapshotCount": "Instantáneas guardadas",
|
||||
"noneAvailable": "Aún no hay instantáneas"
|
||||
},
|
||||
"downloadSkipBaseModels": {
|
||||
"label": "Omitir descargas para modelos base",
|
||||
"help": "Se aplica a todos los flujos de descarga. Aquí solo se pueden seleccionar modelos base compatibles.",
|
||||
"searchPlaceholder": "Filtrar modelos base...",
|
||||
"empty": "Ningún modelo base coincide con la búsqueda actual.",
|
||||
"summary": {
|
||||
"none": "Ninguno seleccionado",
|
||||
"count": "{count} seleccionados"
|
||||
},
|
||||
"actions": {
|
||||
"edit": "Editar",
|
||||
"collapse": "Contraer",
|
||||
"clear": "Limpiar"
|
||||
},
|
||||
"validation": {
|
||||
"saveFailed": "No se pudieron guardar los modelos base excluidos: {message}"
|
||||
}
|
||||
},
|
||||
"skipPreviouslyDownloadedModelVersions": {
|
||||
"label": "Omitir versiones de modelos previamente descargadas",
|
||||
"help": "Cuando está habilitado, LoRA Manager omitirá la descarga de una versión de modelo si el servicio de historial de descargas registra esa versión exacta como ya descargada. Aplica a todos los flujos de descarga."
|
||||
},
|
||||
"layoutSettings": {
|
||||
"displayDensity": "Densidad de visualización",
|
||||
"displayDensityOptions": {
|
||||
@@ -337,6 +429,8 @@
|
||||
"hover": "Mostrar al pasar el ratón"
|
||||
},
|
||||
"cardInfoDisplayHelp": "Elige cuándo mostrar información del modelo y botones de acción",
|
||||
"showVersionOnCard": "Mostrar versión en la tarjeta",
|
||||
"showVersionOnCardHelp": "Mostrar u ocultar el nombre de versión en las tarjetas de modelo",
|
||||
"modelCardFooterAction": "Acción del botón de tarjeta de modelo",
|
||||
"modelCardFooterActionOptions": {
|
||||
"exampleImages": "Abrir imágenes de ejemplo",
|
||||
@@ -363,12 +457,16 @@
|
||||
"defaultUnetRootHelp": "Establecer el directorio raíz predeterminado de Diffusion Model (UNET) para descargas, importaciones y movimientos",
|
||||
"defaultEmbeddingRoot": "Raíz de embedding",
|
||||
"defaultEmbeddingRootHelp": "Establecer el directorio raíz predeterminado de embedding para descargas, importaciones y movimientos",
|
||||
"recipesPath": "Ruta de almacenamiento de recetas",
|
||||
"recipesPathHelp": "Directorio personalizado opcional para las recetas guardadas. Déjalo vacío para usar la carpeta recipes del primer directorio raíz de LoRA.",
|
||||
"recipesPathPlaceholder": "/path/to/recipes",
|
||||
"recipesPathMigrating": "Migrando el almacenamiento de recetas...",
|
||||
"noDefault": "Sin predeterminado"
|
||||
},
|
||||
"extraFolderPaths": {
|
||||
"title": "Rutas de carpetas adicionales",
|
||||
"help": "Agregue carpetas de modelos adicionales fuera de las rutas estándar de ComfyUI. Estas rutas se almacenan por separado y se escanean junto con las carpetas predeterminadas.",
|
||||
"description": "Configure carpetas adicionales para escanear modelos. Estas rutas son específicas de LoRA Manager y se fusionarán con las rutas predeterminadas de ComfyUI.",
|
||||
"description": "Rutas raíz de modelos adicionales exclusivas para LoRA Manager. Cargue modelos desde ubicaciones fuera de las carpetas estándar de ComfyUI, ideal para bibliotecas grandes que de otro modo ralentizarían ComfyUI.",
|
||||
"restartRequired": "Requires restart to take effect",
|
||||
"modelTypes": {
|
||||
"lora": "Rutas de LoRA",
|
||||
"checkpoint": "Rutas de Checkpoint",
|
||||
@@ -376,7 +474,7 @@
|
||||
"embedding": "Rutas de Embedding"
|
||||
},
|
||||
"pathPlaceholder": "/ruta/a/modelos/extra",
|
||||
"saveSuccess": "Rutas de carpetas adicionales actualizadas.",
|
||||
"saveSuccess": "Rutas de carpetas adicionales actualizadas. Se requiere reinicio para aplicar los cambios.",
|
||||
"saveError": "Error al actualizar las rutas de carpetas adicionales: {message}",
|
||||
"validation": {
|
||||
"duplicatePath": "Esta ruta ya está configurada"
|
||||
@@ -444,6 +542,21 @@
|
||||
"downloadLocationHelp": "Introduce la ruta de la carpeta donde se guardarán las imágenes de ejemplo de Civitai",
|
||||
"autoDownload": "Descargar automáticamente imágenes de ejemplo",
|
||||
"autoDownloadHelp": "Descargar automáticamente imágenes de ejemplo para modelos que no las tengan (requiere que se establezca la ubicación de descarga)",
|
||||
"openMode": "Acción al abrir imágenes de ejemplo",
|
||||
"openModeHelp": "Elige si la acción se abre en el servidor, copia una ruta local asignada o lanza una URI personalizada.",
|
||||
"openModeOptions": {
|
||||
"system": "Abrir en el servidor",
|
||||
"clipboard": "Copiar ruta local",
|
||||
"uriTemplate": "Abrir URI personalizada"
|
||||
},
"localRoot": "Raíz local de imágenes de ejemplo",
"localRootHelp": "Raíz local o montada opcional que refleja el directorio de imágenes de ejemplo del servidor. Si se deja en blanco, se reutiliza la ruta del servidor.",
"localRootPlaceholder": "Ejemplo: /Volumes/ComfyUI/example_images",
|
||||
"uriTemplate": "Abrir plantilla de URI",
|
||||
"uriTemplateHelp": "Usa un enlace profundo personalizado, como un URI de archivo o un enlace de Shortcuts.",
|
||||
"uriTemplatePlaceholder": "Ejemplo: shortcuts://run-shortcut?name=Open%20Finder&input=text&text={{encoded_local_path}}",
|
||||
"uriTemplatePlaceholders": "Marcadores disponibles: {{local_path}}, {{encoded_local_path}}, {{relative_path}}, {{encoded_relative_path}}, {{file_uri}}, {{encoded_file_uri}}",
|
||||
"openModeWikiLink": "Más información sobre los modos de apertura remota",
|
||||
"optimizeImages": "Optimizar imágenes descargadas",
|
||||
"optimizeImagesHelp": "Optimizar imágenes de ejemplo para reducir el tamaño del archivo y mejorar la velocidad de carga (se preservarán los metadatos)",
|
||||
"download": "Descargar",
|
||||
@@ -574,7 +687,8 @@
|
||||
"autoOrganize": "Auto-organizar seleccionados",
|
||||
"skipMetadataRefresh": "Omitir actualización de metadatos para seleccionados",
|
||||
"resumeMetadataRefresh": "Reanudar actualización de metadatos para seleccionados",
|
||||
"deleteAll": "Eliminar todos los modelos",
|
||||
"deleteAll": "Eliminar seleccionados",
|
||||
"downloadMissingLoras": "Descargar LoRAs faltantes",
|
||||
"clear": "Limpiar selección",
|
||||
"skipMetadataRefreshCount": "Omitir({count} modelos)",
|
||||
"resumeMetadataRefreshCount": "Reanudar({count} modelos)",
|
||||
@@ -604,6 +718,7 @@
|
||||
"moveToFolder": "Mover a carpeta",
|
||||
"repairMetadata": "Reparar metadatos",
|
||||
"excludeModel": "Excluir modelo",
|
||||
"restoreModel": "Restaurar modelo",
|
||||
"deleteModel": "Eliminar modelo",
|
||||
"shareRecipe": "Compartir receta",
|
||||
"viewAllLoras": "Ver todos los LoRAs",
|
||||
@@ -799,7 +914,8 @@
|
||||
"diffusion_model": "Diffusion Model"
|
||||
},
|
||||
"contextMenu": {
|
||||
"moveToOtherTypeFolder": "Mover a la carpeta {otherType}"
|
||||
"moveToOtherTypeFolder": "Mover a la carpeta {otherType}",
|
||||
"sendToWorkflow": "Enviar al flujo de trabajo"
|
||||
}
|
||||
},
|
||||
"embeddings": {
|
||||
@@ -812,8 +928,8 @@
|
||||
"unpinSidebar": "Desfijar barra lateral",
|
||||
"switchToListView": "Cambiar a vista de lista",
|
||||
"switchToTreeView": "Cambiar a vista de árbol",
|
||||
"recursiveOn": "Buscar en subcarpetas",
|
||||
"recursiveOff": "Buscar solo en la carpeta actual",
|
||||
"recursiveOn": "Incluir subcarpetas",
|
||||
"recursiveOff": "Solo carpeta actual",
|
||||
"recursiveUnavailable": "La búsqueda recursiva solo está disponible en la vista en árbol",
|
||||
"collapseAllDisabled": "No disponible en vista de lista",
|
||||
"dragDrop": {
|
||||
@@ -893,6 +1009,8 @@
|
||||
"earlyAccess": "Acceso temprano",
|
||||
"earlyAccessTooltip": "Acceso temprano requerido",
|
||||
"inLibrary": "En la biblioteca",
|
||||
"downloaded": "Descargado",
|
||||
"downloadedTooltip": "Descargado anteriormente, pero actualmente no está en tu biblioteca.",
|
||||
"alreadyInLibrary": "Ya en la biblioteca",
|
||||
"autoOrganizedPath": "[Auto-organizado por plantilla de ruta]",
|
||||
"errors": {
|
||||
@@ -983,6 +1101,14 @@
|
||||
"save": "Actualizar modelo base",
|
||||
"cancel": "Cancelar"
|
||||
},
|
||||
"bulkDownloadMissingLoras": {
|
||||
"title": "Descargar LoRAs faltantes",
|
||||
"message": "Se encontraron {uniqueCount} LoRAs faltantes únicos (de {totalCount} en total entre las recetas seleccionadas).",
|
||||
"previewTitle": "LoRAs para descargar:",
|
||||
"moreItems": "...y {count} más",
|
||||
"note": "Los archivos se descargarán usando las plantillas de ruta predeterminadas. Esto puede tomar un tiempo dependiendo del número de LoRAs.",
|
||||
"downloadButton": "Descargar {count} LoRA(s)"
|
||||
},
|
||||
"exampleAccess": {
|
||||
"title": "Imágenes de ejemplo locales",
|
||||
"message": "No se encontraron imágenes de ejemplo locales para este modelo. Opciones de visualización:",
|
||||
@@ -1034,7 +1160,9 @@
|
||||
"viewOnCivitai": "Ver en Civitai",
|
||||
"viewOnCivitaiText": "Ver en Civitai",
|
||||
"viewCreatorProfile": "Ver perfil del creador",
|
||||
"openFileLocation": "Abrir ubicación del archivo"
|
||||
"openFileLocation": "Abrir ubicación del archivo",
|
||||
"sendToWorkflow": "Enviar a ComfyUI",
|
||||
"sendToWorkflowText": "Enviar a ComfyUI"
|
||||
},
|
||||
"openFileLocation": {
|
||||
"success": "Ubicación del archivo abierta exitosamente",
|
||||
@@ -1042,6 +1170,9 @@
|
||||
"copied": "Ruta copiada al portapapeles: {{path}}",
|
||||
"clipboardFallback": "Ruta: {{path}}"
|
||||
},
|
||||
"sendToWorkflow": {
|
||||
"noFilePath": "No se puede enviar a ComfyUI: no hay ruta de archivo disponible"
|
||||
},
|
||||
"metadata": {
|
||||
"version": "Versión",
|
||||
"fileName": "Nombre de archivo",
|
||||
@@ -1078,6 +1209,8 @@
|
||||
"cancel": "Cancelar edición",
|
||||
"save": "Guardar cambios",
|
||||
"addPlaceholder": "Escribe para añadir o haz clic en sugerencias de abajo",
|
||||
"editWord": "Editar palabra de activación",
|
||||
"editPlaceholder": "Editar palabra de activación",
|
||||
"copyWord": "Copiar palabra clave",
|
||||
"deleteWord": "Eliminar palabra clave",
|
||||
"suggestions": {
|
||||
@@ -1149,17 +1282,33 @@
|
||||
"days": "en {count}d"
|
||||
},
|
||||
"badges": {
|
||||
"current": "Versión actual",
|
||||
"current": "Versión abierta",
|
||||
"currentTooltip": "Es la versión con la que abriste este modal",
|
||||
"inLibrary": "En la biblioteca",
|
||||
"inLibraryTooltip": "Esta versión existe en tu biblioteca local",
|
||||
"downloaded": "Descargado",
|
||||
"downloadedTooltip": "Esta versión se descargó antes, pero ahora no está en tu biblioteca",
|
||||
"newer": "Versión más reciente",
|
||||
"newerTooltip": "Esta versión es más reciente que tu última versión local",
|
||||
"earlyAccess": "Acceso temprano",
|
||||
"ignored": "Ignorada"
|
||||
"earlyAccessTooltip": "Esta versión requiere actualmente acceso temprano de Civitai",
|
||||
"ignored": "Ignorada",
|
||||
"ignoredTooltip": "Las notificaciones de actualización están desactivadas para esta versión",
|
||||
"onSiteOnly": "Solo en Sitio",
|
||||
"onSiteOnlyTooltip": "Esta versión solo está disponible para generación en el sitio de Civitai"
|
||||
},
|
||||
"actions": {
|
||||
"download": "Descargar",
|
||||
"downloadTooltip": "Descargar esta versión",
|
||||
"downloadEarlyAccessTooltip": "Descargar esta versión de acceso temprano desde Civitai",
|
||||
"downloadNotAllowedTooltip": "Esta versión solo está disponible para generación en el sitio de Civitai",
|
||||
"delete": "Eliminar",
|
||||
"deleteTooltip": "Eliminar esta versión local",
|
||||
"ignore": "Ignorar",
|
||||
"unignore": "Dejar de ignorar",
|
||||
"ignoreTooltip": "Ignorar las notificaciones de actualización de esta versión",
|
||||
"unignoreTooltip": "Reanudar las notificaciones de actualización de esta versión",
|
||||
"viewVersionOnCivitai": "Ver versión en Civitai",
|
||||
"earlyAccessTooltip": "Requiere compra de acceso temprano",
|
||||
"resumeModelUpdates": "Reanudar actualizaciones para este modelo",
|
||||
"ignoreModelUpdates": "Ignorar actualizaciones para este modelo",
|
||||
@@ -1299,7 +1448,9 @@
|
||||
"recipeReplaced": "Receta reemplazada en el flujo de trabajo",
|
||||
"recipeFailedToSend": "Error al enviar receta al flujo de trabajo",
|
||||
"noMatchingNodes": "No hay nodos compatibles disponibles en el flujo de trabajo actual",
|
||||
"noTargetNodeSelected": "No se ha seleccionado ningún nodo de destino"
|
||||
"noTargetNodeSelected": "No se ha seleccionado ningún nodo de destino",
|
||||
"modelUpdated": "Modelo actualizado en el flujo de trabajo",
|
||||
"modelFailed": "Error al actualizar nodo de modelo"
|
||||
},
|
||||
"nodeSelector": {
|
||||
"recipe": "Receta",
|
||||
@@ -1313,6 +1464,10 @@
|
||||
"opened": "Carpeta de imágenes de ejemplo abierta",
|
||||
"openingFolder": "Abriendo carpeta de imágenes de ejemplo",
|
||||
"failedToOpen": "Error al abrir carpeta de imágenes de ejemplo",
|
||||
"copiedPath": "Ruta copiada al portapapeles: {{path}}",
|
||||
"clipboardFallback": "Ruta: {{path}}",
|
||||
"copiedUri": "Enlace copiado al portapapeles: {{uri}}",
|
||||
"uriClipboardFallback": "Enlace: {{uri}}",
|
||||
"setupRequired": "Almacenamiento de imágenes de ejemplo",
|
||||
"setupDescription": "Para agregar imágenes de ejemplo personalizadas, primero necesita establecer una ubicación de descarga.",
|
||||
"setupUsage": "Esta ruta se utiliza tanto para imágenes de ejemplo descargadas como personalizadas.",
|
||||
@@ -1450,6 +1605,7 @@
|
||||
"pleaseSelectVersion": "Por favor selecciona una versión",
|
||||
"versionExists": "Esta versión ya existe en tu biblioteca",
|
||||
"downloadCompleted": "Descarga completada exitosamente",
|
||||
"downloadSkippedByBaseModel": "Descarga omitida porque el modelo base {baseModel} está excluido",
|
||||
"autoOrganizeSuccess": "Auto-organización completada exitosamente para {count} {type}",
|
||||
"autoOrganizePartialSuccess": "Auto-organización completada con {success} movidos, {failures} fallidos de un total de {total} modelos",
|
||||
"autoOrganizeFailed": "Auto-organización fallida: {error}",
|
||||
@@ -1469,7 +1625,11 @@
|
||||
"nameUpdated": "Nombre de receta actualizado exitosamente",
|
||||
"tagsUpdated": "Etiquetas de receta actualizadas exitosamente",
|
||||
"sourceUrlUpdated": "URL de origen actualizada exitosamente",
|
||||
"promptUpdated": "Prompt actualizado exitosamente",
|
||||
"negativePromptUpdated": "Prompt negativo actualizado exitosamente",
|
||||
"promptEditorHint": "Presiona Enter para guardar, Shift+Enter para nueva línea",
|
||||
"noRecipeId": "No hay ID de receta disponible",
|
||||
"sendToWorkflowFailed": "Error al enviar la receta al flujo de trabajo: {message}",
|
||||
"copyFailed": "Error copiando sintaxis de receta: {message}",
|
||||
"noMissingLoras": "No hay LoRAs faltantes para descargar",
|
||||
"missingLorasInfoFailed": "Error al obtener información de LoRAs faltantes",
|
||||
@@ -1507,7 +1667,10 @@
|
||||
"batchImportNoUrls": "Please enter at least one URL or file path",
|
||||
"batchImportNoDirectory": "Please enter a directory path",
|
||||
"batchImportBrowseFailed": "Failed to browse directory: {message}",
|
||||
"batchImportDirectorySelected": "Directory selected: {path}"
|
||||
"batchImportDirectorySelected": "Directory selected: {path}",
|
||||
"noRecipesSelected": "No se han seleccionado recetas",
|
||||
"noMissingLorasInSelection": "No se encontraron LoRAs faltantes en las recetas seleccionadas",
|
||||
"noLoraRootConfigured": "No se ha configurado el directorio raíz de LoRA. Por favor, establezca un directorio raíz de LoRA predeterminado en la configuración."
|
||||
},
|
||||
"models": {
|
||||
"noModelsSelected": "No hay modelos seleccionados",
|
||||
@@ -1574,6 +1737,8 @@
|
||||
"mappingSaveFailed": "Error al guardar mapeos de modelo base: {message}",
|
||||
"downloadTemplatesUpdated": "Plantillas de rutas de descarga actualizadas",
|
||||
"downloadTemplatesFailed": "Error al guardar plantillas de rutas de descarga: {message}",
|
||||
"recipesPathUpdated": "Ruta de almacenamiento de recetas actualizada",
|
||||
"recipesPathSaveFailed": "Error al actualizar la ruta de almacenamiento de recetas: {message}",
|
||||
"settingsUpdated": "Configuración actualizada: {setting}",
|
||||
"compactModeToggled": "Modo compacto {state}",
|
||||
"settingSaveFailed": "Error al guardar configuración: {message}",
|
||||
@@ -1624,8 +1789,8 @@
|
||||
},
|
||||
"triggerWords": {
|
||||
"loadFailed": "No se pudieron cargar palabras entrenadas",
|
||||
"tooLong": "La palabra clave no debe exceder 100 palabras",
|
||||
"tooMany": "Máximo 30 palabras clave permitidas",
|
||||
"tooLong": "La palabra clave no debe exceder 500 palabras",
|
||||
"tooMany": "Máximo 100 palabras clave permitidas",
|
||||
"alreadyExists": "Esta palabra clave ya existe",
|
||||
"updateSuccess": "Palabras clave actualizadas exitosamente",
|
||||
"updateFailed": "Error al actualizar palabras clave",
|
||||
@@ -1686,6 +1851,8 @@
|
||||
"deleteFailed": "Error al eliminar {type}: {message}",
|
||||
"excludeSuccess": "{type} excluido exitosamente",
|
||||
"excludeFailed": "Error al excluir {type}: {message}",
|
||||
"restoreSuccess": "{type} restaurado correctamente",
|
||||
"restoreFailed": "No se pudo restaurar {type}: {message}",
|
||||
"fileNameUpdated": "Nombre de archivo actualizado exitosamente",
|
||||
"fileRenameFailed": "Error al renombrar archivo: {error}",
|
||||
"previewUpdated": "Vista previa actualizada exitosamente",
|
||||
@@ -1717,6 +1884,37 @@
|
||||
"moveFailed": "Failed to move item: {message}"
|
||||
}
|
||||
},
|
||||
"doctor": {
|
||||
"kicker": "Diagnósticos del sistema",
|
||||
"title": "Doctor",
|
||||
"buttonTitle": "Ejecutar diagnósticos y correcciones comunes",
|
||||
"loading": "Comprobando el entorno...",
|
||||
"footer": "Exporta un paquete de diagnóstico si el problema persiste después de la reparación.",
|
||||
"summary": {
|
||||
"idle": "Ejecuta una comprobación del estado de la configuración, la integridad de la caché y la coherencia de la interfaz.",
|
||||
"ok": "No se encontraron problemas activos en el entorno actual.",
|
||||
"warning": "Se encontraron {count} problema(s). La mayoría se puede solucionar directamente desde este panel.",
|
||||
"error": "Se encontraron {count} problema(s). Deben atenderse antes de que la aplicación esté completamente saludable."
|
||||
},
|
||||
"status": {
|
||||
"ok": "Saludable",
|
||||
"warning": "Requiere atención",
|
||||
"error": "Se requiere acción"
|
||||
},
|
||||
"actions": {
|
||||
"runAgain": "Ejecutar de nuevo",
|
||||
"exportBundle": "Exportar paquete"
|
||||
},
|
||||
"toast": {
|
||||
"loadFailed": "Error al cargar los diagnósticos: {message}",
|
||||
"repairSuccess": "Reconstrucción de caché completada.",
|
||||
"repairFailed": "Error al reconstruir la caché: {message}",
|
||||
"exportSuccess": "Paquete de diagnósticos exportado.",
|
||||
"exportFailed": "Error al exportar el paquete de diagnósticos: {message}",
|
||||
"conflictsResolved": "{count} conflicto(s) de nombre de archivo resuelto(s).",
|
||||
"conflictsResolveFailed": "Error al resolver conflictos de nombre de archivo: {message}"
|
||||
}
|
||||
},
|
||||
"banners": {
|
||||
"versionMismatch": {
|
||||
"title": "Actualización de la aplicación detectada",
230
locales/fr.json
@@ -15,7 +15,8 @@
|
||||
"settings": "Paramètres",
|
||||
"help": "Aide",
|
||||
"add": "Ajouter",
|
||||
"close": "Fermer"
|
||||
"close": "Fermer",
|
||||
"menu": "Menu"
|
||||
},
|
||||
"status": {
|
||||
"loading": "Chargement...",
|
||||
@@ -175,6 +176,9 @@
|
||||
"success": "{count} recettes réparées avec succès.",
|
||||
"cancelled": "Réparation annulée. {count} recettes ont été réparées.",
|
||||
"error": "Échec de la réparation des recettes : {message}"
|
||||
},
|
||||
"manageExcludedModels": {
|
||||
"label": "Gérer les modèles exclus"
|
||||
}
|
||||
},
|
||||
"header": {
|
||||
@@ -222,12 +226,14 @@
|
||||
"presetOverwriteConfirm": "Le préréglage \"{name}\" existe déjà. Remplacer?",
|
||||
"presetNamePlaceholder": "Nom du préréglage...",
|
||||
"baseModel": "Modèle de base",
|
||||
"baseModelSearchPlaceholder": "Rechercher des modèles de base...",
|
||||
"modelTags": "Tags (Top 20)",
|
||||
"modelTypes": "Types de modèles",
|
||||
"license": "Licence",
|
||||
"noCreditRequired": "Crédit non requis",
|
||||
"allowSellingGeneratedContent": "Vente autorisée",
|
||||
"noTags": "Aucun tag",
|
||||
"noBaseModelMatches": "Aucun modèle de base ne correspond à la recherche actuelle.",
|
||||
"clearAll": "Effacer tous les filtres",
|
||||
"any": "N'importe quel",
|
||||
"all": "Tous",
|
||||
@@ -250,6 +256,33 @@
|
||||
"civitaiApiKey": "Clé API Civitai",
|
||||
"civitaiApiKeyPlaceholder": "Entrez votre clé API Civitai",
|
||||
"civitaiApiKeyHelp": "Utilisée pour l'authentification lors du téléchargement de modèles depuis Civitai",
|
||||
"civitaiHost": {
|
||||
"label": "Hôte Civitai",
|
||||
"help": "Choisissez quel site Civitai s'ouvre lorsque vous utilisez les liens « View on Civitai ».",
|
||||
"options": {
|
||||
"com": "civitai.com (SFW uniquement)",
|
||||
"red": "civitai.red (sans restriction)"
|
||||
}
|
||||
},
|
||||
"downloadBackend": {
|
||||
"label": "Moteur de téléchargement",
|
||||
"help": "Choisissez comment les fichiers de modèles sont téléchargés. Python utilise le téléchargeur intégré. aria2 utilise le processus externe expérimental de téléchargement.",
|
||||
"options": {
|
||||
"python": "Python (intégré)",
|
||||
"aria2": "aria2 (expérimental)"
|
||||
}
|
||||
},
|
||||
"aria2cPath": {
|
||||
"label": "Chemin vers aria2c",
|
||||
"help": "Chemin facultatif vers l’exécutable aria2c. Laissez vide pour utiliser aria2c depuis le PATH système.",
|
||||
"placeholder": "Laisser vide pour utiliser aria2c depuis le PATH"
|
||||
},
|
||||
"aria2HelpLink": "Apprenez à configurer le backend de téléchargement aria2",
|
||||
"civitaiHostBanner": {
|
||||
"title": "Préférence d’hôte Civitai disponible",
|
||||
"content": "Civitai utilise désormais civitai.com pour le contenu SFW et civitai.red pour le contenu sans restriction. Vous pouvez modifier dans les paramètres le site ouvert par défaut.",
|
||||
"openSettings": "Ouvrir les paramètres"
|
||||
},
|
||||
"openSettingsFileLocation": {
|
||||
"label": "Ouvrir le dossier des paramètres",
|
||||
"tooltip": "Ouvrir le dossier contenant settings.json",
|
||||
@@ -260,10 +293,13 @@
|
||||
},
|
||||
"sections": {
|
||||
"contentFiltering": "Filtrage du contenu",
|
||||
"downloads": "Téléchargements",
|
||||
"videoSettings": "Paramètres vidéo",
|
||||
"layoutSettings": "Paramètres d'affichage",
|
||||
"misc": "Divers",
|
||||
"backup": "Sauvegardes",
|
||||
"folderSettings": "Racines par défaut",
|
||||
"recipeSettings": "Recipes",
|
||||
"extraFolderPaths": "Chemins de dossiers supplémentaires",
|
||||
"downloadPathTemplates": "Modèles de chemin de téléchargement",
|
||||
"priorityTags": "Étiquettes prioritaires",
|
||||
@@ -291,7 +327,15 @@
|
||||
"blurNsfwContent": "Flouter le contenu NSFW",
|
||||
"blurNsfwContentHelp": "Flouter les images d'aperçu de contenu pour adultes (NSFW)",
|
||||
"showOnlySfw": "Afficher uniquement les résultats SFW",
|
||||
"showOnlySfwHelp": "Filtrer tout le contenu NSFW lors de la navigation et de la recherche"
|
||||
"showOnlySfwHelp": "Filtrer tout le contenu NSFW lors de la navigation et de la recherche",
|
||||
"matureBlurThreshold": "Seuil de floutage pour contenu adulte",
|
||||
"matureBlurThresholdHelp": "Définir à partir de quel niveau de classification le floutage s'applique lorsque le floutage NSFW est activé.",
|
||||
"matureBlurThresholdOptions": {
|
||||
"pg13": "PG13 et plus",
|
||||
"r": "R et plus (par défaut)",
|
||||
"x": "X et plus",
|
||||
"xxx": "XXX uniquement"
|
||||
}
|
||||
},
|
||||
"videoSettings": {
|
||||
"autoplayOnHover": "Lecture automatique vidéo au survol",
|
||||
@@ -315,6 +359,54 @@
|
||||
"saveFailed": "Impossible d'enregistrer les chemins à ignorer : {message}"
|
||||
}
|
||||
},
|
||||
"backup": {
|
||||
"autoEnabled": "Sauvegardes automatiques",
|
||||
"autoEnabledHelp": "Crée un instantané local une fois par jour et conserve les plus récents selon la politique de rétention.",
|
||||
"retention": "Nombre de rétention",
|
||||
"retentionHelp": "Combien d'instantanés automatiques conserver avant de supprimer les plus anciens.",
|
||||
"management": "Gestion des sauvegardes",
|
||||
"managementHelp": "Exporte l'état actuel de l'utilisateur ou restaure-le depuis une archive de sauvegarde.",
|
||||
"scopeHelp": "Inclut vos paramètres, l'historique des téléchargements et l'état des mises à jour des modèles. Les fichiers de modèle et les caches régénérables ne sont pas inclus.",
|
||||
"locationSummary": "Emplacement actuel des sauvegardes",
|
||||
"openFolderButton": "Ouvrir le dossier de sauvegarde",
|
||||
"openFolderSuccess": "Dossier de sauvegarde ouvert",
|
||||
"openFolderFailed": "Impossible d'ouvrir le dossier de sauvegarde",
|
||||
"locationCopied": "Chemin de sauvegarde copié dans le presse-papiers : {{path}}",
|
||||
"locationClipboardFallback": "Chemin de sauvegarde : {{path}}",
|
||||
"exportButton": "Exporter la sauvegarde",
|
||||
"exportSuccess": "Sauvegarde exportée avec succès.",
|
||||
"exportFailed": "Échec de l'export de la sauvegarde : {message}",
|
||||
"importButton": "Importer la sauvegarde",
|
||||
"importConfirm": "Importer cette sauvegarde et écraser l'état local de l'utilisateur ?",
|
||||
"importSuccess": "Sauvegarde importée avec succès.",
|
||||
"importFailed": "Échec de l'import de la sauvegarde : {message}",
|
||||
"latestSnapshot": "Dernier instantané",
|
||||
"latestAutoSnapshot": "Dernier instantané automatique",
|
||||
"snapshotCount": "Instantanés enregistrés",
|
||||
"noneAvailable": "Aucun instantané pour le moment"
|
||||
},
|
||||
"downloadSkipBaseModels": {
|
||||
"label": "Ignorer les téléchargements pour certains modèles de base",
|
||||
"help": "S’applique à tous les flux de téléchargement. Seuls les modèles de base pris en charge peuvent être sélectionnés ici.",
|
||||
"searchPlaceholder": "Filtrer les modèles de base...",
|
||||
"empty": "Aucun modèle de base ne correspond à la recherche actuelle.",
|
||||
"summary": {
|
||||
"none": "Aucune sélection",
|
||||
"count": "{count} sélectionnés"
|
||||
},
|
||||
"actions": {
|
||||
"edit": "Modifier",
|
||||
"collapse": "Réduire",
|
||||
"clear": "Effacer"
|
||||
},
|
||||
"validation": {
|
||||
"saveFailed": "Impossible d’enregistrer les modèles de base exclus : {message}"
|
||||
}
|
||||
},
|
||||
"skipPreviouslyDownloadedModelVersions": {
"label": "Ignorer les versions de modèles précédemment téléchargées",
"help": "Lorsque activé, LoRA Manager ignorera le téléchargement d'une version de modèle si le service d'historique des téléchargements enregistre cette version exacte comme déjà téléchargée. S'applique à tous les flux de téléchargement."
},
|
||||
"layoutSettings": {
|
||||
"displayDensity": "Densité d'affichage",
|
||||
"displayDensityOptions": {
|
||||
@@ -337,6 +429,8 @@
|
||||
"hover": "Révéler au survol"
|
||||
},
|
||||
"cardInfoDisplayHelp": "Choisissez quand afficher les informations du modèle et les boutons d'action",
|
||||
"showVersionOnCard": "Afficher la version sur la carte",
|
||||
"showVersionOnCardHelp": "Afficher ou masquer le nom de version sur les cartes de modèle",
|
||||
"modelCardFooterAction": "Action du bouton de carte de modèle",
|
||||
"modelCardFooterActionOptions": {
|
||||
"exampleImages": "Ouvrir les images d'exemple",
|
||||
@@ -363,12 +457,16 @@
|
||||
"defaultUnetRootHelp": "Définir le répertoire racine Diffusion Model (UNET) par défaut pour les téléchargements, imports et déplacements",
|
||||
"defaultEmbeddingRoot": "Racine Embedding",
|
||||
"defaultEmbeddingRootHelp": "Définir le répertoire racine embedding par défaut pour les téléchargements, imports et déplacements",
|
||||
"recipesPath": "Recipes Storage Path",
|
||||
"recipesPathHelp": "Optional custom directory for stored recipes. Leave empty to use the first LoRA root's recipes folder.",
|
||||
"recipesPathPlaceholder": "/path/to/recipes",
|
||||
"recipesPathMigrating": "Migrating recipes storage...",
|
||||
"noDefault": "Aucun par défaut"
|
||||
},
|
||||
"extraFolderPaths": {
|
||||
"title": "Chemins de dossiers supplémentaires",
|
||||
"help": "Ajoutez des dossiers de modèles supplémentaires en dehors des chemins standard de ComfyUI. Ces chemins sont stockés séparément et analysés aux côtés des dossiers par défaut.",
|
||||
"description": "Configurez des dossiers supplémentaires pour l'analyse de modèles. Ces chemins sont spécifiques à LoRA Manager et seront fusionnés avec les chemins par défaut de ComfyUI.",
|
||||
"description": "Chemins racine de modèles supplémentaires exclusifs à LoRA Manager. Chargez des modèles depuis des emplacements en dehors des dossiers standard de ComfyUI, idéal pour les grandes bibliothèques qui ralentiraient autrement ComfyUI.",
|
||||
"restartRequired": "Requires restart to take effect",
|
||||
"modelTypes": {
|
||||
"lora": "Chemins LoRA",
|
||||
"checkpoint": "Chemins Checkpoint",
|
||||
@@ -376,7 +474,7 @@
|
||||
"embedding": "Chemins Embedding"
|
||||
},
|
||||
"pathPlaceholder": "/chemin/vers/modèles/supplémentaires",
|
||||
"saveSuccess": "Chemins de dossiers supplémentaires mis à jour.",
|
||||
"saveSuccess": "Chemins de dossiers supplémentaires mis à jour. Redémarrage requis pour appliquer les changements.",
|
||||
"saveError": "Échec de la mise à jour des chemins de dossiers supplémentaires: {message}",
|
||||
"validation": {
|
||||
"duplicatePath": "Ce chemin est déjà configuré"
|
||||
@@ -444,6 +542,21 @@
|
||||
"downloadLocationHelp": "Entrez le chemin du dossier où les images d'exemple de Civitai seront sauvegardées",
|
||||
"autoDownload": "Téléchargement automatique des images d'exemple",
|
||||
"autoDownloadHelp": "Télécharger automatiquement les images d'exemple pour les modèles qui n'en ont pas (nécessite que l'emplacement de téléchargement soit défini)",
|
||||
"openMode": "Action d’ouverture des images d’exemple",
|
||||
"openModeHelp": "Choisissez si l’action s’ouvre sur le serveur, copie un chemin local mappé ou lance une URI personnalisée.",
|
||||
"openModeOptions": {
|
||||
"system": "Ouvrir sur le serveur",
|
||||
"clipboard": "Copier le chemin local",
|
||||
"uriTemplate": "Ouvrir une URI personnalisée"
|
||||
},
|
||||
"localRoot": "Racine locale des images d’exemple",
|
||||
"localRootHelp": "Racine locale ou montée facultative qui reflète le répertoire des images d’exemple du serveur. Si vide, le chemin du serveur est réutilisé.",
|
||||
"localRootPlaceholder": "Exemple : /Volumes/ComfyUI/example_images",
|
||||
"uriTemplate": "Ouvrir le modèle d’URI",
|
||||
"uriTemplateHelp": "Utilisez un lien profond personnalisé, tel qu’une URI de fichier ou un lien Shortcuts.",
|
||||
"uriTemplatePlaceholder": "Exemple : shortcuts://run-shortcut?name=Open%20Finder&input=text&text={{encoded_local_path}}",
|
||||
"uriTemplatePlaceholders": "Paramètres disponibles : {{local_path}}, {{encoded_local_path}}, {{relative_path}}, {{encoded_relative_path}}, {{file_uri}}, {{encoded_file_uri}}",
|
||||
"openModeWikiLink": "En savoir plus sur les modes d'ouverture à distance",
|
||||
"optimizeImages": "Optimiser les images téléchargées",
|
||||
"optimizeImagesHelp": "Optimiser les images d'exemple pour réduire la taille du fichier et améliorer la vitesse de chargement (les métadonnées seront préservées)",
|
||||
"download": "Télécharger",
|
||||
@@ -574,7 +687,8 @@
|
||||
"autoOrganize": "Auto-organiser la sélection",
|
||||
"skipMetadataRefresh": "Ignorer l'actualisation des métadonnées pour la sélection",
|
||||
"resumeMetadataRefresh": "Reprendre l'actualisation des métadonnées pour la sélection",
|
||||
"deleteAll": "Supprimer tous les modèles",
|
||||
"deleteAll": "Supprimer la sélection",
|
||||
"downloadMissingLoras": "Télécharger les LoRAs manquants",
|
||||
"clear": "Effacer la sélection",
|
||||
"skipMetadataRefreshCount": "Ignorer({count} modèles)",
|
||||
"resumeMetadataRefreshCount": "Reprendre({count} modèles)",
|
||||
@@ -604,6 +718,7 @@
|
||||
"moveToFolder": "Déplacer vers un dossier",
|
||||
"repairMetadata": "Réparer les métadonnées",
|
||||
"excludeModel": "Exclure le modèle",
|
||||
"restoreModel": "Restaurer le modèle",
|
||||
"deleteModel": "Supprimer le modèle",
|
||||
"shareRecipe": "Partager la recipe",
|
||||
"viewAllLoras": "Voir tous les LoRAs",
|
||||
@@ -799,7 +914,8 @@
|
||||
"diffusion_model": "Diffusion Model"
|
||||
},
|
||||
"contextMenu": {
|
||||
"moveToOtherTypeFolder": "Déplacer vers le dossier {otherType}"
|
||||
"moveToOtherTypeFolder": "Déplacer vers le dossier {otherType}",
|
||||
"sendToWorkflow": "Envoyer vers le workflow"
|
||||
}
|
||||
},
|
||||
"embeddings": {
|
||||
@@ -812,8 +928,8 @@
|
||||
"unpinSidebar": "Désépingler la barre latérale",
|
||||
"switchToListView": "Passer en vue liste",
|
||||
"switchToTreeView": "Passer en vue arborescence",
|
||||
"recursiveOn": "Rechercher dans les sous-dossiers",
|
||||
"recursiveOff": "Rechercher uniquement dans le dossier actuel",
|
||||
"recursiveOn": "Inclure les sous-dossiers",
|
||||
"recursiveOff": "Dossier actuel uniquement",
|
||||
"recursiveUnavailable": "La recherche récursive n'est disponible qu'en vue arborescente",
|
||||
"collapseAllDisabled": "Non disponible en vue liste",
|
||||
"dragDrop": {
|
||||
@@ -893,6 +1009,8 @@
|
||||
"earlyAccess": "Accès anticipé",
|
||||
"earlyAccessTooltip": "Accès anticipé requis",
|
||||
"inLibrary": "Dans la bibliothèque",
|
||||
"downloaded": "Téléchargé",
|
||||
"downloadedTooltip": "Déjà téléchargé, mais il n'est actuellement pas dans votre bibliothèque.",
|
||||
"alreadyInLibrary": "Déjà dans la bibliothèque",
|
||||
"autoOrganizedPath": "[Auto-organisé par modèle de chemin]",
|
||||
"errors": {
|
||||
@@ -983,6 +1101,14 @@
|
||||
"save": "Mettre à jour le modèle de base",
|
||||
"cancel": "Annuler"
|
||||
},
|
||||
"bulkDownloadMissingLoras": {
|
||||
"title": "Télécharger les LoRAs manquants",
|
||||
"message": "{uniqueCount} LoRAs manquants uniques trouvés (sur un total de {totalCount} dans les recettes sélectionnées).",
|
||||
"previewTitle": "LoRAs à télécharger :",
|
||||
"moreItems": "...et {count} de plus",
|
||||
"note": "Les fichiers seront téléchargés en utilisant les modèles de chemins par défaut. Cela peut prendre un certain temps selon le nombre de LoRAs.",
|
||||
"downloadButton": "Télécharger {count} LoRA(s)"
|
||||
},
|
||||
"exampleAccess": {
|
||||
"title": "Images d'exemple locales",
|
||||
"message": "Aucune image d'exemple locale trouvée pour ce modèle. Options d'affichage :",
|
||||
@@ -1034,7 +1160,9 @@
|
||||
"viewOnCivitai": "Voir sur Civitai",
|
||||
"viewOnCivitaiText": "Voir sur Civitai",
|
||||
"viewCreatorProfile": "Voir le profil du créateur",
|
||||
"openFileLocation": "Ouvrir l'emplacement du fichier"
|
||||
"openFileLocation": "Ouvrir l'emplacement du fichier",
|
||||
"sendToWorkflow": "Envoyer vers ComfyUI",
|
||||
"sendToWorkflowText": "Envoyer vers ComfyUI"
|
||||
},
|
||||
"openFileLocation": {
|
||||
"success": "Emplacement du fichier ouvert avec succès",
|
||||
@@ -1042,6 +1170,9 @@
|
||||
"copied": "Chemin copié dans le presse-papiers: {{path}}",
|
||||
"clipboardFallback": "Chemin: {{path}}"
|
||||
},
|
||||
"sendToWorkflow": {
|
||||
"noFilePath": "Impossible d'envoyer vers ComfyUI : aucun chemin de fichier disponible"
|
||||
},
|
||||
"metadata": {
|
||||
"version": "Version",
|
||||
"fileName": "Nom de fichier",
|
||||
@@ -1078,6 +1209,8 @@
|
||||
"cancel": "Annuler la modification",
|
||||
"save": "Sauvegarder les modifications",
|
||||
"addPlaceholder": "Tapez pour ajouter ou cliquez sur les suggestions ci-dessous",
|
||||
"editWord": "Modifier le mot déclencheur",
|
||||
"editPlaceholder": "Modifier le mot déclencheur",
|
||||
"copyWord": "Copier le mot-clé",
|
||||
"deleteWord": "Supprimer le mot-clé",
|
||||
"suggestions": {
|
||||
@@ -1149,17 +1282,33 @@
|
||||
"days": "dans {count}j"
|
||||
},
|
||||
"badges": {
|
||||
"current": "Version actuelle",
|
||||
"current": "Version ouverte",
|
||||
"currentTooltip": "C'est la version à partir de laquelle cette fenêtre a été ouverte",
|
||||
"inLibrary": "Dans la bibliothèque",
|
||||
"inLibraryTooltip": "Cette version existe dans votre bibliothèque locale",
|
||||
"downloaded": "Téléchargé",
|
||||
"downloadedTooltip": "Cette version a déjà été téléchargée, mais n'est pas actuellement dans votre bibliothèque",
|
||||
"newer": "Version plus récente",
|
||||
"newerTooltip": "Cette version est plus récente que votre dernière version locale",
|
||||
"earlyAccess": "Accès anticipé",
|
||||
"ignored": "Ignorée"
|
||||
"earlyAccessTooltip": "Cette version nécessite actuellement l'accès anticipé Civitai",
|
||||
"ignored": "Ignorée",
|
||||
"ignoredTooltip": "Les notifications de mise à jour sont désactivées pour cette version",
|
||||
"onSiteOnly": "Uniquement sur Site",
|
||||
"onSiteOnlyTooltip": "Cette version n'est disponible que pour la génération sur le site Civitai"
|
||||
},
|
||||
"actions": {
|
||||
"download": "Télécharger",
|
||||
"downloadTooltip": "Télécharger cette version",
|
||||
"downloadEarlyAccessTooltip": "Télécharger cette version en accès anticipé depuis Civitai",
|
||||
"downloadNotAllowedTooltip": "Cette version n'est disponible que pour la génération sur le site Civitai",
|
||||
"delete": "Supprimer",
|
||||
"deleteTooltip": "Supprimer cette version locale",
|
||||
"ignore": "Ignorer",
|
||||
"unignore": "Ne plus ignorer",
|
||||
"ignoreTooltip": "Ignorer les notifications de mise à jour pour cette version",
|
||||
"unignoreTooltip": "Reprendre les notifications de mise à jour pour cette version",
|
||||
"viewVersionOnCivitai": "Voir la version sur Civitai",
|
||||
"earlyAccessTooltip": "Nécessite l'achat de l'accès anticipé",
|
||||
"resumeModelUpdates": "Reprendre les mises à jour pour ce modèle",
|
||||
"ignoreModelUpdates": "Ignorer les mises à jour pour ce modèle",
|
||||
@@ -1299,7 +1448,9 @@
|
||||
"recipeReplaced": "Recipe remplacée dans le workflow",
|
||||
"recipeFailedToSend": "Échec de l'envoi de la recipe au workflow",
|
||||
"noMatchingNodes": "Aucun nœud compatible disponible dans le workflow actuel",
|
||||
"noTargetNodeSelected": "Aucun nœud cible sélectionné"
|
||||
"noTargetNodeSelected": "Aucun nœud cible sélectionné",
|
||||
"modelUpdated": "Modèle mis à jour dans le workflow",
|
||||
"modelFailed": "Échec de la mise à jour du nœud modèle"
|
||||
},
|
||||
"nodeSelector": {
|
||||
"recipe": "Recipe",
|
||||
@@ -1313,6 +1464,10 @@
|
||||
"opened": "Dossier d'images d'exemple ouvert",
|
||||
"openingFolder": "Ouverture du dossier d'images d'exemple",
|
||||
"failedToOpen": "Échec de l'ouverture du dossier d'images d'exemple",
|
||||
"copiedPath": "Chemin copié dans le presse-papiers : {{path}}",
|
||||
"clipboardFallback": "Chemin : {{path}}",
|
||||
"copiedUri": "Lien copié dans le presse-papiers : {{uri}}",
|
||||
"uriClipboardFallback": "Lien : {{uri}}",
|
||||
"setupRequired": "Stockage d'images d'exemple",
|
||||
"setupDescription": "Pour ajouter des images d'exemple personnalisées, vous devez d'abord définir un emplacement de téléchargement.",
|
||||
"setupUsage": "Ce chemin est utilisé pour les images d'exemple téléchargées et personnalisées.",
|
||||
@@ -1450,6 +1605,7 @@
|
||||
"pleaseSelectVersion": "Veuillez sélectionner une version",
|
||||
"versionExists": "Cette version existe déjà dans votre bibliothèque",
|
||||
"downloadCompleted": "Téléchargement terminé avec succès",
|
||||
"downloadSkippedByBaseModel": "Téléchargement ignoré, car le modèle de base {baseModel} est exclu",
|
||||
"autoOrganizeSuccess": "Auto-organisation terminée avec succès pour {count} {type}",
|
||||
"autoOrganizePartialSuccess": "Auto-organisation terminée avec {success} déplacés, {failures} échecs sur {total} modèles",
|
||||
"autoOrganizeFailed": "Échec de l'auto-organisation : {error}",
|
||||
@@ -1469,7 +1625,11 @@
|
||||
"nameUpdated": "Nom de la recipe mis à jour avec succès",
|
||||
"tagsUpdated": "Tags de la recipe mis à jour avec succès",
|
||||
"sourceUrlUpdated": "URL source mise à jour avec succès",
|
||||
"promptUpdated": "Prompt mis à jour avec succès",
|
||||
"negativePromptUpdated": "Prompt négatif mis à jour avec succès",
|
||||
"promptEditorHint": "Appuyez sur Entrée pour sauvegarder, Maj+Entrée pour nouvelle ligne",
|
||||
"noRecipeId": "Aucun ID de recipe disponible",
|
||||
"sendToWorkflowFailed": "Échec de l'envoi de la recette vers le workflow : {message}",
|
||||
"copyFailed": "Erreur lors de la copie de la syntaxe de la recipe : {message}",
|
||||
"noMissingLoras": "Aucun LoRA manquant à télécharger",
|
||||
"missingLorasInfoFailed": "Échec de l'obtention des informations pour les LoRAs manquants",
|
||||
@@ -1507,7 +1667,10 @@
|
||||
"batchImportNoUrls": "Please enter at least one URL or file path",
|
||||
"batchImportNoDirectory": "Please enter a directory path",
|
||||
"batchImportBrowseFailed": "Failed to browse directory: {message}",
|
||||
"batchImportDirectorySelected": "Directory selected: {path}"
|
||||
"batchImportDirectorySelected": "Directory selected: {path}",
|
||||
"noRecipesSelected": "Aucune recette sélectionnée",
|
||||
"noMissingLorasInSelection": "Aucun LoRA manquant trouvé dans les recettes sélectionnées",
|
||||
"noLoraRootConfigured": "Aucun répertoire racine LoRA configuré. Veuillez définir un répertoire racine LoRA par défaut dans les paramètres."
|
||||
},
|
||||
"models": {
|
||||
"noModelsSelected": "Aucun modèle sélectionné",
|
||||
@@ -1574,6 +1737,8 @@
|
||||
"mappingSaveFailed": "Échec de la sauvegarde des mappages de modèle de base : {message}",
|
||||
"downloadTemplatesUpdated": "Modèles de chemin de téléchargement mis à jour",
|
||||
"downloadTemplatesFailed": "Échec de la sauvegarde des modèles de chemin de téléchargement : {message}",
|
||||
"recipesPathUpdated": "Recipes storage path updated",
|
||||
"recipesPathSaveFailed": "Failed to update recipes storage path: {message}",
|
||||
"settingsUpdated": "Paramètres mis à jour : {setting}",
|
||||
"compactModeToggled": "Mode compact {state}",
|
||||
"settingSaveFailed": "Échec de la sauvegarde du paramètre : {message}",
|
||||
@@ -1624,8 +1789,8 @@
|
||||
},
|
||||
"triggerWords": {
|
||||
"loadFailed": "Impossible de charger les mots entraînés",
|
||||
"tooLong": "Le mot-clé ne doit pas dépasser 100 mots",
|
||||
"tooMany": "Maximum 30 mots-clés autorisés",
|
||||
"tooLong": "Le mot-clé ne doit pas dépasser 500 mots",
|
||||
"tooMany": "Maximum 100 mots-clés autorisés",
|
||||
"alreadyExists": "Ce mot-clé existe déjà",
|
||||
"updateSuccess": "Mots-clés mis à jour avec succès",
|
||||
"updateFailed": "Échec de la mise à jour des mots-clés",
|
||||
@@ -1686,6 +1851,8 @@
|
||||
"deleteFailed": "Échec de la suppression de {type} : {message}",
|
||||
"excludeSuccess": "{type} exclu avec succès",
|
||||
"excludeFailed": "Échec de l'exclusion de {type} : {message}",
|
||||
"restoreSuccess": "{type} restauré avec succès",
|
||||
"restoreFailed": "Échec de la restauration de {type} : {message}",
|
||||
"fileNameUpdated": "Nom de fichier mis à jour avec succès",
|
||||
"fileRenameFailed": "Échec du renommage du fichier : {error}",
|
||||
"previewUpdated": "Aperçu mis à jour avec succès",
|
||||
@@ -1717,6 +1884,37 @@
|
||||
"moveFailed": "Failed to move item: {message}"
|
||||
}
|
||||
},
|
||||
"doctor": {
|
||||
"kicker": "Diagnostics système",
|
||||
"title": "Docteur",
|
||||
"buttonTitle": "Lancer les diagnostics et les corrections courantes",
|
||||
"loading": "Vérification de l'environnement...",
|
||||
"footer": "Exportez un lot de diagnostic si le problème persiste après la réparation.",
|
||||
"summary": {
|
||||
"idle": "Lancez une vérification de l'état des paramètres, de l'intégrité du cache et de la cohérence de l'interface.",
|
||||
"ok": "Aucun problème actif n'a été trouvé dans l'environnement actuel.",
|
||||
"warning": "{count} problème(s) ont été trouvés. La plupart peuvent être corrigés directement depuis ce panneau.",
|
||||
"error": "{count} problème(s) nécessitent une attention avant que l'application soit entièrement saine."
|
||||
},
|
||||
"status": {
|
||||
"ok": "Sain",
|
||||
"warning": "Nécessite une attention",
|
||||
"error": "Action requise"
|
||||
},
|
||||
"actions": {
|
||||
"runAgain": "Relancer",
|
||||
"exportBundle": "Exporter le lot"
|
||||
},
|
||||
"toast": {
|
||||
"loadFailed": "Échec du chargement des diagnostics : {message}",
|
||||
"repairSuccess": "Reconstruction du cache terminée.",
|
||||
"repairFailed": "Échec de la reconstruction du cache : {message}",
|
||||
"exportSuccess": "Lot de diagnostics exporté.",
|
||||
"exportFailed": "Échec de l'export du lot de diagnostics : {message}",
|
||||
"conflictsResolved": "{count} conflit(s) de nom de fichier résolu(s).",
|
||||
"conflictsResolveFailed": "Échec de la résolution des conflits de nom de fichier : {message}"
|
||||
}
|
||||
},
|
||||
"banners": {
|
||||
"versionMismatch": {
|
||||
"title": "Mise à jour de l'application détectée",
230
locales/he.json
@@ -15,7 +15,8 @@
|
||||
"settings": "הגדרות",
|
||||
"help": "עזרה",
|
||||
"add": "הוספה",
|
||||
"close": "סגור"
|
||||
"close": "סגור",
|
||||
"menu": "תפריט"
|
||||
},
|
||||
"status": {
|
||||
"loading": "טוען...",
|
||||
@@ -175,6 +176,9 @@
|
||||
"success": "תוקנו בהצלחה {count} מתכונים.",
|
||||
"cancelled": "תיקון בוטל. {count} מתכונים תוקנו.",
|
||||
"error": "תיקון המתכונים נכשל: {message}"
|
||||
},
|
||||
"manageExcludedModels": {
|
||||
"label": "ניהול מודלים מוחרגים"
|
||||
}
|
||||
},
|
||||
"header": {
|
||||
@@ -222,12 +226,14 @@
|
||||
"presetOverwriteConfirm": "הפריסט \"{name}\" כבר קיים. לדרוס?",
|
||||
"presetNamePlaceholder": "שם קביעה מראש...",
|
||||
"baseModel": "מודל בסיס",
|
||||
"baseModelSearchPlaceholder": "חפש מודלי בסיס...",
|
||||
"modelTags": "תגיות (20 המובילות)",
|
||||
"modelTypes": "סוגי מודלים",
|
||||
"license": "רישיון",
|
||||
"noCreditRequired": "ללא קרדיט נדרש",
|
||||
"allowSellingGeneratedContent": "אפשר מכירה",
|
||||
"noTags": "ללא תגיות",
|
||||
"noBaseModelMatches": "אין מודלי בסיס התואמים לחיפוש הנוכחי.",
|
||||
"clearAll": "נקה את כל המסננים",
|
||||
"any": "כלשהו",
|
||||
"all": "כל התגים",
|
||||
@@ -250,6 +256,33 @@
|
||||
"civitaiApiKey": "מפתח API של Civitai",
|
||||
"civitaiApiKeyPlaceholder": "הזן את מפתח ה-API שלך מ-Civitai",
|
||||
"civitaiApiKeyHelp": "משמש לאימות בעת הורדת מודלים מ-Civitai",
|
||||
"civitaiHost": {
|
||||
"label": "מארח Civitai",
|
||||
"help": "בחר איזה אתר של Civitai ייפתח בעת שימוש בקישורי \"View on Civitai\".",
|
||||
"options": {
|
||||
"com": "civitai.com (SFW בלבד)",
|
||||
"red": "civitai.red (ללא הגבלות)"
|
||||
}
|
||||
},
|
||||
"downloadBackend": {
|
||||
"label": "מנגנון הורדה",
|
||||
"help": "בחר כיצד יורדים קבצי המודל. Python משתמש במוריד המובנה. aria2 משתמש בתהליך הורדה חיצוני ניסיוני.",
|
||||
"options": {
|
||||
"python": "Python (מובנה)",
|
||||
"aria2": "aria2 (ניסיוני)"
|
||||
}
|
||||
},
|
||||
"aria2cPath": {
|
||||
"label": "נתיב aria2c",
|
||||
"help": "נתיב אופציונלי לקובץ ההפעלה aria2c. השאר ריק כדי להשתמש ב-aria2c מתוך ה-PATH של המערכת.",
|
||||
"placeholder": "השאר ריק כדי להשתמש ב-aria2c מתוך ה-PATH"
|
||||
},
|
||||
"aria2HelpLink": "למד כיצד להגדיר את מנוע ההורדה aria2",
|
||||
"civitaiHostBanner": {
|
||||
"title": "העדפת מארח Civitai זמינה",
|
||||
"content": "Civitai משתמש כעת ב-civitai.com עבור תוכן SFW וב-civitai.red עבור תוכן ללא הגבלות. ניתן לשנות בהגדרות איזה אתר ייפתח כברירת מחדל.",
|
||||
"openSettings": "פתח הגדרות"
|
||||
},
|
||||
"openSettingsFileLocation": {
|
||||
"label": "פתח תיקיית הגדרות",
|
||||
"tooltip": "פתח את התיקייה שמכילה את settings.json",
|
||||
@@ -260,10 +293,13 @@
|
||||
},
|
||||
"sections": {
|
||||
"contentFiltering": "סינון תוכן",
|
||||
"downloads": "הורדות",
|
||||
"videoSettings": "הגדרות וידאו",
|
||||
"layoutSettings": "הגדרות פריסה",
|
||||
"misc": "שונות",
|
||||
"backup": "גיבויים",
|
||||
"folderSettings": "תיקיות ברירת מחדל",
|
||||
"recipeSettings": "מתכונים",
|
||||
"extraFolderPaths": "נתיבי תיקיות נוספים",
|
||||
"downloadPathTemplates": "תבניות נתיב הורדה",
|
||||
"priorityTags": "תגיות עדיפות",
|
||||
@@ -291,7 +327,15 @@
|
||||
"blurNsfwContent": "טשטש תוכן NSFW",
|
||||
"blurNsfwContentHelp": "טשטש תמונות תצוגה מקדימה של תוכן למבוגרים (NSFW)",
|
||||
"showOnlySfw": "הצג רק תוצאות SFW",
|
||||
"showOnlySfwHelp": "סנן את כל התוכן ה-NSFW בעת גלישה וחיפוש"
|
||||
"showOnlySfwHelp": "סנן את כל התוכן ה-NSFW בעת גלישה וחיפוש",
|
||||
"matureBlurThreshold": "סף טשטוש תוכן מבוגרים",
|
||||
"matureBlurThresholdHelp": "הגדר מאיזו רמת דירוג מתחיל סינון הטשטוש כאשר טשטוש NSFW מופעל.",
|
||||
"matureBlurThresholdOptions": {
|
||||
"pg13": "PG13 ומעלה",
|
||||
"r": "R ומעלה (ברירת מחדל)",
|
||||
"x": "X ומעלה",
|
||||
"xxx": "XXX בלבד"
|
||||
}
|
||||
},
|
||||
"videoSettings": {
|
||||
"autoplayOnHover": "נגן וידאו אוטומטית בריחוף",
|
||||
@@ -315,6 +359,54 @@
|
||||
"saveFailed": "לא ניתן לשמור נתיבי דילוג: {message}"
|
||||
}
|
||||
},
|
||||
"backup": {
|
||||
"autoEnabled": "גיבויים אוטומטיים",
|
||||
"autoEnabledHelp": "יוצר צילום מצב מקומי פעם ביום ושומר את הצילומים האחרונים לפי מדיניות השמירה.",
|
||||
"retention": "כמות שמירה",
|
||||
"retentionHelp": "כמה צילומי מצב אוטומטיים לשמור לפני שמסירים ישנים.",
|
||||
"management": "ניהול גיבויים",
|
||||
"managementHelp": "ייצא את מצב המשתמש הנוכחי או שחזר אותו מארכיון גיבוי.",
|
||||
"scopeHelp": "כולל את ההגדרות שלך, היסטוריית ההורדות ומצב עדכוני המודלים. אינו כולל קובצי מודל או מטמונים שניתן לשחזר.",
|
||||
"locationSummary": "מיקום הגיבוי הנוכחי",
|
||||
"openFolderButton": "פתח את תיקיית הגיבויים",
|
||||
"openFolderSuccess": "תיקיית הגיבויים נפתחה",
|
||||
"openFolderFailed": "לא ניתן היה לפתוח את תיקיית הגיבויים",
|
||||
"locationCopied": "נתיב הגיבוי הועתק ללוח: {{path}}",
|
||||
"locationClipboardFallback": "נתיב הגיבוי: {{path}}",
|
||||
"exportButton": "ייצא גיבוי",
|
||||
"exportSuccess": "הגיבוי יוצא בהצלחה.",
|
||||
"exportFailed": "נכשל ייצוא הגיבוי: {message}",
|
||||
"importButton": "ייבא גיבוי",
|
||||
"importConfirm": "לייבא את הגיבוי הזה ולדרוס את מצב המשתמש המקומי?",
|
||||
"importSuccess": "הגיבוי יובא בהצלחה.",
|
||||
"importFailed": "נכשל ייבוא הגיבוי: {message}",
|
||||
"latestSnapshot": "צילום המצב האחרון",
|
||||
"latestAutoSnapshot": "צילום המצב האוטומטי האחרון",
|
||||
"snapshotCount": "צילומי מצב שמורים",
|
||||
"noneAvailable": "עדיין אין צילומי מצב"
|
||||
},
|
||||
"downloadSkipBaseModels": {
|
||||
"label": "דלג על הורדות עבור מודלי בסיס",
|
||||
"help": "חל על כל תהליכי ההורדה. ניתן לבחור כאן רק מודלי בסיס נתמכים.",
|
||||
"searchPlaceholder": "סנן מודלי בסיס...",
|
||||
"empty": "אין מודלי בסיס התואמים לחיפוש הנוכחי.",
|
||||
"summary": {
|
||||
"none": "לא נבחר דבר",
|
||||
"count": "{count} נבחרו"
|
||||
},
|
||||
"actions": {
|
||||
"edit": "עריכה",
|
||||
"collapse": "כווץ",
|
||||
"clear": "נקה"
|
||||
},
|
||||
"validation": {
|
||||
"saveFailed": "לא ניתן לשמור את מודלי הבסיס המוחרגים: {message}"
|
||||
}
|
||||
},
|
||||
"skipPreviouslyDownloadedModelVersions": {
"label": "דלג על גרסאות מודלים שהורדו בעבר",
"help": "כאשר מופעל, LoRA Manager ידלג על הורדת גרסת מודל אם שירות היסטוריית ההורדות רושם את הגרסה המדויקת הזו ככבר שהורדה. חל על כל תהליכי ההורדה."
},
|
||||
"layoutSettings": {
|
||||
"displayDensity": "צפיפות תצוגה",
|
||||
"displayDensityOptions": {
|
||||
@@ -337,6 +429,8 @@
|
||||
"hover": "חשוף בריחוף"
|
||||
},
|
||||
"cardInfoDisplayHelp": "בחר מתי להציג מידע על המודל וכפתורי פעולה",
|
||||
"showVersionOnCard": "הצג גרסה בכרטיס",
|
||||
"showVersionOnCardHelp": "הצג או הסתר את שם הגרסה בכרטיסי המודל",
|
||||
"modelCardFooterAction": "פעולת כפתור כרטיס מודל",
|
||||
"modelCardFooterActionOptions": {
|
||||
"exampleImages": "פתח תמונות דוגמה",
|
||||
@@ -363,12 +457,16 @@
|
||||
"defaultUnetRootHelp": "הגדר את ספריית השורש המוגדרת כברירת מחדל של Diffusion Model (UNET) להורדות, ייבוא והעברות",
|
||||
"defaultEmbeddingRoot": "תיקיית שורש Embedding",
|
||||
"defaultEmbeddingRootHelp": "הגדר את ספריית השורש המוגדרת כברירת מחדל של embedding להורדות, ייבוא והעברות",
|
||||
"recipesPath": "נתיב אחסון מתכונים",
|
||||
"recipesPathHelp": "ספרייה מותאמת אישית אופציונלית למתכונים שנשמרו. השאר ריק כדי להשתמש בתיקיית recipes של שורש LoRA הראשון.",
|
||||
"recipesPathPlaceholder": "/path/to/recipes",
|
||||
"recipesPathMigrating": "מעביר את אחסון המתכונים...",
|
||||
"noDefault": "אין ברירת מחדל"
|
||||
},
|
||||
"extraFolderPaths": {
|
||||
"title": "נתיבי תיקיות נוספים",
|
||||
"help": "הוסף תיקיות מודלים נוספות מחוץ לנתיבים הסטנדרטיים של ComfyUI. נתיבים אלה נשמרים בנפרד ונסרקים לצד תיקיות ברירת המחדל.",
|
||||
"description": "הגדר תיקיות נוספות לסריקת מודלים. נתיבים אלה ספציפיים ל-LoRA Manager וימוזגו עם נתיבי ברירת המחדל של ComfyUI.",
|
||||
"description": "נתיבי שורש מודלים נוספים בלעדיים ל-LoRA Manager. טען מודלים ממיקומים מחוץ לתיקיות הסטנדרטיות של ComfyUI - אידיאלי לספריות גדולות שאחרת יאטו את ComfyUI.",
|
||||
"restartRequired": "Requires restart to take effect",
|
||||
"modelTypes": {
|
||||
"lora": "נתיבי LoRA",
|
||||
"checkpoint": "נתיבי Checkpoint",
|
||||
@@ -376,7 +474,7 @@
|
||||
"embedding": "נתיבי Embedding"
|
||||
},
|
||||
"pathPlaceholder": "/נתיב/למודלים/נוספים",
|
||||
"saveSuccess": "נתיבי תיקיות נוספים עודכנו.",
|
||||
"saveSuccess": "נתיבי תיקיות נוספים עודכנו. נדרשת הפעלה מחדש כדי להחיל את השינויים.",
|
||||
"saveError": "נכשל בעדכון נתיבי תיקיות נוספים: {message}",
|
||||
"validation": {
|
||||
"duplicatePath": "נתיב זה כבר מוגדר"
|
||||
@@ -444,6 +542,21 @@
|
||||
"downloadLocationHelp": "הזן את נתיב התיקייה שבו יישמרו תמונות דוגמה מ-Civitai",
|
||||
"autoDownload": "הורדה אוטומטית של תמונות דוגמה",
|
||||
"autoDownloadHelp": "הורד אוטומטית תמונות דוגמה למודלים שאין להם (דורש הגדרת מיקום הורדה)",
|
||||
"openMode": "פעולת פתיחת תמונות דוגמה",
|
||||
"openModeHelp": "בחר אם הפעולה תיפתח בשרת, תעתיק נתיב מקומי ממופה או תפעיל URI מותאם אישית.",
|
||||
"openModeOptions": {
|
||||
"system": "פתח בשרת",
|
||||
"clipboard": "העתק נתיב מקומי",
|
||||
"uriTemplate": "פתח URI מותאם אישית"
|
||||
},
|
||||
"localRoot": "שורש מקומי לתמונות דוגמה",
|
||||
"localRootHelp": "שורש מקומי או ממופה אופציונלי שמשקף את תיקיית תמונות הדוגמה בשרת. אם השדה ריק, ייעשה שימוש חוזר בנתיב השרת.",
|
||||
"localRootPlaceholder": "דוגמה: /Volumes/ComfyUI/example_images",
|
||||
"uriTemplate": "תבנית URI לפתיחה",
|
||||
"uriTemplateHelp": "השתמש בקישור עומק מותאם אישית כמו URI של קובץ או קישור Shortcuts.",
|
||||
"uriTemplatePlaceholder": "דוגמה: shortcuts://run-shortcut?name=Open%20Finder&input=text&text={{encoded_local_path}}",
|
||||
"uriTemplatePlaceholders": "מצייני מקום זמינים: {{local_path}}, {{encoded_local_path}}, {{relative_path}}, {{encoded_relative_path}}, {{file_uri}}, {{encoded_file_uri}}",
|
||||
"openModeWikiLink": "למידע נוסף על מצבי פתיחה מרחוק",
|
||||
"optimizeImages": "מטב תמונות שהורדו",
|
||||
"optimizeImagesHelp": "מטב תמונות דוגמה כדי להקטין את גודל הקובץ ולשפר את מהירות הטעינה (מטא-דאטה תישמר)",
|
||||
"download": "הורד",
|
||||
@@ -574,7 +687,8 @@
|
||||
"autoOrganize": "ארגן אוטומטית נבחרים",
|
||||
"skipMetadataRefresh": "דילוג על רענון מטא-נתונים לנבחרים",
|
||||
"resumeMetadataRefresh": "המשך רענון מטא-נתונים לנבחרים",
|
||||
"deleteAll": "מחק את כל המודלים",
|
||||
"deleteAll": "מחק נבחרים",
|
||||
"downloadMissingLoras": "הורדת LoRAs חסרים",
|
||||
"clear": "נקה בחירה",
|
||||
"skipMetadataRefreshCount": "דילוג({count} מודלים)",
|
||||
"resumeMetadataRefreshCount": "המשך({count} מודלים)",
|
||||
@@ -604,6 +718,7 @@
|
||||
"moveToFolder": "העבר לתיקייה",
|
||||
"repairMetadata": "תיקון מטא-דאטה",
|
||||
"excludeModel": "החרג מודל",
|
||||
"restoreModel": "שחזור מודל",
|
||||
"deleteModel": "מחק מודל",
|
||||
"shareRecipe": "שתף מתכון",
|
||||
"viewAllLoras": "הצג את כל ה-LoRAs",
|
||||
@@ -799,7 +914,8 @@
|
||||
"diffusion_model": "Diffusion Model"
|
||||
},
|
||||
"contextMenu": {
|
||||
"moveToOtherTypeFolder": "העבר לתיקיית {otherType}"
|
||||
"moveToOtherTypeFolder": "העבר לתיקיית {otherType}",
|
||||
"sendToWorkflow": "שלח ל-workflow"
|
||||
}
|
||||
},
|
||||
"embeddings": {
|
||||
@@ -812,8 +928,8 @@
|
||||
"unpinSidebar": "שחרר סרגל צד",
|
||||
"switchToListView": "עבור לתצוגת רשימה",
|
||||
"switchToTreeView": "תצוגת עץ",
|
||||
"recursiveOn": "חיפוש בתיקיות משנה",
|
||||
"recursiveOff": "חיפוש רק בתיקייה הנוכחית",
|
||||
"recursiveOn": "כלול תיקיות משנה",
|
||||
"recursiveOff": "רק התיקייה הנוכחית",
|
||||
"recursiveUnavailable": "חיפוש רקורסיבי זמין רק בתצוגת עץ",
|
||||
"collapseAllDisabled": "לא זמין בתצוגת רשימה",
|
||||
"dragDrop": {
|
||||
@@ -893,6 +1009,8 @@
|
||||
"earlyAccess": "גישה מוקדמת",
|
||||
"earlyAccessTooltip": "נדרשת גישה מוקדמת",
|
||||
"inLibrary": "בספרייה",
|
||||
"downloaded": "הורד",
|
||||
"downloadedTooltip": "הורד בעבר, אך הוא אינו נמצא כרגע בספרייה שלך.",
|
||||
"alreadyInLibrary": "כבר בספרייה",
|
||||
"autoOrganizedPath": "[מאורגן אוטומטית לפי תבנית נתיב]",
|
||||
"errors": {
|
||||
@@ -983,6 +1101,14 @@
|
||||
"save": "עדכן מודל בסיס",
|
||||
"cancel": "ביטול"
|
||||
},
|
||||
"bulkDownloadMissingLoras": {
|
||||
"title": "הורדת LoRAs חסרים",
|
||||
"message": "נמצאו {uniqueCount} LoRAs חסרים ייחודיים (מתוך {totalCount} בסך הכל במתכונים שנבחרו).",
|
||||
"previewTitle": "LoRAs להורדה:",
|
||||
"moreItems": "...ועוד {count}",
|
||||
"note": "הקבצים יורדו באמצעות תבניות נתיב ברירת מחדל. זה עשוי לקחת זמן בהתאם למספר ה-LoRAs.",
|
||||
"downloadButton": "הורד {count} LoRA(s)"
|
||||
},
|
||||
"exampleAccess": {
|
||||
"title": "תמונות דוגמה מקומיות",
|
||||
"message": "לא נמצאו תמונות דוגמה מקומיות למודל זה. אפשרויות צפייה:",
|
||||
@@ -1034,7 +1160,9 @@
|
||||
"viewOnCivitai": "הצג ב-Civitai",
|
||||
"viewOnCivitaiText": "הצג ב-Civitai",
|
||||
"viewCreatorProfile": "הצג פרופיל יוצר",
|
||||
"openFileLocation": "פתח מיקום קובץ"
|
||||
"openFileLocation": "פתח מיקום קובץ",
|
||||
"sendToWorkflow": "שלח ל-ComfyUI",
|
||||
"sendToWorkflowText": "שלח ל-ComfyUI"
|
||||
},
|
||||
"openFileLocation": {
|
||||
"success": "מיקום הקובץ נפתח בהצלחה",
|
||||
@@ -1042,6 +1170,9 @@
|
||||
"copied": "הנתיב הועתק ללוח העריכה: {{path}}",
|
||||
"clipboardFallback": "נתיב: {{path}}"
|
||||
},
|
||||
"sendToWorkflow": {
|
||||
"noFilePath": "לא ניתן לשלוח ל-ComfyUI: אין נתיב קובץ זמין"
|
||||
},
|
||||
"metadata": {
|
||||
"version": "גרסה",
|
||||
"fileName": "שם קובץ",
|
||||
@@ -1078,6 +1209,8 @@
|
||||
"cancel": "בטל עריכה",
|
||||
"save": "שמור שינויים",
|
||||
"addPlaceholder": "הקלד להוספה או לחץ על הצעות למטה",
|
||||
"editWord": "עריכת מילת טריגר",
|
||||
"editPlaceholder": "עריכת מילת טריגר",
|
||||
"copyWord": "העתק מילת טריגר",
|
||||
"deleteWord": "מחק מילת טריגר",
|
||||
"suggestions": {
|
||||
@@ -1149,17 +1282,33 @@
|
||||
"days": "בעוד {count} ימים"
|
||||
},
|
||||
"badges": {
|
||||
"current": "גרסה נוכחית",
|
||||
"current": "גרסה שנפתחה",
|
||||
"currentTooltip": "זוהי הגרסה שממנה נפתח החלון הזה",
|
||||
"inLibrary": "בספרייה",
|
||||
"inLibraryTooltip": "גרסה זו קיימת בספרייה המקומית שלך",
|
||||
"downloaded": "הורד",
|
||||
"downloadedTooltip": "גרסה זו הורדה בעבר, אך אינה נמצאת כרגע בספרייה שלך",
|
||||
"newer": "גרסה חדשה יותר",
|
||||
"newerTooltip": "גרסה זו חדשה יותר מהגרסה המקומית האחרונה שלך",
|
||||
"earlyAccess": "גישה מוקדמת",
|
||||
"ignored": "התעלם"
|
||||
"earlyAccessTooltip": "גרסה זו דורשת כרגע גישת Early Access של Civitai",
"ignored": "התעלם",
"ignoredTooltip": "התראות העדכון מושבתות עבור גרסה זו",
"onSiteOnly": "רק באתר",
"onSiteOnlyTooltip": "גרסה זו זמינה רק ליצירה באתר Civitai"
},
"actions": {
"download": "הורדה",
"downloadTooltip": "הורד את הגרסה הזו",
"downloadEarlyAccessTooltip": "הורד את גרסת ה-Early Access הזו מ-Civitai",
"downloadNotAllowedTooltip": "גרסה זו זמינה רק ליצירה באתר Civitai",
"delete": "מחיקה",
"deleteTooltip": "מחק את הגרסה המקומית הזו",
"ignore": "התעלם",
"unignore": "בטל התעלמות",
"ignoreTooltip": "התעלם מהתראות העדכון עבור גרסה זו",
"unignoreTooltip": "חזור לקבל התראות עדכון עבור גרסה זו",
"viewVersionOnCivitai": "הצג את הגרסה ב-Civitai",
"earlyAccessTooltip": "נדרש רכישת גישה מוקדמת",
"resumeModelUpdates": "המשך עדכונים עבור מודל זה",
"ignoreModelUpdates": "התעלם מעדכונים עבור מודל זה",
@@ -1299,7 +1448,9 @@
"recipeReplaced": "מתכון הוחלף ב-workflow",
"recipeFailedToSend": "שליחת מתכון ל-workflow נכשלה",
"noMatchingNodes": "אין צמתים תואמים זמינים ב-workflow הנוכחי",
"noTargetNodeSelected": "לא נבחר צומת יעד"
"noTargetNodeSelected": "לא נבחר צומת יעד",
"modelUpdated": "מודל עודכן ב-workflow",
"modelFailed": "עדכון צומת המודל נכשל"
},
"nodeSelector": {
"recipe": "מתכון",
@@ -1313,6 +1464,10 @@
"opened": "תיקיית תמונות הדוגמה נפתחה",
"openingFolder": "פותח תיקיית תמונות דוגמה",
"failedToOpen": "פתיחת תיקיית תמונות הדוגמה נכשלה",
"copiedPath": "הנתיב הועתק ללוח: {{path}}",
"clipboardFallback": "נתיב: {{path}}",
"copiedUri": "הקישור הועתק ללוח: {{uri}}",
"uriClipboardFallback": "קישור: {{uri}}",
"setupRequired": "אחסון תמונות דוגמה",
"setupDescription": "כדי להוסיף תמונות דוגמה מותאמות אישית, עליך קודם להגדיר מיקום הורדה.",
"setupUsage": "נתיב זה משמש הן עבור תמונות דוגמה שהורדו והן עבור תמונות מותאמות אישית.",
@@ -1450,6 +1605,7 @@
"pleaseSelectVersion": "אנא בחר גרסה",
"versionExists": "גרסה זו כבר קיימת בספרייה שלך",
"downloadCompleted": "ההורדה הושלמה בהצלחה",
"downloadSkippedByBaseModel": "ההורדה דולגה כי מודל הבסיס {baseModel} מוחרג",
"autoOrganizeSuccess": "הארגון האוטומטי הושלם בהצלחה עבור {count} {type}",
"autoOrganizePartialSuccess": "הארגון האוטומטי הושלם עם {success} שהועברו, {failures} שנכשלו מתוך {total} מודלים",
"autoOrganizeFailed": "הארגון האוטומטי נכשל: {error}",
@@ -1469,7 +1625,11 @@
"nameUpdated": "שם המתכון עודכן בהצלחה",
"tagsUpdated": "תגיות המתכון עודכנו בהצלחה",
"sourceUrlUpdated": "כתובת ה-URL המקורית עודכנה בהצלחה",
"promptUpdated": "הפרומפט עודכן בהצלחה",
"negativePromptUpdated": "הפרומפט השלילי עודכן בהצלחה",
"promptEditorHint": "לחץ Enter לשמירה, Shift+Enter לשורה חדשה",
"noRecipeId": "אין מזהה מתכון זמין",
"sendToWorkflowFailed": "נכשל שליחת המתכון ל-workflow: {message}",
"copyFailed": "שגיאה בהעתקת תחביר המתכון: {message}",
"noMissingLoras": "אין LoRAs חסרים להורדה",
"missingLorasInfoFailed": "קבלת מידע עבור LoRAs חסרים נכשלה",
@@ -1507,7 +1667,10 @@
"batchImportNoUrls": "Please enter at least one URL or file path",
"batchImportNoDirectory": "Please enter a directory path",
"batchImportBrowseFailed": "Failed to browse directory: {message}",
"batchImportDirectorySelected": "Directory selected: {path}"
"batchImportDirectorySelected": "Directory selected: {path}",
"noRecipesSelected": "לא נבחרו מתכונים",
"noMissingLorasInSelection": "לא נמצאו LoRAs חסרים במתכונים שנבחרו",
"noLoraRootConfigured": "תיקיית השורש של LoRA לא מוגדרת. אנא הגדר תיקיית שורש LoRA ברירת מחדל בהגדרות."
},
"models": {
"noModelsSelected": "לא נבחרו מודלים",
@@ -1574,6 +1737,8 @@
"mappingSaveFailed": "שמירת מיפויי מודל בסיס נכשלה: {message}",
"downloadTemplatesUpdated": "תבניות נתיב הורדה עודכנו",
"downloadTemplatesFailed": "שמירת תבניות נתיב הורדה נכשלה: {message}",
"recipesPathUpdated": "נתיב אחסון המתכונים עודכן",
"recipesPathSaveFailed": "עדכון נתיב אחסון המתכונים נכשל: {message}",
"settingsUpdated": "הגדרות עודכנו: {setting}",
"compactModeToggled": "מצב קומפקטי {state}",
"settingSaveFailed": "שמירת ההגדרה נכשלה: {message}",
@@ -1624,8 +1789,8 @@
},
"triggerWords": {
"loadFailed": "לא ניתן היה לטעון מילים מאומנות",
"tooLong": "מילת טריגר לא תעלה על 100 מילים",
"tooMany": "מותרות עד 30 מילות טריגר",
"tooLong": "מילת טריגר לא תעלה על 500 מילים",
"tooMany": "מותרות עד 100 מילות טריגר",
"alreadyExists": "מילת טריגר זו כבר קיימת",
"updateSuccess": "מילות הטריגר עודכנו בהצלחה",
"updateFailed": "עדכון מילות הטריגר נכשל",
@@ -1686,6 +1851,8 @@
"deleteFailed": "מחיקת {type} נכשלה: {message}",
"excludeSuccess": "{type} הוחרג בהצלחה",
"excludeFailed": "החרגת {type} נכשלה: {message}",
"restoreSuccess": "{type} שוחזר בהצלחה",
"restoreFailed": "שחזור {type} נכשל: {message}",
"fileNameUpdated": "שם הקובץ עודכן בהצלחה",
"fileRenameFailed": "שינוי שם הקובץ נכשל: {error}",
"previewUpdated": "התצוגה המקדימה עודכנה בהצלחה",
@@ -1717,6 +1884,37 @@
"moveFailed": "Failed to move item: {message}"
}
},
"doctor": {
"kicker": "אבחון מערכת",
"title": "דוקטור",
"buttonTitle": "הפעלת אבחון ותיקונים נפוצים",
"loading": "בודק את הסביבה...",
"footer": "ייצא חבילת אבחון אם הבעיה עדיין נמשכת לאחר התיקון.",
"summary": {
"idle": "הרץ בדיקת תקינות עבור הגדרות, שלמות המטמון ועקביות הממשק.",
"ok": "לא נמצאו בעיות פעילות בסביבה הנוכחית.",
"warning": "נמצאה/נמצאו {count} בעיה/בעיות. את רובן אפשר לתקן ישירות מלוח זה.",
"error": "יש לטפל ב-{count} בעיה/בעיות לפני שהאפליקציה תהיה תקינה לחלוטין."
},
"status": {
"ok": "תקין",
"warning": "דורש תשומת לב",
"error": "נדרשת פעולה"
},
"actions": {
"runAgain": "הפעל שוב",
"exportBundle": "ייצוא חבילה"
},
"toast": {
"loadFailed": "טעינת האבחון נכשלה: {message}",
"repairSuccess": "בניית המטמון מחדש הושלמה.",
"repairFailed": "בניית המטמון מחדש נכשלה: {message}",
"exportSuccess": "חבילת האבחון יוצאה.",
"exportFailed": "ייצוא חבילת האבחון נכשל: {message}",
"conflictsResolved": "נפתרו {count} התנגשויות בשמות קבצים.",
"conflictsResolveFailed": "פתרון התנגשויות שמות קבצים נכשל: {message}"
}
},
"banners": {
"versionMismatch": {
"title": "זוהה עדכון יישום",
230  locales/ja.json
@@ -15,7 +15,8 @@
|
||||
"settings": "設定",
|
||||
"help": "ヘルプ",
|
||||
"add": "追加",
|
||||
"close": "閉じる"
|
||||
"close": "閉じる",
|
||||
"menu": "メニュー"
|
||||
},
|
||||
"status": {
|
||||
"loading": "読み込み中...",
|
||||
@@ -175,6 +176,9 @@
|
||||
"success": "{count} 件のレシピを正常に修復しました。",
|
||||
"cancelled": "修復がキャンセルされました。{count}個のレシピが修復されました。",
|
||||
"error": "レシピの修復に失敗しました: {message}"
|
||||
},
|
||||
"manageExcludedModels": {
|
||||
"label": "除外モデルを管理"
|
||||
}
|
||||
},
|
||||
"header": {
|
||||
@@ -222,12 +226,14 @@
|
||||
"presetOverwriteConfirm": "プリセット「{name}」は既に存在します。上書きしますか?",
|
||||
"presetNamePlaceholder": "プリセット名...",
|
||||
"baseModel": "ベースモデル",
|
||||
"baseModelSearchPlaceholder": "ベースモデルを検索...",
|
||||
"modelTags": "タグ(上位20)",
|
||||
"modelTypes": "モデルタイプ",
|
||||
"license": "ライセンス",
|
||||
"noCreditRequired": "クレジット不要",
|
||||
"allowSellingGeneratedContent": "販売許可",
|
||||
"noTags": "タグなし",
|
||||
"noBaseModelMatches": "現在の検索に一致するベースモデルはありません。",
|
||||
"clearAll": "すべてのフィルタをクリア",
|
||||
"any": "いずれか",
|
||||
"all": "すべて",
|
||||
@@ -250,6 +256,33 @@
|
||||
"civitaiApiKey": "Civitai APIキー",
|
||||
"civitaiApiKeyPlaceholder": "Civitai APIキーを入力してください",
|
||||
"civitaiApiKeyHelp": "Civitaiからモデルをダウンロードするときの認証に使用されます",
|
||||
"civitaiHost": {
|
||||
"label": "Civitai ホスト",
|
||||
"help": "「View on Civitai」リンクを使うときに開く Civitai サイトを選択します。",
|
||||
"options": {
|
||||
"com": "civitai.com(SFW のみ)",
|
||||
"red": "civitai.red(制限なし)"
|
||||
}
|
||||
},
|
||||
"downloadBackend": {
|
||||
"label": "ダウンロードバックエンド",
|
||||
"help": "モデルファイルのダウンロード方法を選択します。Python は内蔵ダウンローダーを使用し、aria2 は実験的な外部ダウンローダープロセスを使用します。",
|
||||
"options": {
|
||||
"python": "Python(内蔵)",
|
||||
"aria2": "aria2(実験的)"
|
||||
}
|
||||
},
|
||||
"aria2cPath": {
|
||||
"label": "aria2c のパス",
|
||||
"help": "aria2c 実行ファイルへの任意のパスです。空欄のままにすると、システム PATH 上の aria2c を使用します。",
|
||||
"placeholder": "空欄のままにすると PATH 上の aria2c を使用します"
|
||||
},
|
||||
"aria2HelpLink": "aria2 ダウンロードバックエンドの設定方法",
|
||||
"civitaiHostBanner": {
|
||||
"title": "Civitai ホスト設定を利用できます",
|
||||
"content": "Civitai は現在、SFW コンテンツには civitai.com、制限なしコンテンツには civitai.red を使用しています。設定で既定で開くサイトを変更できます。",
|
||||
"openSettings": "設定を開く"
|
||||
},
|
||||
"openSettingsFileLocation": {
|
||||
"label": "設定フォルダーを開く",
|
||||
"tooltip": "settings.json を含むフォルダーを開きます",
|
||||
@@ -260,10 +293,13 @@
|
||||
},
|
||||
"sections": {
|
||||
"contentFiltering": "コンテンツフィルタリング",
|
||||
"downloads": "ダウンロード",
|
||||
"videoSettings": "動画設定",
|
||||
"layoutSettings": "レイアウト設定",
|
||||
"misc": "その他",
|
||||
"backup": "バックアップ",
|
||||
"folderSettings": "デフォルトルート",
|
||||
"recipeSettings": "レシピ",
|
||||
"extraFolderPaths": "追加フォルダーパス",
|
||||
"downloadPathTemplates": "ダウンロードパステンプレート",
|
||||
"priorityTags": "優先タグ",
|
||||
@@ -291,7 +327,15 @@
|
||||
"blurNsfwContent": "NSFWコンテンツをぼかす",
|
||||
"blurNsfwContentHelp": "成人向け(NSFW)コンテンツのプレビュー画像をぼかします",
|
||||
"showOnlySfw": "SFWコンテンツのみ表示",
|
||||
"showOnlySfwHelp": "閲覧と検索時にすべてのNSFWコンテンツを除外します"
|
||||
"showOnlySfwHelp": "閲覧と検索時にすべてのNSFWコンテンツを除外します",
|
||||
"matureBlurThreshold": "成人コンテンツぼかし閾値",
|
||||
"matureBlurThresholdHelp": "NSFWぼかしが有効な場合、どのレーティングレベルからぼかしフィルタリングを開始するかを設定します。",
|
||||
"matureBlurThresholdOptions": {
|
||||
"pg13": "PG13 以上",
|
||||
"r": "R 以上(デフォルト)",
|
||||
"x": "X 以上",
|
||||
"xxx": "XXX のみ"
|
||||
}
|
||||
},
|
||||
"videoSettings": {
|
||||
"autoplayOnHover": "ホバー時に動画を自動再生",
|
||||
@@ -315,6 +359,54 @@
|
||||
"saveFailed": "スキップパスの保存に失敗しました:{message}"
|
||||
}
|
||||
},
|
||||
"backup": {
|
||||
"autoEnabled": "自動バックアップ",
|
||||
"autoEnabledHelp": "1日1回ローカルのスナップショットを作成し、保持ポリシーに従って最新のものを残します。",
|
||||
"retention": "保持数",
|
||||
"retentionHelp": "古いものを削除する前に、何件の自動スナップショットを保持するかを指定します。",
|
||||
"management": "バックアップ管理",
|
||||
"managementHelp": "現在のユーザー状態をエクスポートするか、バックアップアーカイブから復元します。",
|
||||
"scopeHelp": "設定、ダウンロード履歴、モデル更新の状態をバックアップします。モデルファイルや再生成できるキャッシュは含まれません。",
|
||||
"locationSummary": "現在のバックアップ場所",
|
||||
"openFolderButton": "バックアップフォルダを開く",
|
||||
"openFolderSuccess": "バックアップフォルダを開きました",
|
||||
"openFolderFailed": "バックアップフォルダを開けませんでした",
|
||||
"locationCopied": "バックアップパスをクリップボードにコピーしました: {{path}}",
|
||||
"locationClipboardFallback": "バックアップパス: {{path}}",
|
||||
"exportButton": "バックアップをエクスポート",
|
||||
"exportSuccess": "バックアップを正常にエクスポートしました。",
|
||||
"exportFailed": "バックアップのエクスポートに失敗しました: {message}",
|
||||
"importButton": "バックアップをインポート",
|
||||
"importConfirm": "このバックアップをインポートして、ローカルのユーザー状態を上書きしますか?",
|
||||
"importSuccess": "バックアップを正常にインポートしました。",
|
||||
"importFailed": "バックアップのインポートに失敗しました: {message}",
|
||||
"latestSnapshot": "最新のスナップショット",
|
||||
"latestAutoSnapshot": "最新の自動スナップショット",
|
||||
"snapshotCount": "保存済みスナップショット",
|
||||
"noneAvailable": "まだスナップショットはありません"
|
||||
},
|
||||
"downloadSkipBaseModels": {
|
||||
"label": "ベースモデルのダウンロードをスキップ",
|
||||
"help": "すべてのダウンロードフローに適用されます。ここでは対応しているベースモデルのみ選択できます。",
|
||||
"searchPlaceholder": "ベースモデルを絞り込む...",
|
||||
"empty": "現在の検索に一致するベースモデルはありません。",
|
||||
"summary": {
|
||||
"none": "未選択",
|
||||
"count": "{count} 件を選択"
|
||||
},
|
||||
"actions": {
|
||||
"edit": "編集",
|
||||
"collapse": "折りたたむ",
|
||||
"clear": "クリア"
|
||||
},
|
||||
"validation": {
|
||||
"saveFailed": "除外するベースモデルを保存できませんでした: {message}"
|
||||
}
|
||||
},
|
||||
"skipPreviouslyDownloadedModelVersions": {
|
||||
"label": "以前にダウンロードしたモデルバージョンをスキップ",
|
||||
"help": "有効にすると、ダウンロード履歴サービスがそのバージョンが既にダウンロード済みと記録している場合、LoRA Managerはそのモデルバージョンのダウンロードをスキップします。すべてのダウンロードフローに適用されます。"
|
||||
},
|
||||
"layoutSettings": {
|
||||
"displayDensity": "表示密度",
|
||||
"displayDensityOptions": {
|
||||
@@ -337,6 +429,8 @@
|
||||
"hover": "ホバー時に表示"
|
||||
},
|
||||
"cardInfoDisplayHelp": "モデル情報とアクションボタンの表示タイミングを選択",
|
||||
"showVersionOnCard": "カードにバージョンを表示",
|
||||
"showVersionOnCardHelp": "モデルカード上のバージョン名の表示/非表示を切り替えます",
|
||||
"modelCardFooterAction": "モデルカードボタンのアクション",
|
||||
"modelCardFooterActionOptions": {
|
||||
"exampleImages": "例画像を開く",
|
||||
@@ -363,12 +457,16 @@
|
||||
"defaultUnetRootHelp": "ダウンロード、インポート、移動用のデフォルトDiffusion Model (UNET)ルートディレクトリを設定",
|
||||
"defaultEmbeddingRoot": "Embeddingルート",
|
||||
"defaultEmbeddingRootHelp": "ダウンロード、インポート、移動用のデフォルトembeddingルートディレクトリを設定",
|
||||
"recipesPath": "レシピ保存先",
|
||||
"recipesPathHelp": "保存済みレシピ用の任意のカスタムディレクトリです。空欄にすると最初のLoRAルートのrecipesフォルダーを使用します。",
|
||||
"recipesPathPlaceholder": "/path/to/recipes",
|
||||
"recipesPathMigrating": "レシピ保存先を移動中...",
|
||||
"noDefault": "デフォルトなし"
|
||||
},
|
||||
"extraFolderPaths": {
|
||||
"title": "追加フォルダーパス",
|
||||
"help": "ComfyUIの標準パスの外部に追加のモデルフォルダを追加します。これらのパスは別々に保存され、デフォルトのフォルダと一緒にスキャンされます。",
|
||||
"description": "モデルをスキャンするための追加フォルダを設定します。これらのパスはLoRA Manager固有であり、ComfyUIのデフォルトパスとマージされます。",
|
||||
"description": "LoRA Manager専用の追加モデルルートパス。ComfyUIの標準フォルダー外の場所からモデルを読み込みます。ComfyUIの動作を低下させる可能性のある大規模ライブラリに最適です。",
|
||||
"restartRequired": "Requires restart to take effect",
|
||||
"modelTypes": {
|
||||
"lora": "LoRAパス",
|
||||
"checkpoint": "Checkpointパス",
|
||||
@@ -376,7 +474,7 @@
|
||||
"embedding": "Embeddingパス"
|
||||
},
|
||||
"pathPlaceholder": "/追加モデルへのパス",
|
||||
"saveSuccess": "追加フォルダーパスを更新しました。",
|
||||
"saveSuccess": "追加フォルダーパスを更新しました。変更を適用するには再起動が必要です。",
|
||||
"saveError": "追加フォルダーパスの更新に失敗しました: {message}",
|
||||
"validation": {
|
||||
"duplicatePath": "このパスはすでに設定されています"
|
||||
@@ -444,6 +542,21 @@
|
||||
"downloadLocationHelp": "Civitaiからの例画像を保存するフォルダパスを入力してください",
|
||||
"autoDownload": "例画像の自動ダウンロード",
|
||||
"autoDownloadHelp": "例画像がないモデルの例画像を自動的にダウンロードします(ダウンロード場所の設定が必要)",
|
||||
"openMode": "サンプル画像を開く動作",
|
||||
"openModeHelp": "サーバー上で開くか、対応するローカルパスをコピーするか、カスタム URI を起動するかを選択します。",
|
||||
"openModeOptions": {
|
||||
"system": "サーバー上で開く",
|
||||
"clipboard": "ローカルパスをコピー",
|
||||
"uriTemplate": "カスタム URI を開く"
|
||||
},
|
||||
"localRoot": "ローカルのサンプル画像ルート",
|
||||
"localRootHelp": "サーバーのサンプル画像ディレクトリを反映する任意のローカルまたはマウント済みルートです。空欄の場合はサーバーのパスを再利用します。",
|
||||
"localRootPlaceholder": "例: /Volumes/ComfyUI/example_images",
|
||||
"uriTemplate": "URI テンプレートを開く",
|
||||
"uriTemplateHelp": "ファイル URI や Shortcuts リンクなどのカスタムディープリンクを使用します。",
|
||||
"uriTemplatePlaceholder": "例: shortcuts://run-shortcut?name=Open%20Finder&input=text&text={{encoded_local_path}}",
|
||||
"uriTemplatePlaceholders": "使用可能なプレースホルダー: {{local_path}}, {{encoded_local_path}}, {{relative_path}}, {{encoded_relative_path}}, {{file_uri}}, {{encoded_file_uri}}",
|
||||
"openModeWikiLink": "リモートオープンモードの詳細",
|
||||
"optimizeImages": "ダウンロード画像の最適化",
|
||||
"optimizeImagesHelp": "例画像を最適化してファイルサイズを縮小し、読み込み速度を向上させます(メタデータは保持されます)",
|
||||
"download": "ダウンロード",
|
||||
@@ -574,7 +687,8 @@
|
||||
"autoOrganize": "自動整理を実行",
|
||||
"skipMetadataRefresh": "選択したモデルのメタデータ更新をスキップ",
|
||||
"resumeMetadataRefresh": "選択したモデルのメタデータ更新を再開",
|
||||
"deleteAll": "すべてのモデルを削除",
|
||||
"deleteAll": "選択したものを削除",
|
||||
"downloadMissingLoras": "不足している LoRA をダウンロード",
|
||||
"clear": "選択をクリア",
|
||||
"skipMetadataRefreshCount": "スキップ({count}モデル)",
|
||||
"resumeMetadataRefreshCount": "再開({count}モデル)",
|
||||
@@ -604,6 +718,7 @@
|
||||
"moveToFolder": "フォルダに移動",
|
||||
"repairMetadata": "メタデータを修復",
|
||||
"excludeModel": "モデルを除外",
|
||||
"restoreModel": "モデルを復元",
|
||||
"deleteModel": "モデルを削除",
|
||||
"shareRecipe": "レシピを共有",
|
||||
"viewAllLoras": "すべてのLoRAを表示",
|
||||
@@ -799,7 +914,8 @@
|
||||
"diffusion_model": "Diffusion Model"
|
||||
},
|
||||
"contextMenu": {
|
||||
"moveToOtherTypeFolder": "{otherType} フォルダに移動"
|
||||
"moveToOtherTypeFolder": "{otherType} フォルダに移動",
|
||||
"sendToWorkflow": "ワークフローに送信"
|
||||
}
|
||||
},
|
||||
"embeddings": {
|
||||
@@ -812,8 +928,8 @@
|
||||
"unpinSidebar": "サイドバーの固定を解除",
|
||||
"switchToListView": "リストビューに切り替え",
|
||||
"switchToTreeView": "ツリー表示に切り替え",
|
||||
"recursiveOn": "サブフォルダーを検索",
|
||||
"recursiveOff": "現在のフォルダーのみを検索",
|
||||
"recursiveOn": "サブフォルダーを含める",
|
||||
"recursiveOff": "現在のフォルダーのみ",
|
||||
"recursiveUnavailable": "再帰検索はツリービューでのみ利用できます",
|
||||
"collapseAllDisabled": "リストビューでは利用できません",
|
||||
"dragDrop": {
|
||||
@@ -893,6 +1009,8 @@
|
||||
"earlyAccess": "アーリーアクセス",
|
||||
"earlyAccessTooltip": "アーリーアクセスが必要",
|
||||
"inLibrary": "ライブラリ内",
|
||||
"downloaded": "ダウンロード済み",
|
||||
"downloadedTooltip": "以前にダウンロード済みですが、現在はライブラリにありません。",
|
||||
"alreadyInLibrary": "既にライブラリ内",
|
||||
"autoOrganizedPath": "[パステンプレートによる自動整理]",
|
||||
"errors": {
|
||||
@@ -983,6 +1101,14 @@
|
||||
"save": "ベースモデルを更新",
|
||||
"cancel": "キャンセル"
|
||||
},
|
||||
"bulkDownloadMissingLoras": {
|
||||
"title": "不足している LoRA をダウンロード",
|
||||
"message": "選択したレシピから合計 {totalCount} 個中 {uniqueCount} 個のユニークな不足している LoRA が見つかりました。",
|
||||
"previewTitle": "ダウンロードする LoRA:",
|
||||
"moreItems": "...あと {count} 個",
|
||||
"note": "ファイルはデフォルトのパステンプレートを使用してダウンロードされます。LoRA の数によっては時間がかかる場合があります。",
|
||||
"downloadButton": "{count} 個の LoRA をダウンロード"
|
||||
},
|
||||
"exampleAccess": {
|
||||
"title": "ローカル例画像",
|
||||
"message": "このモデルのローカル例画像が見つかりませんでした。表示オプション:",
|
||||
@@ -1034,7 +1160,9 @@
|
||||
"viewOnCivitai": "Civitaiで表示",
|
||||
"viewOnCivitaiText": "Civitaiで表示",
|
||||
"viewCreatorProfile": "作成者プロフィールを表示",
|
||||
"openFileLocation": "ファイルの場所を開く"
|
||||
"openFileLocation": "ファイルの場所を開く",
|
||||
"sendToWorkflow": "ComfyUI に送信",
|
||||
"sendToWorkflowText": "ComfyUI に送信"
|
||||
},
|
||||
"openFileLocation": {
|
||||
"success": "ファイルの場所を正常に開きました",
|
||||
@@ -1042,6 +1170,9 @@
|
||||
"copied": "パスをクリップボードにコピーしました: {{path}}",
|
||||
"clipboardFallback": "パス: {{path}}"
|
||||
},
|
||||
"sendToWorkflow": {
|
||||
"noFilePath": "ComfyUI に送信できません:ファイルパスがありません"
|
||||
},
|
||||
"metadata": {
|
||||
"version": "バージョン",
|
||||
"fileName": "ファイル名",
|
||||
@@ -1078,6 +1209,8 @@
|
||||
"cancel": "編集をキャンセル",
|
||||
"save": "変更を保存",
|
||||
"addPlaceholder": "入力して追加するか、下の提案をクリック",
|
||||
"editWord": "トリガーワードを編集",
|
||||
"editPlaceholder": "トリガーワードを編集",
|
||||
"copyWord": "トリガーワードをコピー",
|
||||
"deleteWord": "トリガーワードを削除",
|
||||
"suggestions": {
|
||||
@@ -1149,17 +1282,33 @@
|
||||
"days": "{count}日後"
|
||||
},
|
||||
"badges": {
|
||||
"current": "現在のバージョン",
|
||||
"current": "開いたバージョン",
|
||||
"currentTooltip": "このモーダルを開くために選択したバージョンです",
|
||||
"inLibrary": "ライブラリにあります",
|
||||
"inLibraryTooltip": "このバージョンはローカルライブラリに存在します",
|
||||
"downloaded": "ダウンロード済み",
|
||||
"downloadedTooltip": "このバージョンは以前ダウンロードされましたが、現在はライブラリにありません",
|
||||
"newer": "新しいバージョン",
|
||||
"newerTooltip": "このバージョンはローカルの最新バージョンより新しいです",
|
||||
"earlyAccess": "早期アクセス",
|
||||
"ignored": "無視中"
|
||||
"earlyAccessTooltip": "このバージョンは現在 Civitai の早期アクセスが必要です",
|
||||
"ignored": "無視中",
|
||||
"ignoredTooltip": "このバージョンの更新通知は無効です",
|
||||
"onSiteOnly": "サイト内のみ",
|
||||
"onSiteOnlyTooltip": "このバージョンはCivitaiサイト内でのみ利用可能で、ダウンロードはできません"
|
||||
},
|
||||
"actions": {
|
||||
"download": "ダウンロード",
|
||||
"downloadTooltip": "このバージョンをダウンロード",
|
||||
"downloadEarlyAccessTooltip": "Civitai からこの早期アクセス版をダウンロード",
|
||||
"downloadNotAllowedTooltip": "このバージョンはCivitaiサイト内でのみ利用可能で、ダウンロードはできません",
|
||||
"delete": "削除",
|
||||
"deleteTooltip": "このローカルバージョンを削除",
|
||||
"ignore": "無視",
|
||||
"unignore": "無視を解除",
|
||||
"ignoreTooltip": "このバージョンの更新通知を無視",
|
||||
"unignoreTooltip": "このバージョンの更新通知を再開",
|
||||
"viewVersionOnCivitai": "Civitai でバージョンを表示",
|
||||
"earlyAccessTooltip": "早期アクセス購入が必要",
|
||||
"resumeModelUpdates": "このモデルの更新を再開",
|
||||
"ignoreModelUpdates": "このモデルの更新を無視",
|
||||
@@ -1299,7 +1448,9 @@
|
||||
"recipeReplaced": "レシピがワークフローで置換されました",
|
||||
"recipeFailedToSend": "レシピをワークフローに送信できませんでした",
|
||||
"noMatchingNodes": "現在のワークフローには互換性のあるノードがありません",
|
||||
"noTargetNodeSelected": "ターゲットノードが選択されていません"
|
||||
"noTargetNodeSelected": "ターゲットノードが選択されていません",
|
||||
"modelUpdated": "モデルがワークフローで更新されました",
|
||||
"modelFailed": "モデルノードの更新に失敗しました"
|
||||
},
|
||||
"nodeSelector": {
|
||||
"recipe": "レシピ",
|
||||
@@ -1313,6 +1464,10 @@
|
||||
"opened": "例画像フォルダが開かれました",
|
||||
"openingFolder": "例画像フォルダを開いています",
|
||||
"failedToOpen": "例画像フォルダを開くのに失敗しました",
|
||||
"copiedPath": "パスをクリップボードにコピーしました: {{path}}",
|
||||
"clipboardFallback": "パス: {{path}}",
|
||||
"copiedUri": "リンクをクリップボードにコピーしました: {{uri}}",
|
||||
"uriClipboardFallback": "リンク: {{uri}}",
|
||||
"setupRequired": "例画像ストレージ",
|
||||
"setupDescription": "カスタム例画像を追加するには、まずダウンロード場所を設定する必要があります。",
|
||||
"setupUsage": "このパスは、ダウンロードした例画像とカスタム画像の両方に使用されます。",
|
||||
@@ -1450,6 +1605,7 @@
|
||||
"pleaseSelectVersion": "バージョンを選択してください",
|
||||
"versionExists": "このバージョンは既にライブラリに存在します",
|
||||
"downloadCompleted": "ダウンロードが正常に完了しました",
|
||||
"downloadSkippedByBaseModel": "ベースモデル {baseModel} が除外されているため、ダウンロードをスキップしました",
|
||||
"autoOrganizeSuccess": "{count} {type} の自動整理が正常に完了しました",
|
||||
"autoOrganizePartialSuccess": "自動整理が完了しました:{total} モデル中 {success} 移動、{failures} 失敗",
|
||||
"autoOrganizeFailed": "自動整理に失敗しました:{error}",
|
||||
@@ -1469,7 +1625,11 @@
|
||||
"nameUpdated": "レシピ名が正常に更新されました",
|
||||
"tagsUpdated": "レシピタグが正常に更新されました",
|
||||
"sourceUrlUpdated": "ソースURLが正常に更新されました",
|
||||
"promptUpdated": "プロンプトが正常に更新されました",
|
||||
"negativePromptUpdated": "ネガティブプロンプトが正常に更新されました",
|
||||
"promptEditorHint": "Enterキーで保存、Shift+Enterで改行",
|
||||
"noRecipeId": "レシピIDが利用できません",
|
||||
"sendToWorkflowFailed": "ワークフローへのレシピ送信に失敗しました:{message}",
|
||||
"copyFailed": "レシピ構文のコピーエラー:{message}",
|
||||
"noMissingLoras": "ダウンロードする不足LoRAがありません",
|
||||
"missingLorasInfoFailed": "不足LoRAの情報取得に失敗しました",
|
||||
@@ -1507,7 +1667,10 @@
|
||||
"batchImportNoUrls": "Please enter at least one URL or file path",
|
||||
"batchImportNoDirectory": "Please enter a directory path",
|
||||
"batchImportBrowseFailed": "Failed to browse directory: {message}",
|
||||
"batchImportDirectorySelected": "Directory selected: {path}"
|
||||
"batchImportDirectorySelected": "Directory selected: {path}",
|
||||
"noRecipesSelected": "レシピが選択されていません",
|
||||
"noMissingLorasInSelection": "選択したレシピに不足している LoRA が見つかりませんでした",
|
||||
"noLoraRootConfigured": "LoRA ルートディレクトリが設定されていません。設定でデフォルトの LoRA ルートを設定してください。"
|
||||
},
|
||||
"models": {
|
||||
"noModelsSelected": "モデルが選択されていません",
|
||||
@@ -1574,6 +1737,8 @@
|
||||
"mappingSaveFailed": "ベースモデルマッピングの保存に失敗しました:{message}",
|
||||
"downloadTemplatesUpdated": "ダウンロードパステンプレートが更新されました",
|
||||
"downloadTemplatesFailed": "ダウンロードパステンプレートの保存に失敗しました:{message}",
|
||||
"recipesPathUpdated": "レシピ保存先を更新しました",
|
||||
"recipesPathSaveFailed": "レシピ保存先の更新に失敗しました: {message}",
|
||||
"settingsUpdated": "設定が更新されました:{setting}",
|
||||
"compactModeToggled": "コンパクトモード {state}",
|
||||
"settingSaveFailed": "設定の保存に失敗しました:{message}",
|
||||
@@ -1624,8 +1789,8 @@
|
||||
},
|
||||
"triggerWords": {
|
||||
"loadFailed": "学習済みワードを読み込めませんでした",
|
||||
"tooLong": "トリガーワードは100ワードを超えてはいけません",
|
||||
"tooMany": "最大30トリガーワードまで許可されています",
|
||||
"tooLong": "トリガーワードは500ワードを超えてはいけません",
|
||||
"tooMany": "最大100トリガーワードまで許可されています",
|
||||
"alreadyExists": "このトリガーワードは既に存在します",
|
||||
"updateSuccess": "トリガーワードが正常に更新されました",
|
||||
"updateFailed": "トリガーワードの更新に失敗しました",
|
||||
@@ -1686,6 +1851,8 @@
|
||||
"deleteFailed": "{type}の削除に失敗しました:{message}",
|
||||
"excludeSuccess": "{type}が正常に除外されました",
|
||||
"excludeFailed": "{type}の除外に失敗しました:{message}",
|
||||
"restoreSuccess": "{type}を復元しました",
|
||||
"restoreFailed": "{type}の復元に失敗しました: {message}",
|
||||
"fileNameUpdated": "ファイル名が正常に更新されました",
|
||||
"fileRenameFailed": "ファイル名の変更に失敗しました:{error}",
|
||||
"previewUpdated": "プレビューが正常に更新されました",
|
||||
@@ -1717,6 +1884,37 @@
|
||||
"moveFailed": "Failed to move item: {message}"
|
||||
}
|
||||
},
|
||||
"doctor": {
|
||||
"kicker": "システム診断",
|
||||
"title": "ドクター",
|
||||
"buttonTitle": "診断と一般的な修復を実行",
|
||||
"loading": "環境を確認中...",
|
||||
"footer": "修復後も問題が続く場合は、診断パッケージをエクスポートしてください。",
|
||||
"summary": {
|
||||
"idle": "設定、キャッシュ整合性、UI の一貫性をヘルスチェックします。",
|
||||
"ok": "現在の環境でアクティブな問題は見つかりませんでした。",
|
||||
"warning": "{count} 件の問題が見つかりました。ほとんどはこのパネルから直接修復できます。",
|
||||
"error": "アプリが完全に正常になる前に、{count} 件の問題に対処する必要があります。"
|
||||
},
|
||||
"status": {
|
||||
"ok": "正常",
|
||||
"warning": "要注意",
|
||||
"error": "対応が必要"
|
||||
},
|
||||
"actions": {
|
||||
"runAgain": "再実行",
|
||||
"exportBundle": "パッケージをエクスポート"
|
||||
},
|
||||
"toast": {
|
||||
"loadFailed": "診断の読み込みに失敗しました: {message}",
|
||||
"repairSuccess": "キャッシュの再構築が完了しました。",
|
||||
"repairFailed": "キャッシュの再構築に失敗しました: {message}",
|
||||
"exportSuccess": "診断パッケージをエクスポートしました。",
|
||||
"exportFailed": "診断パッケージのエクスポートに失敗しました: {message}",
|
||||
"conflictsResolved": "{count} 件のファイル名競合が解決されました。",
|
||||
"conflictsResolveFailed": "ファイル名競合の解決に失敗しました: {message}"
|
||||
}
|
||||
},
|
||||
"banners": {
|
||||
"versionMismatch": {
|
||||
"title": "アプリケーション更新が検出されました",
230  locales/ko.json
@@ -15,7 +15,8 @@
|
||||
"settings": "설정",
|
||||
"help": "도움말",
|
||||
"add": "추가",
|
||||
"close": "닫기"
|
||||
"close": "닫기",
|
||||
"menu": "메뉴"
|
||||
},
|
||||
"status": {
|
||||
"loading": "로딩 중...",
|
||||
@@ -175,6 +176,9 @@
|
||||
"success": "{count}개의 레시피가 성공적으로 복구되었습니다.",
|
||||
"cancelled": "수리가 취소되었습니다. {count}개의 레시피가 수리되었습니다.",
|
||||
"error": "레시피 복구 실패: {message}"
|
||||
},
|
||||
"manageExcludedModels": {
|
||||
"label": "제외된 모델 관리"
|
||||
}
|
||||
},
|
||||
"header": {
|
||||
@@ -222,12 +226,14 @@
|
||||
"presetOverwriteConfirm": "프리셋 \"{name}\"이(가) 이미 존재합니다. 덮어쓰시겠습니까?",
|
||||
"presetNamePlaceholder": "프리셋 이름...",
|
||||
"baseModel": "베이스 모델",
|
||||
"baseModelSearchPlaceholder": "베이스 모델 검색...",
|
||||
"modelTags": "태그 (상위 20개)",
|
||||
"modelTypes": "모델 유형",
|
||||
"license": "라이선스",
|
||||
"noCreditRequired": "크레딧 표기 없음",
|
||||
"allowSellingGeneratedContent": "판매 허용",
|
||||
"noTags": "태그 없음",
|
||||
"noBaseModelMatches": "현재 검색과 일치하는 베이스 모델이 없습니다.",
|
||||
"clearAll": "모든 필터 지우기",
|
||||
"any": "아무",
|
||||
"all": "모두",
|
||||
@@ -250,6 +256,33 @@
|
||||
"civitaiApiKey": "Civitai API 키",
|
||||
"civitaiApiKeyPlaceholder": "Civitai API 키를 입력하세요",
|
||||
"civitaiApiKeyHelp": "Civitai에서 모델을 다운로드할 때 인증에 사용됩니다",
|
||||
"civitaiHost": {
|
||||
"label": "Civitai 호스트",
|
||||
"help": "\"View on Civitai\" 링크를 사용할 때 어떤 Civitai 사이트를 열지 선택합니다.",
|
||||
"options": {
|
||||
"com": "civitai.com(SFW 전용)",
|
||||
"red": "civitai.red(무제한)"
|
||||
}
|
||||
},
|
||||
"downloadBackend": {
|
||||
"label": "다운로드 백엔드",
|
||||
"help": "모델 파일을 다운로드하는 방식을 선택합니다. Python은 내장 다운로더를 사용하고, aria2는 실험적인 외부 다운로더 프로세스를 사용합니다.",
|
||||
"options": {
|
||||
"python": "Python(내장)",
|
||||
"aria2": "aria2(실험적)"
|
||||
}
|
||||
},
|
||||
"aria2cPath": {
|
||||
"label": "aria2c 경로",
|
||||
"help": "aria2c 실행 파일의 선택적 경로입니다. 비워 두면 시스템 PATH의 aria2c를 사용합니다.",
|
||||
"placeholder": "비워 두면 PATH의 aria2c를 사용합니다"
|
||||
},
|
||||
"aria2HelpLink": "aria2 다운로드 백엔드 설정 방법 알아보기",
|
||||
"civitaiHostBanner": {
|
||||
"title": "Civitai 호스트 기본 설정 사용 가능",
|
||||
"content": "이제 Civitai는 SFW 콘텐츠에 civitai.com을, 무제한 콘텐츠에 civitai.red를 사용합니다. 설정에서 기본으로 열 사이트를 변경할 수 있습니다.",
|
||||
"openSettings": "설정 열기"
|
||||
},
|
||||
"openSettingsFileLocation": {
|
||||
"label": "설정 폴더 열기",
|
||||
"tooltip": "settings.json이 있는 폴더를 엽니다",
|
||||
@@ -260,10 +293,13 @@
|
||||
},
|
||||
"sections": {
|
||||
"contentFiltering": "콘텐츠 필터링",
|
||||
"downloads": "다운로드",
|
||||
"videoSettings": "비디오 설정",
|
||||
"layoutSettings": "레이아웃 설정",
|
||||
"misc": "기타",
|
||||
"backup": "백업",
|
||||
"folderSettings": "기본 루트",
|
||||
"recipeSettings": "레시피",
|
||||
"extraFolderPaths": "추가 폴다 경로",
|
||||
"downloadPathTemplates": "다운로드 경로 템플릿",
|
||||
"priorityTags": "우선순위 태그",
|
||||
@@ -291,7 +327,15 @@
|
||||
"blurNsfwContent": "NSFW 콘텐츠 블러 처리",
|
||||
"blurNsfwContentHelp": "성인(NSFW) 콘텐츠 미리보기 이미지를 블러 처리합니다",
|
||||
"showOnlySfw": "SFW 결과만 표시",
|
||||
"showOnlySfwHelp": "탐색 및 검색 시 모든 NSFW 콘텐츠를 필터링합니다"
|
||||
"showOnlySfwHelp": "탐색 및 검색 시 모든 NSFW 콘텐츠를 필터링합니다",
|
||||
"matureBlurThreshold": "성인 콘텐츠 블러 임계값",
|
||||
"matureBlurThresholdHelp": "NSFW 블러가 활성화될 때 어떤 등급 레벨부터 블러 필터링을 시작할지 설정합니다.",
|
||||
"matureBlurThresholdOptions": {
|
||||
"pg13": "PG13 이상",
|
||||
"r": "R 이상(기본값)",
|
||||
"x": "X 이상",
|
||||
"xxx": "XXX만"
|
||||
}
|
||||
},
|
||||
"videoSettings": {
|
||||
"autoplayOnHover": "호버 시 비디오 자동 재생",
|
||||
@@ -315,6 +359,54 @@
|
||||
"saveFailed": "건너뛰기 경로를 저장할 수 없습니다: {message}"
|
||||
}
|
||||
},
|
||||
"backup": {
|
||||
"autoEnabled": "자동 백업",
|
||||
"autoEnabledHelp": "하루에 한 번 로컬 스냅샷을 만들고 보존 정책에 따라 최신 스냅샷을 유지합니다.",
|
||||
"retention": "보존 개수",
|
||||
"retentionHelp": "오래된 자동 스냅샷을 삭제하기 전에 몇 개를 유지할지 지정합니다.",
|
||||
"management": "백업 관리",
|
||||
"managementHelp": "현재 사용자 상태를 내보내거나 백업 아카이브에서 복원합니다.",
|
||||
"scopeHelp": "설정, 다운로드 기록, 모델 업데이트 상태를 백업합니다. 모델 파일과 다시 생성할 수 있는 캐시는 포함되지 않습니다.",
|
||||
"locationSummary": "현재 백업 위치",
|
||||
"openFolderButton": "백업 폴더 열기",
|
||||
"openFolderSuccess": "백업 폴더를 열었습니다",
|
||||
"openFolderFailed": "백업 폴더를 열지 못했습니다",
|
||||
"locationCopied": "백업 경로를 클립보드에 복사했습니다: {{path}}",
|
||||
"locationClipboardFallback": "백업 경로: {{path}}",
|
||||
"exportButton": "백업 내보내기",
|
||||
"exportSuccess": "백업을 성공적으로 내보냈습니다.",
|
||||
"exportFailed": "백업 내보내기에 실패했습니다: {message}",
|
||||
"importButton": "백업 가져오기",
|
||||
"importConfirm": "이 백업을 가져와서 로컬 사용자 상태를 덮어쓰시겠습니까?",
|
||||
"importSuccess": "백업을 성공적으로 가져왔습니다.",
|
||||
"importFailed": "백업 가져오기에 실패했습니다: {message}",
|
||||
"latestSnapshot": "최근 스냅샷",
|
||||
"latestAutoSnapshot": "최근 자동 스냅샷",
|
||||
"snapshotCount": "저장된 스냅샷",
|
||||
"noneAvailable": "아직 스냅샷이 없습니다"
|
||||
},
|
||||
"downloadSkipBaseModels": {
|
||||
"label": "기본 모델 다운로드 건너뛰기",
|
||||
"help": "모든 다운로드 흐름에 적용됩니다. 여기서는 지원되는 기본 모델만 선택할 수 있습니다.",
|
||||
"searchPlaceholder": "기본 모델 필터링...",
|
||||
"empty": "현재 검색과 일치하는 기본 모델이 없습니다.",
|
||||
"summary": {
|
||||
"none": "선택 없음",
|
||||
"count": "{count}개 선택됨"
|
||||
},
|
||||
"actions": {
|
||||
"edit": "편집",
|
||||
"collapse": "접기",
|
||||
"clear": "지우기"
|
||||
},
|
||||
"validation": {
|
||||
"saveFailed": "제외된 기본 모델을 저장할 수 없습니다: {message}"
|
||||
}
|
||||
},
|
||||
"skipPreviouslyDownloadedModelVersions": {
|
||||
"label": "이전에 다운로드한 모델 버전 건너뛰기",
|
||||
"help": "활성화하면 다운로드 기록 서비스가 해당 버전이 이미 다운로드되었음을 기록한 경우 LoRA Manager는 해당 모델 버전 다운로드를 건너뜁니다. 모든 다운로드 플로우에 적용됩니다."
|
||||
},
|
||||
"layoutSettings": {
|
||||
"displayDensity": "표시 밀도",
|
||||
"displayDensityOptions": {
|
||||
@@ -337,6 +429,8 @@
|
||||
"hover": "호버 시 표시"
|
||||
},
|
||||
"cardInfoDisplayHelp": "모델 정보 및 액션 버튼을 언제 표시할지 선택하세요",
|
||||
"showVersionOnCard": "카드에 버전 표시",
|
||||
"showVersionOnCardHelp": "모델 카드에 버전 이름 표시 여부를 전환합니다",
|
||||
"modelCardFooterAction": "모델 카드 버튼 동작",
|
||||
"modelCardFooterActionOptions": {
|
||||
"exampleImages": "예시 이미지 열기",
|
||||
@@ -363,12 +457,16 @@
|
||||
"defaultUnetRootHelp": "다운로드, 가져오기 및 이동을 위한 기본 Diffusion Model (UNET) 루트 디렉토리를 설정합니다",
|
||||
"defaultEmbeddingRoot": "Embedding 루트",
|
||||
"defaultEmbeddingRootHelp": "다운로드, 가져오기 및 이동을 위한 기본 Embedding 루트 디렉토리를 설정합니다",
|
||||
"recipesPath": "레시피 저장 경로",
|
||||
"recipesPathHelp": "저장된 레시피를 위한 선택적 사용자 지정 디렉터리입니다. 비워 두면 첫 번째 LoRA 루트의 recipes 폴더를 사용합니다.",
|
||||
"recipesPathPlaceholder": "/path/to/recipes",
|
||||
"recipesPathMigrating": "레시피 저장 경로를 이동 중...",
|
||||
"noDefault": "기본값 없음"
|
||||
},
|
||||
"extraFolderPaths": {
|
||||
"title": "추가 폴다 경로",
|
||||
"help": "ComfyUI의 표준 경로 외부에 추가 모델 폴드를 추가하세요. 이러한 경로는 별도로 저장되며 기본 폴와 함께 스캔됩니다.",
|
||||
"description": "모델을 스캔하기 위한 추가 폴를 설정하세요. 이러한 경로는 LoRA Manager 특유의 것이며 ComfyUI의 기본 경로와 병합됩니다.",
|
||||
"description": "LoRA Manager 전용 추가 모델 루트 경로입니다. ComfyUI의 표준 폴더 외부 위치에서 모델을 로드하여 대규모 라이브러리로 인한 성능 저하를 방지합니다.",
|
||||
"restartRequired": "Requires restart to take effect",
|
||||
"modelTypes": {
|
||||
"lora": "LoRA 경로",
|
||||
"checkpoint": "Checkpoint 경로",
|
||||
@@ -376,7 +474,7 @@
|
||||
"embedding": "Embedding 경로"
|
||||
},
|
||||
"pathPlaceholder": "/추가/모델/경로",
|
||||
"saveSuccess": "추가 폴다 경로가 업데이트되었습니다.",
|
||||
"saveSuccess": "추가 폴다 경로가 업데이트되었습니다. 변경 사항을 적용하려면 재시작이 필요합니다.",
|
||||
"saveError": "추가 폴다 경로 업데이트 실패: {message}",
|
||||
"validation": {
|
||||
"duplicatePath": "이 경로는 이미 구성되어 있습니다"
|
||||
@@ -444,6 +542,21 @@
|
||||
"downloadLocationHelp": "Civitai의 예시 이미지가 저장될 폴더 경로를 입력하세요",
|
||||
"autoDownload": "예시 이미지 자동 다운로드",
|
||||
"autoDownloadHelp": "예시 이미지가 없는 모델의 예시 이미지를 자동으로 다운로드합니다 (다운로드 위치 설정 필요)",
|
||||
"openMode": "예시 이미지 열기 동작",
|
||||
"openModeHelp": "서버에서 열지, 매핑된 로컬 경로를 복사할지, 사용자 지정 URI를 실행할지 선택합니다.",
|
||||
"openModeOptions": {
|
||||
"system": "서버에서 열기",
|
||||
"clipboard": "로컬 경로 복사",
|
||||
"uriTemplate": "사용자 지정 URI 열기"
|
||||
},
|
||||
"localRoot": "로컬 예시 이미지 루트",
|
||||
"localRootHelp": "서버 예시 이미지 디렉터리를 반영하는 선택적 로컬 또는 마운트된 루트입니다. 비워 두면 서버 경로를 재사용합니다.",
|
||||
"localRootPlaceholder": "예: /Volumes/ComfyUI/example_images",
|
||||
"uriTemplate": "URI 템플릿 열기",
|
||||
"uriTemplateHelp": "파일 URI 또는 Shortcuts 링크 같은 사용자 지정 딥링크를 사용합니다.",
|
||||
"uriTemplatePlaceholder": "예: shortcuts://run-shortcut?name=Open%20Finder&input=text&text={{encoded_local_path}}",
|
||||
"uriTemplatePlaceholders": "사용 가능한 플레이스홀더: {{local_path}}, {{encoded_local_path}}, {{relative_path}}, {{encoded_relative_path}}, {{file_uri}}, {{encoded_file_uri}}",
|
||||
"openModeWikiLink": "원격 열기 모드에 대해 자세히 알아보기",
|
||||
"optimizeImages": "다운로드된 이미지 최적화",
|
||||
"optimizeImagesHelp": "파일 크기를 줄이고 로딩 속도를 향상시키기 위해 예시 이미지를 최적화합니다 (메타데이터는 보존됨)",
|
||||
"download": "다운로드",
|
||||
@@ -574,7 +687,8 @@
|
||||
"autoOrganize": "자동 정리 선택",
|
||||
"skipMetadataRefresh": "선택한 모델의 메타데이터 새로고침 건너뛰기",
|
||||
"resumeMetadataRefresh": "선택한 모델의 메타데이터 새로고침 재개",
|
||||
"deleteAll": "모든 모델 삭제",
|
||||
"deleteAll": "선택된 항목 삭제",
|
||||
"downloadMissingLoras": "누락된 LoRA 다운로드",
|
||||
"clear": "선택 지우기",
|
||||
"skipMetadataRefreshCount": "건너뛰기({count}개 모델)",
|
||||
"resumeMetadataRefreshCount": "재개({count}개 모델)",
|
||||
@@ -604,6 +718,7 @@
|
||||
"moveToFolder": "폴더로 이동",
|
||||
"repairMetadata": "메타데이터 복구",
|
||||
"excludeModel": "모델 제외",
|
||||
"restoreModel": "모델 복원",
|
||||
"deleteModel": "모델 삭제",
|
||||
"shareRecipe": "레시피 공유",
|
||||
"viewAllLoras": "모든 LoRA 보기",
|
||||
@@ -799,7 +914,8 @@
|
||||
"diffusion_model": "Diffusion Model"
|
||||
},
|
||||
"contextMenu": {
|
||||
"moveToOtherTypeFolder": "{otherType} 폴더로 이동"
|
||||
"moveToOtherTypeFolder": "{otherType} 폴더로 이동",
|
||||
"sendToWorkflow": "워크플로우로 전송"
|
||||
}
|
||||
},
|
||||
"embeddings": {
|
||||
@@ -812,8 +928,8 @@
|
||||
"unpinSidebar": "사이드바 고정 해제",
|
||||
"switchToListView": "목록 보기로 전환",
|
||||
"switchToTreeView": "트리 보기로 전환",
|
||||
"recursiveOn": "하위 폴더 검색",
|
||||
"recursiveOff": "현재 폴더만 검색",
|
||||
"recursiveOn": "하위 폴더 포함",
|
||||
"recursiveOff": "현재 폴더만",
|
||||
"recursiveUnavailable": "재귀 검색은 트리 보기에서만 사용할 수 있습니다",
|
||||
"collapseAllDisabled": "목록 보기에서는 사용할 수 없습니다",
|
||||
"dragDrop": {
|
||||
@@ -893,6 +1009,8 @@
|
||||
"earlyAccess": "얼리 액세스",
|
||||
"earlyAccessTooltip": "얼리 액세스 필요",
|
||||
"inLibrary": "라이브러리에 있음",
|
||||
"downloaded": "다운로드됨",
|
||||
"downloadedTooltip": "이전에 다운로드했지만 현재 라이브러리에 없습니다.",
|
||||
"alreadyInLibrary": "이미 라이브러리에 있음",
|
||||
"autoOrganizedPath": "[경로 템플릿으로 자동 정리됨]",
|
||||
"errors": {
|
||||
@@ -983,6 +1101,14 @@
|
||||
"save": "베이스 모델 업데이트",
|
||||
"cancel": "취소"
|
||||
},
|
||||
"bulkDownloadMissingLoras": {
|
||||
"title": "누락된 LoRA 다운로드",
|
||||
"message": "선택한 레시피에서 총 {totalCount}개 중 {uniqueCount}개의 고유한 누락된 LoRA를 찾았습니다.",
|
||||
"previewTitle": "다운로드할 LoRA:",
|
||||
"moreItems": "...그리고 {count}개 더",
|
||||
"note": "파일은 기본 경로 템플릿을 사용하여 다운로드됩니다. LoRA의 수에 따라 다소 시간이 걸릴 수 있습니다.",
|
||||
"downloadButton": "{count}개 LoRA 다운로드"
|
||||
},
|
||||
"exampleAccess": {
|
||||
"title": "로컬 예시 이미지",
|
||||
"message": "이 모델의 로컬 예시 이미지를 찾을 수 없습니다. 보기 옵션:",
|
||||
@@ -1034,7 +1160,9 @@
|
||||
"viewOnCivitai": "Civitai에서 보기",
|
||||
"viewOnCivitaiText": "Civitai에서 보기",
|
||||
"viewCreatorProfile": "제작자 프로필 보기",
|
||||
"openFileLocation": "파일 위치 열기"
|
||||
"openFileLocation": "파일 위치 열기",
|
||||
"sendToWorkflow": "ComfyUI로 보내기",
|
||||
"sendToWorkflowText": "ComfyUI로 보내기"
|
||||
},
|
||||
"openFileLocation": {
|
||||
"success": "파일 위치가 성공적으로 열렸습니다",
|
||||
@@ -1042,6 +1170,9 @@
|
||||
"copied": "경로가 클립보드에 복사되었습니다: {{path}}",
|
||||
"clipboardFallback": "경로: {{path}}"
|
||||
},
|
||||
"sendToWorkflow": {
|
||||
"noFilePath": "ComfyUI로 보낼 수 없습니다: 파일 경로가 없습니다"
|
||||
},
|
||||
"metadata": {
|
||||
"version": "버전",
|
||||
"fileName": "파일명",
|
||||
@@ -1078,6 +1209,8 @@
|
||||
"cancel": "편집 취소",
|
||||
"save": "변경사항 저장",
|
||||
"addPlaceholder": "입력하거나 아래 제안을 클릭하세요",
|
||||
"editWord": "트리거 단어 편집",
|
||||
"editPlaceholder": "트리거 단어 편집",
|
||||
"copyWord": "트리거 단어 복사",
|
||||
"deleteWord": "트리거 단어 삭제",
|
||||
"suggestions": {
|
||||
@@ -1149,17 +1282,33 @@
|
||||
"days": "{count}일 후"
|
||||
},
|
||||
"badges": {
|
||||
"current": "현재 버전",
|
||||
"current": "열린 버전",
|
||||
"currentTooltip": "이 모달을 열 때 사용한 버전입니다",
|
||||
"inLibrary": "라이브러리에 있음",
|
||||
"inLibraryTooltip": "이 버전은 로컬 라이브러리에 있습니다",
|
||||
"downloaded": "다운로드됨",
|
||||
"downloadedTooltip": "이 버전은 이전에 다운로드되었지만 현재는 라이브러리에 없습니다",
|
||||
"newer": "최신 버전",
|
||||
"newerTooltip": "이 버전은 로컬의 최신 버전보다 더 새롭습니다",
|
||||
"earlyAccess": "얼리 액세스",
|
||||
"ignored": "무시됨"
|
||||
"earlyAccessTooltip": "이 버전은 현재 Civitai 얼리 액세스가 필요합니다",
|
||||
"ignored": "무시됨",
|
||||
"ignoredTooltip": "이 버전은 업데이트 알림이 비활성화되어 있습니다",
|
||||
"onSiteOnly": "사이트 내 전용",
|
||||
"onSiteOnlyTooltip": "이 버전은 Civitai 사이트 내에서만 사용 가능하며 다운로드할 수 없습니다"
|
||||
},
|
||||
"actions": {
|
||||
"download": "다운로드",
|
||||
"downloadTooltip": "이 버전 다운로드",
|
||||
"downloadEarlyAccessTooltip": "Civitai에서 이 얼리 액세스 버전 다운로드",
|
||||
"downloadNotAllowedTooltip": "이 버전은 Civitai 사이트 내에서만 사용 가능하며 다운로드할 수 없습니다",
|
||||
"delete": "삭제",
|
||||
"deleteTooltip": "이 로컬 버전 삭제",
|
||||
"ignore": "무시",
|
||||
"unignore": "무시 해제",
|
||||
"ignoreTooltip": "이 버전의 업데이트 알림 무시",
|
||||
"unignoreTooltip": "이 버전의 업데이트 알림 다시 받기",
|
||||
"viewVersionOnCivitai": "Civitai에서 버전 보기",
|
||||
"earlyAccessTooltip": "얼리 액세스 구매 필요",
|
||||
"resumeModelUpdates": "이 모델 업데이트 재개",
|
||||
"ignoreModelUpdates": "이 모델 업데이트 무시",
|
||||
@@ -1299,7 +1448,9 @@
|
||||
"recipeReplaced": "레시피가 워크플로에서 교체되었습니다",
|
||||
"recipeFailedToSend": "레시피를 워크플로로 전송하지 못했습니다",
|
||||
"noMatchingNodes": "현재 워크플로에서 호환되는 노드가 없습니다",
|
||||
"noTargetNodeSelected": "대상 노드가 선택되지 않았습니다"
|
||||
"noTargetNodeSelected": "대상 노드가 선택되지 않았습니다",
|
||||
"modelUpdated": "모델이 워크플로에서 업데이트되었습니다",
|
||||
"modelFailed": "모델 노드 업데이트 실패"
|
||||
},
|
||||
"nodeSelector": {
|
||||
"recipe": "레시피",
|
||||
@@ -1313,6 +1464,10 @@
|
||||
"opened": "예시 이미지 폴더가 열렸습니다",
|
||||
"openingFolder": "예시 이미지 폴더를 여는 중",
|
||||
"failedToOpen": "예시 이미지 폴더 열기 실패",
|
||||
"copiedPath": "경로를 클립보드에 복사했습니다: {{path}}",
|
||||
"clipboardFallback": "경로: {{path}}",
|
||||
"copiedUri": "링크를 클립보드에 복사했습니다: {{uri}}",
|
||||
"uriClipboardFallback": "링크: {{uri}}",
|
||||
"setupRequired": "예시 이미지 저장소",
|
||||
"setupDescription": "사용자 지정 예시 이미지를 추가하려면 먼저 다운로드 위치를 설정해야 합니다.",
|
||||
"setupUsage": "이 경로는 다운로드한 예시 이미지와 사용자 지정 이미지 모두에 사용됩니다.",
|
||||
@@ -1450,6 +1605,7 @@
|
||||
"pleaseSelectVersion": "버전을 선택해주세요",
|
||||
"versionExists": "이 버전은 이미 라이브러리에 있습니다",
|
||||
"downloadCompleted": "다운로드가 성공적으로 완료되었습니다",
|
||||
"downloadSkippedByBaseModel": "기본 모델 {baseModel}이(가) 제외되어 다운로드를 건너뛰었습니다",
|
||||
"autoOrganizeSuccess": "{count}개의 {type}에 대해 자동 정리가 성공적으로 완료되었습니다",
|
||||
"autoOrganizePartialSuccess": "자동 정리 완료: 전체 {total}개 중 {success}개 이동, {failures}개 실패",
|
||||
"autoOrganizeFailed": "자동 정리 실패: {error}",
|
||||
@@ -1469,7 +1625,11 @@
|
||||
"nameUpdated": "레시피 이름이 성공적으로 업데이트되었습니다",
|
||||
"tagsUpdated": "레시피 태그가 성공적으로 업데이트되었습니다",
|
||||
"sourceUrlUpdated": "소스 URL이 성공적으로 업데이트되었습니다",
|
||||
"promptUpdated": "프롬프트가 성공적으로 업데이트되었습니다",
|
||||
"negativePromptUpdated": "네거티브 프롬프트가 성공적으로 업데이트되었습니다",
|
||||
"promptEditorHint": "Enter 키를 눌러 저장, Shift+Enter로 새 줄",
|
||||
"noRecipeId": "사용 가능한 레시피 ID가 없습니다",
|
||||
"sendToWorkflowFailed": "워크플로우에 레시피 보내기 실패: {message}",
|
||||
"copyFailed": "레시피 문법 복사 오류: {message}",
|
||||
"noMissingLoras": "다운로드할 누락된 LoRA가 없습니다",
|
||||
"missingLorasInfoFailed": "누락된 LoRA 정보를 가져오는데 실패했습니다",
|
||||
@@ -1507,7 +1667,10 @@
|
||||
"batchImportNoUrls": "Please enter at least one URL or file path",
|
||||
"batchImportNoDirectory": "Please enter a directory path",
|
||||
"batchImportBrowseFailed": "Failed to browse directory: {message}",
|
||||
"batchImportDirectorySelected": "Directory selected: {path}"
|
||||
"batchImportDirectorySelected": "Directory selected: {path}",
|
||||
"noRecipesSelected": "선택한 레시피가 없습니다",
|
||||
"noMissingLorasInSelection": "선택한 레시피에서 누락된 LoRA를 찾을 수 없습니다",
|
||||
"noLoraRootConfigured": "LoRA 루트 디렉토리가 구성되지 않았습니다. 설정에서 기본 LoRA 루트를 설정하세요."
|
||||
},
|
||||
"models": {
|
||||
"noModelsSelected": "선택된 모델이 없습니다",
|
||||
@@ -1574,6 +1737,8 @@
|
||||
"mappingSaveFailed": "베이스 모델 매핑 저장 실패: {message}",
|
||||
"downloadTemplatesUpdated": "다운로드 경로 템플릿이 업데이트되었습니다",
|
||||
"downloadTemplatesFailed": "다운로드 경로 템플릿 저장 실패: {message}",
|
||||
"recipesPathUpdated": "레시피 저장 경로가 업데이트되었습니다",
|
||||
"recipesPathSaveFailed": "레시피 저장 경로 업데이트 실패: {message}",
|
||||
"settingsUpdated": "설정 업데이트됨: {setting}",
|
||||
"compactModeToggled": "컴팩트 모드 {state}",
|
||||
"settingSaveFailed": "설정 저장 실패: {message}",
|
||||
@@ -1624,8 +1789,8 @@
|
||||
},
|
||||
"triggerWords": {
|
||||
"loadFailed": "학습된 단어를 로딩할 수 없습니다",
|
||||
"tooLong": "트리거 단어는 100단어를 초과할 수 없습니다",
|
||||
"tooMany": "최대 30개의 트리거 단어만 허용됩니다",
|
||||
"tooLong": "트리거 단어는 500단어를 초과할 수 없습니다",
|
||||
"tooMany": "최대 100개의 트리거 단어만 허용됩니다",
|
||||
"alreadyExists": "이 트리거 단어는 이미 존재합니다",
|
||||
"updateSuccess": "트리거 단어가 성공적으로 업데이트되었습니다",
|
||||
"updateFailed": "트리거 단어 업데이트에 실패했습니다",
|
||||
@@ -1686,6 +1851,8 @@
|
||||
"deleteFailed": "{type} 삭제 실패: {message}",
|
||||
"excludeSuccess": "{type}이(가) 성공적으로 제외되었습니다",
|
||||
"excludeFailed": "{type} 제외 실패: {message}",
|
||||
"restoreSuccess": "{type} 복원 완료",
|
||||
"restoreFailed": "{type} 복원 실패: {message}",
|
||||
"fileNameUpdated": "파일명이 성공적으로 업데이트되었습니다",
|
||||
"fileRenameFailed": "파일 이름 변경 실패: {error}",
|
||||
"previewUpdated": "미리보기가 성공적으로 업데이트되었습니다",
|
||||
@@ -1717,6 +1884,37 @@
|
||||
"moveFailed": "Failed to move item: {message}"
|
||||
}
|
||||
},
|
||||
"doctor": {
|
||||
"kicker": "시스템 진단",
|
||||
"title": "닥터",
|
||||
"buttonTitle": "진단 및 일반적인 수정 실행",
|
||||
"loading": "환경을 확인하는 중...",
|
||||
"footer": "수리 후에도 문제가 계속되면 진단 번들을 내보내세요.",
|
||||
"summary": {
|
||||
"idle": "설정, 캐시 무결성, UI 일관성에 대한 상태 검사를 실행합니다.",
|
||||
"ok": "현재 환경에서 활성 문제를 찾지 못했습니다.",
|
||||
"warning": "{count}개의 문제가 발견되었습니다. 대부분은 이 패널에서 바로 해결할 수 있습니다.",
|
||||
"error": "앱이 완전히 정상 상태가 되기 전에 {count}개의 문제를 처리해야 합니다."
|
||||
},
|
||||
"status": {
|
||||
"ok": "정상",
|
||||
"warning": "주의 필요",
|
||||
"error": "조치 필요"
|
||||
},
|
||||
"actions": {
|
||||
"runAgain": "다시 실행",
|
||||
"exportBundle": "번들 내보내기"
|
||||
},
|
||||
"toast": {
|
||||
"loadFailed": "진단 로드 실패: {message}",
|
||||
"repairSuccess": "캐시 재구성이 완료되었습니다.",
|
||||
"repairFailed": "캐시 재구성 실패: {message}",
|
||||
"exportSuccess": "진단 번들이 내보내졌습니다.",
|
||||
"exportFailed": "진단 번들 내보내기 실패: {message}",
|
||||
"conflictsResolved": "{count}개 파일명 충돌이 해결되었습니다.",
|
||||
"conflictsResolveFailed": "파일명 충돌 해결 실패: {message}"
|
||||
}
|
||||
},
|
||||
"banners": {
|
||||
"versionMismatch": {
|
||||
"title": "애플리케이션 업데이트 감지",
230  locales/ru.json
@@ -15,7 +15,8 @@
|
||||
"settings": "Настройки",
|
||||
"help": "Справка",
|
||||
"add": "Добавить",
|
||||
"close": "Закрыть"
|
||||
"close": "Закрыть",
|
||||
"menu": "Меню"
|
||||
},
|
||||
"status": {
|
||||
"loading": "Загрузка...",
|
||||
@@ -175,6 +176,9 @@
|
||||
"success": "Успешно восстановлено {count} рецептов.",
|
||||
"cancelled": "Восстановление отменено. {count} рецептов было восстановлено.",
|
||||
"error": "Ошибка восстановления рецептов: {message}"
|
||||
},
|
||||
"manageExcludedModels": {
|
||||
"label": "Управление исключёнными моделями"
|
||||
}
|
||||
},
|
||||
"header": {
|
||||
@@ -222,12 +226,14 @@
|
||||
"presetOverwriteConfirm": "Пресет \"{name}\" уже существует. Перезаписать?",
|
||||
"presetNamePlaceholder": "Имя пресета...",
|
||||
"baseModel": "Базовая модель",
|
||||
"baseModelSearchPlaceholder": "Поиск базовых моделей...",
|
||||
"modelTags": "Теги (Топ 20)",
|
||||
"modelTypes": "Типы моделей",
|
||||
"license": "Лицензия",
|
||||
"noCreditRequired": "Без указания авторства",
|
||||
"allowSellingGeneratedContent": "Продажа разрешена",
|
||||
"noTags": "Без тегов",
|
||||
"noBaseModelMatches": "Нет базовых моделей, соответствующих текущему поиску.",
|
||||
"clearAll": "Очистить все фильтры",
|
||||
"any": "Любой",
|
||||
"all": "Все",
|
||||
@@ -250,6 +256,33 @@
|
||||
"civitaiApiKey": "Ключ API Civitai",
|
||||
"civitaiApiKeyPlaceholder": "Введите ваш ключ API Civitai",
|
||||
"civitaiApiKeyHelp": "Используется для аутентификации при загрузке моделей с Civitai",
|
||||
"civitaiHost": {
|
||||
"label": "Хост Civitai",
|
||||
"help": "Выберите, какой сайт Civitai будет открываться при использовании ссылок «View on Civitai».",
|
||||
"options": {
|
||||
"com": "civitai.com (только SFW)",
|
||||
"red": "civitai.red (без ограничений)"
|
||||
}
|
||||
},
|
||||
"downloadBackend": {
|
||||
"label": "Бэкенд загрузки",
|
||||
"help": "Выберите способ загрузки файлов моделей. Python использует встроенный загрузчик. aria2 использует экспериментальный внешний процесс загрузки.",
|
||||
"options": {
|
||||
"python": "Python (встроенный)",
|
||||
"aria2": "aria2 (экспериментальный)"
|
||||
}
|
||||
},
|
||||
"aria2cPath": {
|
||||
"label": "Путь к aria2c",
|
||||
"help": "Необязательный путь к исполняемому файлу aria2c. Оставьте пустым, чтобы использовать aria2c из системного PATH.",
|
||||
"placeholder": "Оставьте пустым, чтобы использовать aria2c из PATH"
|
||||
},
|
||||
"aria2HelpLink": "Узнайте, как настроить сервер загрузки aria2",
|
||||
"civitaiHostBanner": {
|
||||
"title": "Доступна настройка хоста Civitai",
|
||||
"content": "Теперь Civitai использует civitai.com для контента SFW и civitai.red для контента без ограничений. В настройках можно изменить, какой сайт открывать по умолчанию.",
|
||||
"openSettings": "Открыть настройки"
|
||||
},
|
||||
"openSettingsFileLocation": {
|
||||
"label": "Открыть папку настроек",
|
||||
"tooltip": "Открыть папку, содержащую settings.json",
|
||||
@@ -260,10 +293,13 @@
|
||||
},
|
||||
"sections": {
|
||||
"contentFiltering": "Фильтрация контента",
|
||||
"downloads": "Загрузки",
|
||||
"videoSettings": "Настройки видео",
|
||||
"layoutSettings": "Настройки макета",
|
||||
"misc": "Разное",
|
||||
"backup": "Резервные копии",
|
||||
"folderSettings": "Корневые папки",
|
||||
"recipeSettings": "Рецепты",
|
||||
"extraFolderPaths": "Дополнительные пути к папкам",
|
||||
"downloadPathTemplates": "Шаблоны путей загрузки",
|
||||
"priorityTags": "Приоритетные теги",
|
||||
@@ -291,7 +327,15 @@
|
||||
"blurNsfwContent": "Размывать NSFW контент",
|
||||
"blurNsfwContentHelp": "Размывать превью изображений контента для взрослых (NSFW)",
|
||||
"showOnlySfw": "Показывать только SFW результаты",
|
||||
"showOnlySfwHelp": "Фильтровать весь NSFW контент при просмотре и поиске"
|
||||
"showOnlySfwHelp": "Фильтровать весь NSFW контент при просмотре и поиске",
|
||||
"matureBlurThreshold": "Порог размытия взрослого контента",
|
||||
"matureBlurThresholdHelp": "Установить, с какого уровня рейтинга начинается размытие при включенном размытии NSFW.",
|
||||
"matureBlurThresholdOptions": {
|
||||
"pg13": "PG13 и выше",
|
||||
"r": "R и выше (по умолчанию)",
|
||||
"x": "X и выше",
|
||||
"xxx": "Только XXX"
|
||||
}
|
||||
},
|
||||
"videoSettings": {
|
||||
"autoplayOnHover": "Автовоспроизведение видео при наведении",
|
||||
@@ -315,6 +359,54 @@
|
||||
"saveFailed": "Не удалось сохранить пути для пропуска: {message}"
|
||||
}
|
||||
},
|
||||
"backup": {
|
||||
"autoEnabled": "Автоматические резервные копии",
|
||||
"autoEnabledHelp": "Создаёт локальный снимок раз в день и хранит последние снимки согласно политике хранения.",
|
||||
"retention": "Количество хранения",
|
||||
"retentionHelp": "Сколько автоматических снимков сохранять перед удалением старых.",
|
||||
"management": "Управление резервными копиями",
|
||||
"managementHelp": "Экспортируйте текущее состояние пользователя или восстановите его из архива резервной копии.",
|
||||
"scopeHelp": "Резервная копия включает ваши настройки, историю загрузок и состояние обновлений моделей. Файлы моделей и пересоздаваемые кэши не входят.",
|
||||
"locationSummary": "Текущее расположение резервных копий",
|
||||
"openFolderButton": "Открыть папку резервных копий",
|
||||
"openFolderSuccess": "Папка резервных копий открыта",
|
||||
"openFolderFailed": "Не удалось открыть папку резервных копий",
|
||||
"locationCopied": "Путь к резервной копии скопирован в буфер обмена: {{path}}",
|
||||
"locationClipboardFallback": "Путь к резервной копии: {{path}}",
|
||||
"exportButton": "Экспортировать резервную копию",
|
||||
"exportSuccess": "Резервная копия успешно экспортирована.",
|
||||
"exportFailed": "Не удалось экспортировать резервную копию: {message}",
|
||||
"importButton": "Импортировать резервную копию",
|
||||
"importConfirm": "Импортировать эту резервную копию и перезаписать локальное состояние пользователя?",
|
||||
"importSuccess": "Резервная копия успешно импортирована.",
|
||||
"importFailed": "Не удалось импортировать резервную копию: {message}",
|
||||
"latestSnapshot": "Последний снимок",
|
||||
"latestAutoSnapshot": "Последний автоматический снимок",
|
||||
"snapshotCount": "Сохранённые снимки",
|
||||
"noneAvailable": "Снимков пока нет"
|
||||
},
|
||||
"downloadSkipBaseModels": {
|
||||
"label": "Пропускать загрузки для базовых моделей",
|
||||
"help": "Применяется ко всем сценариям загрузки. Здесь можно выбрать только поддерживаемые базовые модели.",
|
||||
"searchPlaceholder": "Фильтровать базовые модели...",
|
||||
"empty": "Нет базовых моделей, соответствующих текущему поиску.",
|
||||
"summary": {
|
||||
"none": "Ничего не выбрано",
|
||||
"count": "Выбрано: {count}"
|
||||
},
|
||||
"actions": {
|
||||
"edit": "Изменить",
|
||||
"collapse": "Свернуть",
|
||||
"clear": "Очистить"
|
||||
},
|
||||
"validation": {
|
||||
"saveFailed": "Не удалось сохранить исключённые базовые модели: {message}"
|
||||
}
|
||||
},
|
||||
"skipPreviouslyDownloadedModelVersions": {
|
||||
"label": "Пропускать ранее загруженные версии моделей",
|
||||
"help": "Если включено, LoRA Manager будет пропускать загрузку версии модели, если сервис истории загрузок записал, что эта конкретная версия уже загружена. Применяется ко всем потокам загрузки."
|
||||
},
|
||||
"layoutSettings": {
|
||||
"displayDensity": "Плотность отображения",
|
||||
"displayDensityOptions": {
|
||||
@@ -337,6 +429,8 @@
|
||||
"hover": "Показать при наведении"
|
||||
},
|
||||
"cardInfoDisplayHelp": "Выберите когда отображать информацию о модели и кнопки действий",
|
||||
"showVersionOnCard": "Показывать версию на карточке",
|
||||
"showVersionOnCardHelp": "Показать или скрыть название версии на карточках моделей",
|
||||
"modelCardFooterAction": "Действие кнопки карточки модели",
|
||||
"modelCardFooterActionOptions": {
|
||||
"exampleImages": "Открыть примеры изображений",
|
||||
@@ -363,12 +457,16 @@
|
||||
"defaultUnetRootHelp": "Установить корневую папку Diffusion Model (UNET) по умолчанию для загрузок, импорта и перемещений",
|
||||
"defaultEmbeddingRoot": "Корневая папка Embedding",
|
||||
"defaultEmbeddingRootHelp": "Установить корневую папку embedding по умолчанию для загрузок, импорта и перемещений",
|
||||
"recipesPath": "Путь хранения рецептов",
|
||||
"recipesPathHelp": "Дополнительный пользовательский каталог для сохранённых рецептов. Оставьте пустым, чтобы использовать папку recipes в первом корне LoRA.",
|
||||
"recipesPathPlaceholder": "/path/to/recipes",
|
||||
"recipesPathMigrating": "Перенос хранилища рецептов...",
|
||||
"noDefault": "Не задано"
|
||||
},
|
||||
"extraFolderPaths": {
|
||||
"title": "Дополнительные пути к папкам",
|
||||
"help": "Добавьте дополнительные папки моделей за пределами стандартных путей ComfyUI. Эти пути хранятся отдельно и сканируются вместе с папками по умолчанию.",
|
||||
"description": "Настройте дополнительные папки для сканирования моделей. Эти пути специфичны для LoRA Manager и будут объединены с путями по умолчанию ComfyUI.",
|
||||
"description": "Дополнительные корневые пути моделей, эксклюзивные для LoRA Manager. Загружайте модели из расположений за пределами стандартных папок ComfyUI — идеально подходит для больших библиотек, которые иначе замедлили бы ComfyUI.",
|
||||
"restartRequired": "Requires restart to take effect",
|
||||
"modelTypes": {
|
||||
"lora": "Пути LoRA",
|
||||
"checkpoint": "Пути Checkpoint",
|
||||
@@ -376,7 +474,7 @@
|
||||
"embedding": "Пути Embedding"
|
||||
},
|
||||
"pathPlaceholder": "/путь/к/дополнительным/моделям",
|
||||
"saveSuccess": "Дополнительные пути к папкам обновлены.",
|
||||
"saveSuccess": "Дополнительные пути к папкам обновлены. Требуется перезапуск для применения изменений.",
|
||||
"saveError": "Не удалось обновить дополнительные пути к папкам: {message}",
|
||||
"validation": {
|
||||
"duplicatePath": "Этот путь уже настроен"
|
||||
@@ -444,6 +542,21 @@
|
||||
"downloadLocationHelp": "Введите путь к папке, где будут сохраняться примеры изображений с Civitai",
|
||||
"autoDownload": "Автозагрузка примеров изображений",
|
||||
"autoDownloadHelp": "Автоматически загружать примеры изображений для моделей, у которых их нет (требует настройки места загрузки)",
|
||||
"openMode": "Действие открытия примеров изображений",
|
||||
"openModeHelp": "Выберите, будет ли действие открывать папку на сервере, копировать сопоставленный локальный путь или запускать пользовательский URI.",
|
||||
"openModeOptions": {
|
||||
"system": "Открыть на сервере",
|
||||
"clipboard": "Скопировать локальный путь",
|
||||
"uriTemplate": "Открыть пользовательский URI"
|
||||
},
|
||||
"localRoot": "Локальный корень примеров изображений",
|
||||
"localRootHelp": "Необязательный локальный или смонтированный корневой путь, отражающий каталог примеров изображений на сервере. Если оставить пустым, будет использован путь сервера.",
|
||||
"localRootPlaceholder": "Пример: /Volumes/ComfyUI/example_images",
|
||||
"uriTemplate": "Шаблон URI для открытия",
|
||||
"uriTemplateHelp": "Используйте пользовательскую deep link-ссылку, например file URI или ссылку Shortcuts.",
|
||||
"uriTemplatePlaceholder": "Пример: shortcuts://run-shortcut?name=Open%20Finder&input=text&text={{encoded_local_path}}",
|
||||
"uriTemplatePlaceholders": "Доступные плейсхолдеры: {{local_path}}, {{encoded_local_path}}, {{relative_path}}, {{encoded_relative_path}}, {{file_uri}}, {{encoded_file_uri}}",
|
||||
"openModeWikiLink": "Подробнее об удаленных режимах открытия",
|
||||
"optimizeImages": "Оптимизировать загруженные изображения",
|
||||
"optimizeImagesHelp": "Оптимизировать примеры изображений для уменьшения размера файла и улучшения скорости загрузки (метаданные будут сохранены)",
|
||||
"download": "Загрузить",
|
||||
@@ -574,7 +687,8 @@
|
||||
"autoOrganize": "Автоматически организовать выбранные",
|
||||
"skipMetadataRefresh": "Пропустить обновление метаданных для выбранных",
|
||||
"resumeMetadataRefresh": "Возобновить обновление метаданных для выбранных",
|
||||
"deleteAll": "Удалить все модели",
|
||||
"deleteAll": "Удалить выбранные",
|
||||
"downloadMissingLoras": "Скачать отсутствующие LoRAs",
|
||||
"clear": "Очистить выбор",
|
||||
"skipMetadataRefreshCount": "Пропустить({count} моделей)",
|
||||
"resumeMetadataRefreshCount": "Возобновить({count} моделей)",
|
||||
@@ -604,6 +718,7 @@
|
||||
"moveToFolder": "Переместить в папку",
|
||||
"repairMetadata": "Восстановить метаданные",
|
||||
"excludeModel": "Исключить модель",
|
||||
"restoreModel": "Восстановить модель",
|
||||
"deleteModel": "Удалить модель",
|
||||
"shareRecipe": "Поделиться рецептом",
|
||||
"viewAllLoras": "Посмотреть все LoRAs",
|
||||
@@ -799,7 +914,8 @@
|
||||
"diffusion_model": "Diffusion Model"
|
||||
},
|
||||
"contextMenu": {
|
||||
"moveToOtherTypeFolder": "Переместить в папку {otherType}"
|
||||
"moveToOtherTypeFolder": "Переместить в папку {otherType}",
|
||||
"sendToWorkflow": "Отправить в workflow"
|
||||
}
|
||||
},
|
||||
"embeddings": {
|
||||
@@ -812,8 +928,8 @@
|
||||
"unpinSidebar": "Открепить боковую панель",
|
||||
"switchToListView": "Переключить на вид списка",
|
||||
"switchToTreeView": "Переключить на древовидный вид",
|
||||
"recursiveOn": "Искать во вложенных папках",
|
||||
"recursiveOff": "Искать только в текущей папке",
|
||||
"recursiveOn": "Включать вложенные папки",
|
||||
"recursiveOff": "Только текущая папка",
|
||||
"recursiveUnavailable": "Рекурсивный поиск доступен только в режиме дерева",
|
||||
"collapseAllDisabled": "Недоступно в виде списка",
|
||||
"dragDrop": {
|
||||
@@ -893,6 +1009,8 @@
|
||||
"earlyAccess": "Ранний доступ",
|
||||
"earlyAccessTooltip": "Требуется ранний доступ",
|
||||
"inLibrary": "В библиотеке",
|
||||
"downloaded": "Загружено",
|
||||
"downloadedTooltip": "Ранее загружено, но сейчас этого нет в вашей библиотеке.",
|
||||
"alreadyInLibrary": "Уже в библиотеке",
|
||||
"autoOrganizedPath": "[Автоматически организовано по шаблону пути]",
|
||||
"errors": {
|
||||
@@ -983,6 +1101,14 @@
|
||||
"save": "Обновить базовую модель",
|
||||
"cancel": "Отмена"
|
||||
},
|
||||
"bulkDownloadMissingLoras": {
|
||||
"title": "Скачать отсутствующие LoRAs",
|
||||
"message": "Найдено {uniqueCount} уникальных отсутствующих LoRAs (из {totalCount} всего в выбранных рецептах).",
|
||||
"previewTitle": "LoRAs для скачивания:",
|
||||
"moreItems": "...и еще {count}",
|
||||
"note": "Файлы будут скачаны с использованием шаблонов путей по умолчанию. Это может занять некоторое время в зависимости от количества LoRAs.",
|
||||
"downloadButton": "Скачать {count} LoRA(s)"
|
||||
},
|
||||
"exampleAccess": {
|
||||
"title": "Локальные примеры изображений",
|
||||
"message": "Локальные примеры изображений для этой модели не найдены. Варианты просмотра:",
|
||||
@@ -1034,7 +1160,9 @@
|
||||
"viewOnCivitai": "Посмотреть на Civitai",
|
||||
"viewOnCivitaiText": "Посмотреть на Civitai",
|
||||
"viewCreatorProfile": "Посмотреть профиль создателя",
|
||||
"openFileLocation": "Открыть расположение файла"
|
||||
"openFileLocation": "Открыть расположение файла",
|
||||
"sendToWorkflow": "Отправить в ComfyUI",
|
||||
"sendToWorkflowText": "Отправить в ComfyUI"
|
||||
},
|
||||
"openFileLocation": {
|
||||
"success": "Расположение файла успешно открыто",
|
||||
@@ -1042,6 +1170,9 @@
|
||||
"copied": "Путь скопирован в буфер обмена: {{path}}",
|
||||
"clipboardFallback": "Путь: {{path}}"
|
||||
},
|
||||
"sendToWorkflow": {
|
||||
"noFilePath": "Невозможно отправить в ComfyUI: путь к файлу недоступен"
|
||||
},
|
||||
"metadata": {
|
||||
"version": "Версия",
|
||||
"fileName": "Имя файла",
|
||||
@@ -1078,6 +1209,8 @@
|
||||
"cancel": "Отменить редактирование",
|
||||
"save": "Сохранить изменения",
|
||||
"addPlaceholder": "Введите для добавления или нажмите на предложения ниже",
|
||||
"editWord": "Редактировать триггерное слово",
|
||||
"editPlaceholder": "Редактировать триггерное слово",
|
||||
"copyWord": "Копировать триггерное слово",
|
||||
"deleteWord": "Удалить триггерное слово",
|
||||
"suggestions": {
|
||||
@@ -1149,17 +1282,33 @@
|
||||
"days": "через {count}д"
|
||||
},
|
||||
"badges": {
|
||||
"current": "Текущая версия",
|
||||
"current": "Открытая версия",
|
||||
"currentTooltip": "Это версия, с которой было открыто это окно",
|
||||
"inLibrary": "В библиотеке",
|
||||
"inLibraryTooltip": "Эта версия есть в вашей локальной библиотеке",
|
||||
"downloaded": "Загружено",
|
||||
"downloadedTooltip": "Эта версия уже загружалась, но сейчас отсутствует в вашей библиотеке",
|
||||
"newer": "Более новая версия",
|
||||
"newerTooltip": "Эта версия новее вашей последней локальной версии",
|
||||
"earlyAccess": "Ранний доступ",
|
||||
"ignored": "Игнорируется"
|
||||
"earlyAccessTooltip": "Для этой версии сейчас требуется ранний доступ Civitai",
|
||||
"ignored": "Игнорируется",
|
||||
"ignoredTooltip": "Уведомления об обновлениях для этой версии отключены",
|
||||
"onSiteOnly": "Только на Сайте",
|
||||
"onSiteOnlyTooltip": "Эта версия доступна только для генерации на сайте Civitai"
|
||||
},
|
||||
"actions": {
|
||||
"download": "Скачать",
|
||||
"downloadTooltip": "Скачать эту версию",
|
||||
"downloadEarlyAccessTooltip": "Скачать эту версию раннего доступа с Civitai",
|
||||
"downloadNotAllowedTooltip": "Эта версия доступна только для генерации на сайте Civitai",
|
||||
"delete": "Удалить",
|
||||
"deleteTooltip": "Удалить эту локальную версию",
|
||||
"ignore": "Игнорировать",
|
||||
"unignore": "Перестать игнорировать",
|
||||
"ignoreTooltip": "Игнорировать уведомления об обновлениях для этой версии",
|
||||
"unignoreTooltip": "Возобновить уведомления об обновлениях для этой версии",
|
||||
"viewVersionOnCivitai": "Посмотреть версию на Civitai",
|
||||
"earlyAccessTooltip": "Требуется покупка раннего доступа",
|
||||
"resumeModelUpdates": "Возобновить обновления для этой модели",
|
||||
"ignoreModelUpdates": "Игнорировать обновления для этой модели",
|
||||
@@ -1299,7 +1448,9 @@
|
||||
"recipeReplaced": "Рецепт заменён в workflow",
|
||||
"recipeFailedToSend": "Не удалось отправить рецепт в workflow",
|
||||
"noMatchingNodes": "В текущем workflow нет совместимых узлов",
|
||||
"noTargetNodeSelected": "Целевой узел не выбран"
|
||||
"noTargetNodeSelected": "Целевой узел не выбран",
|
||||
"modelUpdated": "Модель обновлена в workflow",
|
||||
"modelFailed": "Не удалось обновить узел модели"
|
||||
},
|
||||
"nodeSelector": {
|
||||
"recipe": "Рецепт",
|
||||
@@ -1313,6 +1464,10 @@
|
||||
"opened": "Папка с примерами изображений открыта",
|
||||
"openingFolder": "Открытие папки с примерами изображений",
|
||||
"failedToOpen": "Не удалось открыть папку с примерами изображений",
|
||||
"copiedPath": "Путь скопирован в буфер обмена: {{path}}",
|
||||
"clipboardFallback": "Путь: {{path}}",
|
||||
"copiedUri": "Ссылка скопирована в буфер обмена: {{uri}}",
|
||||
"uriClipboardFallback": "Ссылка: {{uri}}",
|
||||
"setupRequired": "Хранилище примеров изображений",
|
||||
"setupDescription": "Чтобы добавить собственные примеры изображений, сначала нужно установить место загрузки.",
|
||||
"setupUsage": "Этот путь используется как для загруженных, так и для пользовательских примеров изображений.",
|
||||
@@ -1450,6 +1605,7 @@
|
||||
"pleaseSelectVersion": "Пожалуйста, выберите версию",
|
||||
"versionExists": "Эта версия уже существует в вашей библиотеке",
|
||||
"downloadCompleted": "Загрузка успешно завершена",
|
||||
"downloadSkippedByBaseModel": "Загрузка пропущена, потому что базовая модель {baseModel} исключена",
|
||||
"autoOrganizeSuccess": "Автоматическая организация успешно завершена для {count} {type}",
|
||||
"autoOrganizePartialSuccess": "Автоматическая организация завершена: перемещено {success}, не удалось {failures} из {total} моделей",
|
||||
"autoOrganizeFailed": "Ошибка автоматической организации: {error}",
|
||||
@@ -1469,7 +1625,11 @@
|
||||
"nameUpdated": "Название рецепта успешно обновлено",
|
||||
"tagsUpdated": "Теги рецепта успешно обновлены",
|
||||
"sourceUrlUpdated": "Исходный URL успешно обновлен",
|
||||
"promptUpdated": "Промпт успешно обновлён",
|
||||
"negativePromptUpdated": "Негативный промпт успешно обновлён",
|
||||
"promptEditorHint": "Нажмите Enter для сохранения, Shift+Enter для новой строки",
|
||||
"noRecipeId": "ID рецепта недоступен",
|
||||
"sendToWorkflowFailed": "Не удалось отправить рецепт в рабочий процесс: {message}",
|
||||
"copyFailed": "Ошибка копирования синтаксиса рецепта: {message}",
|
||||
"noMissingLoras": "Нет отсутствующих LoRAs для загрузки",
|
||||
"missingLorasInfoFailed": "Не удалось получить информацию для отсутствующих LoRAs",
|
||||
@@ -1507,7 +1667,10 @@
|
||||
"batchImportNoUrls": "Please enter at least one URL or file path",
|
||||
"batchImportNoDirectory": "Please enter a directory path",
|
||||
"batchImportBrowseFailed": "Failed to browse directory: {message}",
|
||||
"batchImportDirectorySelected": "Directory selected: {path}"
|
||||
"batchImportDirectorySelected": "Directory selected: {path}",
|
||||
"noRecipesSelected": "Рецепты не выбраны",
|
||||
"noMissingLorasInSelection": "В выбранных рецептах не найдены отсутствующие LoRAs",
|
||||
"noLoraRootConfigured": "Корневой каталог LoRA не настроен. Пожалуйста, установите корневой каталог LoRA по умолчанию в настройках."
|
||||
},
|
||||
"models": {
|
||||
"noModelsSelected": "Модели не выбраны",
|
||||
@@ -1574,6 +1737,8 @@
|
||||
"mappingSaveFailed": "Не удалось сохранить сопоставления базовых моделей: {message}",
|
||||
"downloadTemplatesUpdated": "Шаблоны путей загрузки обновлены",
|
||||
"downloadTemplatesFailed": "Не удалось сохранить шаблоны путей загрузки: {message}",
|
||||
"recipesPathUpdated": "Путь хранения рецептов обновлён",
|
||||
"recipesPathSaveFailed": "Не удалось обновить путь хранения рецептов: {message}",
|
||||
"settingsUpdated": "Настройки обновлены: {setting}",
|
||||
"compactModeToggled": "Компактный режим {state}",
|
||||
"settingSaveFailed": "Не удалось сохранить настройку: {message}",
|
||||
@@ -1624,8 +1789,8 @@
|
||||
},
|
||||
"triggerWords": {
|
||||
"loadFailed": "Не удалось загрузить обученные слова",
|
||||
"tooLong": "Триггерное слово не должно превышать 100 слов",
|
||||
"tooMany": "Максимум 30 триггерных слов разрешено",
|
||||
"tooLong": "Триггерное слово не должно превышать 500 слов",
|
||||
"tooMany": "Максимум 100 триггерных слов разрешено",
|
||||
"alreadyExists": "Это триггерное слово уже существует",
|
||||
"updateSuccess": "Триггерные слова успешно обновлены",
|
||||
"updateFailed": "Не удалось обновить триггерные слова",
|
||||
@@ -1686,6 +1851,8 @@
|
||||
"deleteFailed": "Не удалось удалить {type}: {message}",
|
||||
"excludeSuccess": "{type} успешно исключен",
|
||||
"excludeFailed": "Не удалось исключить {type}: {message}",
|
||||
"restoreSuccess": "{type} успешно восстановлен",
|
||||
"restoreFailed": "Не удалось восстановить {type}: {message}",
|
||||
"fileNameUpdated": "Имя файла успешно обновлено",
|
||||
"fileRenameFailed": "Не удалось переименовать файл: {error}",
|
||||
"previewUpdated": "Превью успешно обновлено",
|
||||
@@ -1717,6 +1884,37 @@
|
||||
"moveFailed": "Failed to move item: {message}"
|
||||
}
|
||||
},
|
||||
"doctor": {
|
||||
"kicker": "Системная диагностика",
|
||||
"title": "Доктор",
|
||||
"buttonTitle": "Запустить диагностику и обычные исправления",
|
||||
"loading": "Проверка окружения...",
|
||||
"footer": "Экспортируйте диагностический пакет, если проблема сохраняется после исправления.",
|
||||
"summary": {
|
||||
"idle": "Выполнить проверку настроек, целостности кэша и согласованности интерфейса.",
|
||||
"ok": "В текущем окружении активных проблем не обнаружено.",
|
||||
"warning": "Обнаружено {count} проблем(ы). Большинство можно исправить прямо из этой панели.",
|
||||
"error": "Перед тем как приложение станет полностью исправным, нужно устранить {count} проблем(ы)."
|
||||
},
|
||||
"status": {
|
||||
"ok": "Исправно",
|
||||
"warning": "Требует внимания",
|
||||
"error": "Требуется действие"
|
||||
},
|
||||
"actions": {
|
||||
"runAgain": "Запустить снова",
|
||||
"exportBundle": "Экспортировать пакет"
|
||||
},
|
||||
"toast": {
|
||||
"loadFailed": "Не удалось загрузить диагностику: {message}",
|
||||
"repairSuccess": "Перестройка кэша завершена.",
|
||||
"repairFailed": "Не удалось перестроить кэш: {message}",
|
||||
"exportSuccess": "Диагностический пакет экспортирован.",
|
||||
"exportFailed": "Не удалось экспортировать диагностический пакет: {message}",
|
||||
"conflictsResolved": "Разрешено конфликтов имён файлов: {count}.",
|
||||
"conflictsResolveFailed": "Не удалось разрешить конфликты имён файлов: {message}"
|
||||
}
|
||||
},
|
||||
"banners": {
|
||||
"versionMismatch": {
|
||||
"title": "Обнаружено обновление приложения",
|
||||
|
||||
@@ -15,7 +15,8 @@
|
||||
"settings": "设置",
|
||||
"help": "帮助",
|
||||
"add": "添加",
|
||||
"close": "关闭"
|
||||
"close": "关闭",
|
||||
"menu": "菜单"
|
||||
},
|
||||
"status": {
|
||||
"loading": "加载中...",
|
||||
@@ -175,6 +176,9 @@
|
||||
"success": "成功修复了 {count} 个配方。",
|
||||
"cancelled": "修复已取消。已修复 {count} 个配方。",
|
||||
"error": "配方修复失败:{message}"
|
||||
},
|
||||
"manageExcludedModels": {
|
||||
"label": "管理已排除的模型"
|
||||
}
|
||||
},
|
||||
"header": {
|
||||
@@ -222,12 +226,14 @@
|
||||
"presetOverwriteConfirm": "预设 \"{name}\" 已存在。是否覆盖?",
|
||||
"presetNamePlaceholder": "预设名称...",
|
||||
"baseModel": "基础模型",
|
||||
"baseModelSearchPlaceholder": "搜索基础模型...",
|
||||
"modelTags": "标签(前20)",
|
||||
"modelTypes": "模型类型",
|
||||
"license": "许可证",
|
||||
"noCreditRequired": "无需署名",
|
||||
"allowSellingGeneratedContent": "允许销售",
|
||||
"noTags": "无标签",
|
||||
"noBaseModelMatches": "没有基础模型符合当前搜索。",
|
||||
"clearAll": "清除所有筛选",
|
||||
"any": "任一",
|
||||
"all": "全部",
|
||||
@@ -250,6 +256,33 @@
|
||||
"civitaiApiKey": "Civitai API 密钥",
|
||||
"civitaiApiKeyPlaceholder": "请输入你的 Civitai API 密钥",
|
||||
"civitaiApiKeyHelp": "用于从 Civitai 下载模型时的身份验证",
|
||||
"civitaiHost": {
|
||||
"label": "Civitai 站点",
|
||||
"help": "选择使用“在 Civitai 中查看”时默认打开的 Civitai 站点。",
|
||||
"options": {
|
||||
"com": "civitai.com(仅 SFW)",
|
||||
"red": "civitai.red(无限制)"
|
||||
}
|
||||
},
|
||||
"downloadBackend": {
|
||||
"label": "下载后端",
|
||||
"help": "选择模型文件的下载方式。Python 使用内置下载器。aria2 使用实验性的外部下载进程。",
|
||||
"options": {
|
||||
"python": "Python(内置)",
|
||||
"aria2": "aria2(实验性)"
|
||||
}
|
||||
},
|
||||
"aria2cPath": {
|
||||
"label": "aria2c 路径",
|
||||
"help": "可选的 aria2c 可执行文件路径。留空则使用系统 PATH 中的 aria2c。",
|
||||
"placeholder": "留空则使用 PATH 中的 aria2c"
|
||||
},
|
||||
"aria2HelpLink": "了解如何配置 aria2 下载后端",
|
||||
"civitaiHostBanner": {
|
||||
"title": "已提供 Civitai 站点偏好设置",
|
||||
"content": "Civitai 现在使用 civitai.com 提供 SFW 内容,使用 civitai.red 提供无限制内容。你可以在设置中更改默认打开的站点。",
|
||||
"openSettings": "打开设置"
|
||||
},
|
||||
"openSettingsFileLocation": {
|
||||
"label": "打开设置文件夹",
|
||||
"tooltip": "打开包含 settings.json 的文件夹",
|
||||
@@ -260,10 +293,13 @@
|
||||
},
|
||||
"sections": {
|
||||
"contentFiltering": "内容过滤",
|
||||
"downloads": "下载",
|
||||
"videoSettings": "视频设置",
|
||||
"layoutSettings": "布局设置",
|
||||
"misc": "其他",
|
||||
"backup": "备份",
|
||||
"folderSettings": "默认根目录",
|
||||
"recipeSettings": "配方",
|
||||
"extraFolderPaths": "额外文件夹路径",
|
||||
"downloadPathTemplates": "下载路径模板",
|
||||
"priorityTags": "优先标签",
|
||||
@@ -291,7 +327,15 @@
|
||||
"blurNsfwContent": "模糊 NSFW 内容",
|
||||
"blurNsfwContentHelp": "模糊成熟(NSFW)内容预览图片",
|
||||
"showOnlySfw": "仅显示 SFW 结果",
|
||||
"showOnlySfwHelp": "浏览和搜索时过滤所有 NSFW 内容"
|
||||
"showOnlySfwHelp": "浏览和搜索时过滤所有 NSFW 内容",
|
||||
"matureBlurThreshold": "成人内容模糊阈值",
|
||||
"matureBlurThresholdHelp": "设置当启用 NSFW 模糊时,从哪个评级级别开始模糊过滤。",
|
||||
"matureBlurThresholdOptions": {
|
||||
"pg13": "PG13 及以上",
|
||||
"r": "R 及以上(默认)",
|
||||
"x": "X 及以上",
|
||||
"xxx": "仅 XXX"
|
||||
}
|
||||
},
|
||||
"videoSettings": {
|
||||
"autoplayOnHover": "悬停时自动播放视频",
|
||||
@@ -315,6 +359,54 @@
|
||||
"saveFailed": "无法保存跳过路径:{message}"
|
||||
}
|
||||
},
|
||||
"backup": {
|
||||
"autoEnabled": "自动备份",
|
||||
"autoEnabledHelp": "每天创建一次本地快照,并按保留策略保留最新快照。",
|
||||
"retention": "保留数量",
|
||||
"retentionHelp": "在删除旧快照之前,要保留多少个自动快照。",
|
||||
"management": "备份管理",
|
||||
"managementHelp": "导出当前用户状态,或从备份归档中恢复。",
|
||||
"scopeHelp": "备份你的设置、下载历史和模型更新状态。不包含模型文件或可重建的缓存。",
|
||||
"locationSummary": "当前备份位置",
|
||||
"openFolderButton": "打开备份文件夹",
|
||||
"openFolderSuccess": "已打开备份文件夹",
|
||||
"openFolderFailed": "无法打开备份文件夹",
|
||||
"locationCopied": "备份路径已复制到剪贴板:{{path}}",
|
||||
"locationClipboardFallback": "备份路径:{{path}}",
|
||||
"exportButton": "导出备份",
|
||||
"exportSuccess": "备份导出成功。",
|
||||
"exportFailed": "备份导出失败:{message}",
|
||||
"importButton": "导入备份",
|
||||
"importConfirm": "导入此备份并覆盖本地用户状态吗?",
|
||||
"importSuccess": "备份导入成功。",
|
||||
"importFailed": "备份导入失败:{message}",
|
||||
"latestSnapshot": "最近快照",
|
||||
"latestAutoSnapshot": "最近自动快照",
|
||||
"snapshotCount": "已保存快照",
|
||||
"noneAvailable": "还没有快照"
|
||||
},
|
||||
"downloadSkipBaseModels": {
|
||||
"label": "跳过这些基础模型的下载",
|
||||
"help": "适用于所有下载流程。这里只能选择受支持的基础模型。",
|
||||
"searchPlaceholder": "筛选基础模型...",
|
||||
"empty": "没有与当前搜索匹配的基础模型。",
|
||||
"summary": {
|
||||
"none": "未选择",
|
||||
"count": "已选择 {count} 项"
|
||||
},
|
||||
"actions": {
|
||||
"edit": "编辑",
|
||||
"collapse": "收起",
|
||||
"clear": "清空"
|
||||
},
|
||||
"validation": {
|
||||
"saveFailed": "无法保存已排除的基础模型:{message}"
|
||||
}
|
||||
},
|
||||
"skipPreviouslyDownloadedModelVersions": {
|
||||
"label": "跳过已下载的模型版本",
|
||||
"help": "启用后,如果下载历史服务记录显示该版本已下载,LoRA Manager 将跳过下载该模型版本。适用于所有下载流程。"
|
||||
},
|
||||
"layoutSettings": {
|
||||
"displayDensity": "显示密度",
|
||||
"displayDensityOptions": {
|
||||
@@ -337,6 +429,8 @@
|
||||
"hover": "悬停时显示"
|
||||
},
|
||||
"cardInfoDisplayHelp": "选择何时显示模型信息和操作按钮",
|
||||
"showVersionOnCard": "在卡片上显示版本",
|
||||
"showVersionOnCardHelp": "在模型卡片上显示或隐藏版本名称",
|
||||
"modelCardFooterAction": "模型卡片按钮操作",
|
||||
"modelCardFooterActionOptions": {
|
||||
"exampleImages": "打开示例图片",
|
||||
@@ -363,12 +457,16 @@
|
||||
"defaultUnetRootHelp": "设置下载、导入和移动时的默认 Diffusion Model (UNET) 根目录",
|
||||
"defaultEmbeddingRoot": "Embedding 根目录",
|
||||
"defaultEmbeddingRootHelp": "设置下载、导入和移动时的默认 Embedding 根目录",
|
||||
"recipesPath": "配方存储路径",
|
||||
"recipesPathHelp": "已保存配方的可选自定义目录。留空则使用第一个 LoRA 根目录下的 recipes 文件夹。",
|
||||
"recipesPathPlaceholder": "/path/to/recipes",
|
||||
"recipesPathMigrating": "正在迁移配方存储...",
|
||||
"noDefault": "无默认"
|
||||
},
|
||||
"extraFolderPaths": {
|
||||
"title": "额外文件夹路径",
|
||||
"help": "在 ComfyUI 的标准路径之外添加额外的模型文件夹。这些路径单独存储,并与默认文件夹一起扫描。",
|
||||
"description": "配置额外的文件夹以扫描模型。这些路径是 LoRA Manager 特有的,将与 ComfyUI 的默认路径合并。",
|
||||
"description": "LoRA Manager 专属的额外模型根目录。从 ComfyUI 标准文件夹之外的位置加载模型,特别适合管理大型模型库,避免影响 ComfyUI 性能。",
|
||||
"restartRequired": "需要重启才能生效",
|
||||
"modelTypes": {
|
||||
"lora": "LoRA 路径",
|
||||
"checkpoint": "Checkpoint 路径",
|
||||
@@ -376,7 +474,7 @@
|
||||
"embedding": "Embedding 路径"
|
||||
},
|
||||
"pathPlaceholder": "/额外/模型/路径",
|
||||
"saveSuccess": "额外文件夹路径已更新。",
|
||||
"saveSuccess": "额外文件夹路径已更新,需要重启才能生效。",
|
||||
"saveError": "更新额外文件夹路径失败:{message}",
|
||||
"validation": {
|
||||
"duplicatePath": "此路径已配置"
|
||||
@@ -444,6 +542,21 @@
|
||||
"downloadLocationHelp": "输入保存从 Civitai 下载的示例图片的文件夹路径",
|
||||
"autoDownload": "自动下载示例图片",
|
||||
"autoDownloadHelp": "自动为没有示例图片的模型下载示例图片(需设置下载位置)",
|
||||
"openMode": "打开示例图片操作",
|
||||
"openModeHelp": "选择是在服务器上打开、复制映射后的本地路径,还是启动自定义 URI。",
|
||||
"openModeOptions": {
|
||||
"system": "在服务器上打开",
|
||||
"clipboard": "复制本地路径",
|
||||
"uriTemplate": "打开自定义 URI"
|
||||
},
|
||||
"localRoot": "本地示例图片根目录",
|
||||
"localRootHelp": "可选的本地或挂载根目录,用于映射服务器上的示例图片目录。若留空,则复用服务器路径。",
|
||||
"localRootPlaceholder": "例如:/Volumes/ComfyUI/example_images",
|
||||
"uriTemplate": "打开 URI 模板",
|
||||
"uriTemplateHelp": "使用自定义深链接,例如文件 URI 或 Shortcuts 链接。",
|
||||
"uriTemplatePlaceholder": "例如:shortcuts://run-shortcut?name=Open%20Finder&input=text&text={{encoded_local_path}}",
|
||||
"uriTemplatePlaceholders": "可用占位符:{{local_path}}、{{encoded_local_path}}、{{relative_path}}、{{encoded_relative_path}}、{{file_uri}}、{{encoded_file_uri}}",
|
||||
"openModeWikiLink": "了解远程打开模式",
|
||||
"optimizeImages": "优化下载图片",
|
||||
"optimizeImagesHelp": "优化示例图片以减少文件大小并提升加载速度(保留元数据)",
|
||||
"download": "下载",
|
||||
@@ -574,7 +687,8 @@
|
||||
"autoOrganize": "自动整理所选模型",
|
||||
"skipMetadataRefresh": "跳过所选模型的元数据刷新",
|
||||
"resumeMetadataRefresh": "恢复所选模型的元数据刷新",
|
||||
"deleteAll": "删除选中模型",
|
||||
"deleteAll": "删除已选",
|
||||
"downloadMissingLoras": "下载缺失的 LoRAs",
|
||||
"clear": "清除选择",
|
||||
"skipMetadataRefreshCount": "跳过({count} 个模型)",
|
||||
"resumeMetadataRefreshCount": "恢复({count} 个模型)",
|
||||
@@ -604,6 +718,7 @@
|
||||
"moveToFolder": "移动到文件夹",
|
||||
"repairMetadata": "修复元数据",
|
||||
"excludeModel": "排除模型",
|
||||
"restoreModel": "恢复模型",
|
||||
"deleteModel": "删除模型",
|
||||
"shareRecipe": "分享配方",
|
||||
"viewAllLoras": "查看所有 LoRA",
|
||||
@@ -622,9 +737,9 @@
|
||||
"title": "从图片或 URL 导入配方",
|
||||
"urlLocalPath": "URL / 本地路径",
|
||||
"uploadImage": "上传图片",
|
||||
"urlSectionDescription": "输入 Civitai 图片 URL 或本地文件路径以导入为配方。",
|
||||
"urlSectionDescription": "输入来自 civitai.com 或 civitai.red 的 Civitai 图片 URL,或本地文件路径以导入为配方。",
|
||||
"imageUrlOrPath": "图片 URL 或文件路径:",
|
||||
"urlPlaceholder": "https://civitai.com/images/... 或 C:/path/to/image.png",
|
||||
"urlPlaceholder": "https://civitai.com/images/... 或 https://civitai.red/images/... 或 C:/path/to/image.png",
|
||||
"fetchImage": "获取图片",
|
||||
"uploadSectionDescription": "上传带有 LoRA 元数据的图片以导入为配方。",
|
||||
"selectImage": "选择图片",
|
||||
@@ -799,7 +914,8 @@
|
||||
"diffusion_model": "Diffusion Model"
|
||||
},
|
||||
"contextMenu": {
|
||||
"moveToOtherTypeFolder": "移动到 {otherType} 文件夹"
|
||||
"moveToOtherTypeFolder": "移动到 {otherType} 文件夹",
|
||||
"sendToWorkflow": "发送到工作流"
|
||||
}
|
||||
},
|
||||
"embeddings": {
|
||||
@@ -812,8 +928,8 @@
|
||||
"unpinSidebar": "取消固定侧边栏",
|
||||
"switchToListView": "切换到列表视图",
|
||||
"switchToTreeView": "切换到树状视图",
|
||||
"recursiveOn": "搜索子文件夹",
|
||||
"recursiveOff": "仅搜索当前文件夹",
|
||||
"recursiveOn": "包含子文件夹",
|
||||
"recursiveOff": "仅当前文件夹",
|
||||
"recursiveUnavailable": "仅在树形视图中可使用递归搜索",
|
||||
"collapseAllDisabled": "列表视图下不可用",
|
||||
"dragDrop": {
|
||||
@@ -893,6 +1009,8 @@
|
||||
"earlyAccess": "早期访问",
|
||||
"earlyAccessTooltip": "需要早期访问权限",
|
||||
"inLibrary": "已在库中",
|
||||
"downloaded": "已下载",
|
||||
"downloadedTooltip": "之前已下载,但当前不在你的库中。",
|
||||
"alreadyInLibrary": "已存在于库中",
|
||||
"autoOrganizedPath": "【已按路径模板自动整理】",
|
||||
"errors": {
|
||||
@@ -983,6 +1101,14 @@
|
||||
"save": "更新基础模型",
|
||||
"cancel": "取消"
|
||||
},
|
||||
"bulkDownloadMissingLoras": {
|
||||
"title": "下载缺失的 LoRAs",
|
||||
"message": "发现 {uniqueCount} 个独特的缺失 LoRAs(从选定配方中的 {totalCount} 个总数)。",
|
||||
"previewTitle": "要下载的 LoRAs:",
|
||||
"moreItems": "...还有 {count} 个",
|
||||
"note": "文件将使用默认路径模板下载。根据 LoRAs 的数量,这可能需要一些时间。",
|
||||
"downloadButton": "下载 {count} 个 LoRA(s)"
|
||||
},
|
||||
"exampleAccess": {
|
||||
"title": "本地示例图片",
|
||||
"message": "未找到此模型的本地示例图片。可选操作:",
|
||||
@@ -1016,9 +1142,9 @@
|
||||
},
|
||||
"proceedText": "仅在你确定需要此操作时继续。",
|
||||
"urlLabel": "Civitai 模型 URL:",
|
||||
"urlPlaceholder": "https://civitai.com/models/649516/model-name?modelVersionId=726676",
|
||||
"urlPlaceholder": "https://civitai.com/models/649516/model-name?modelVersionId=726676 或 https://civitai.red/models/649516/model-name?modelVersionId=726676",
|
||||
"helpText": {
|
||||
"title": "粘贴任意 Civitai 模型 URL。支持格式:",
|
||||
"title": "粘贴任意来自 civitai.com 或 civitai.red 的 Civitai 模型 URL。支持格式:",
|
||||
"format1": "https://civitai.com/models/649516",
|
||||
"format2": "https://civitai.com/models/649516?modelVersionId=726676",
|
||||
"format3": "https://civitai.com/models/649516/model-name?modelVersionId=726676",
|
||||
@@ -1034,7 +1160,9 @@
|
||||
"viewOnCivitai": "在 Civitai 查看",
|
||||
"viewOnCivitaiText": "在 Civitai 查看",
|
||||
"viewCreatorProfile": "查看创作者主页",
|
||||
"openFileLocation": "打开文件位置"
|
||||
"openFileLocation": "打开文件位置",
|
||||
"sendToWorkflow": "发送到 ComfyUI",
|
||||
"sendToWorkflowText": "发送到 ComfyUI"
|
||||
},
|
||||
"openFileLocation": {
|
||||
"success": "文件位置已成功打开",
|
||||
@@ -1042,6 +1170,9 @@
|
||||
"copied": "路径已复制到剪贴板:{{path}}",
|
||||
"clipboardFallback": "路径:{{path}}"
|
||||
},
|
||||
"sendToWorkflow": {
|
||||
"noFilePath": "无法发送到 ComfyUI:没有可用的文件路径"
|
||||
},
|
||||
"metadata": {
|
||||
"version": "版本",
|
||||
"fileName": "文件名",
|
||||
@@ -1078,6 +1209,8 @@
|
||||
"cancel": "取消编辑",
|
||||
"save": "保存更改",
|
||||
"addPlaceholder": "输入或点击下方建议添加",
|
||||
"editWord": "编辑触发词",
|
||||
"editPlaceholder": "编辑触发词",
|
||||
"copyWord": "复制触发词",
|
||||
"deleteWord": "删除触发词",
|
||||
"suggestions": {
|
||||
@@ -1149,17 +1282,33 @@
|
||||
"days": "{count}天后"
|
||||
},
|
||||
"badges": {
|
||||
"current": "当前版本",
|
||||
"current": "已打开版本",
|
||||
"currentTooltip": "这是你用来打开此弹窗的版本",
|
||||
"inLibrary": "已在库中",
|
||||
"inLibraryTooltip": "此版本已存在于你的本地库中",
|
||||
"downloaded": "已下载",
|
||||
"downloadedTooltip": "此版本之前下载过,但当前不在你的本地库中",
|
||||
"newer": "较新的版本",
|
||||
"newerTooltip": "此版本比你本地的最新版本更新",
|
||||
"earlyAccess": "抢先体验",
|
||||
"ignored": "已忽略"
|
||||
"earlyAccessTooltip": "此版本当前需要 Civitai 抢先体验权限",
|
||||
"ignored": "已忽略",
|
||||
"ignoredTooltip": "此版本已关闭更新通知",
|
||||
"onSiteOnly": "仅站内生成",
|
||||
"onSiteOnlyTooltip": "此版本仅在 Civitai 站内可用,无法下载"
|
||||
},
|
||||
"actions": {
|
||||
"download": "下载",
|
||||
"downloadTooltip": "下载此版本",
|
||||
"downloadEarlyAccessTooltip": "从 Civitai 下载此抢先体验版本",
|
||||
"downloadNotAllowedTooltip": "此版本仅在 Civitai 站内可用,无法下载",
|
||||
"delete": "删除",
|
||||
"deleteTooltip": "删除此本地版本",
|
||||
"ignore": "忽略",
|
||||
"unignore": "取消忽略",
|
||||
"ignoreTooltip": "忽略此版本的更新通知",
|
||||
"unignoreTooltip": "恢复此版本的更新通知",
|
||||
"viewVersionOnCivitai": "在 Civitai 上查看版本",
|
||||
"earlyAccessTooltip": "需要购买抢先体验",
|
||||
"resumeModelUpdates": "继续跟踪该模型的更新",
|
||||
"ignoreModelUpdates": "忽略该模型的更新",
|
||||
@@ -1299,7 +1448,9 @@
|
||||
"recipeReplaced": "配方已替换到工作流",
|
||||
"recipeFailedToSend": "发送配方到工作流失败",
|
||||
"noMatchingNodes": "当前工作流中没有兼容的节点",
|
||||
"noTargetNodeSelected": "未选择目标节点"
|
||||
"noTargetNodeSelected": "未选择目标节点",
|
||||
"modelUpdated": "模型已更新到工作流",
|
||||
"modelFailed": "更新模型节点失败"
|
||||
},
|
||||
"nodeSelector": {
|
||||
"recipe": "配方",
|
||||
@@ -1313,6 +1464,10 @@
|
||||
"opened": "示例图片文件夹已打开",
|
||||
"openingFolder": "正在打开示例图片文件夹",
|
||||
"failedToOpen": "打开示例图片文件夹失败",
|
||||
"copiedPath": "路径已复制到剪贴板:{{path}}",
|
||||
"clipboardFallback": "路径:{{path}}",
|
||||
"copiedUri": "链接已复制到剪贴板:{{uri}}",
|
||||
"uriClipboardFallback": "链接:{{uri}}",
|
||||
"setupRequired": "示例图片存储",
|
||||
"setupDescription": "要添加自定义示例图片,您需要先设置下载位置。",
|
||||
"setupUsage": "此路径用于存储下载的示例图片和自定义图片。",
|
||||
@@ -1450,6 +1605,7 @@
|
||||
"pleaseSelectVersion": "请选择版本",
|
||||
"versionExists": "该版本已存在于你的库中",
|
||||
"downloadCompleted": "下载成功完成",
|
||||
"downloadSkippedByBaseModel": "由于基础模型 {baseModel} 已被排除,已跳过下载",
|
||||
"autoOrganizeSuccess": "自动整理已成功完成,共 {count} 个 {type}",
|
||||
"autoOrganizePartialSuccess": "自动整理完成:已移动 {success} 个,{failures} 个失败,共 {total} 个模型",
|
||||
"autoOrganizeFailed": "自动整理失败:{error}",
|
||||
@@ -1469,7 +1625,11 @@
|
||||
"nameUpdated": "配方名称更新成功",
|
||||
"tagsUpdated": "配方标签更新成功",
|
||||
"sourceUrlUpdated": "来源 URL 更新成功",
|
||||
"promptUpdated": "提示词更新成功",
|
||||
"negativePromptUpdated": "负面提示词更新成功",
|
||||
"promptEditorHint": "按 Enter 保存,Shift+Enter 换行",
|
||||
"noRecipeId": "无配方 ID",
|
||||
"sendToWorkflowFailed": "发送配方到工作流失败:{message}",
|
||||
"copyFailed": "复制配方语法出错:{message}",
|
||||
"noMissingLoras": "没有缺失的 LoRA 可下载",
|
||||
"missingLorasInfoFailed": "获取缺失 LoRA 信息失败",
|
||||
@@ -1507,7 +1667,10 @@
|
||||
"batchImportNoUrls": "请输入至少一个 URL 或文件路径",
|
||||
"batchImportNoDirectory": "请输入目录路径",
|
||||
"batchImportBrowseFailed": "浏览目录失败:{message}",
|
||||
"batchImportDirectorySelected": "已选择目录:{path}"
|
||||
"batchImportDirectorySelected": "已选择目录:{path}",
|
||||
"noRecipesSelected": "未选择任何配方",
|
||||
"noMissingLorasInSelection": "在选定的配方中未找到缺失的 LoRAs",
|
||||
"noLoraRootConfigured": "未配置 LoRA 根目录。请在设置中设置默认的 LoRA 根目录。"
|
||||
},
|
||||
"models": {
|
||||
"noModelsSelected": "未选中模型",
|
||||
@@ -1574,6 +1737,8 @@
|
||||
"mappingSaveFailed": "保存基础模型映射失败:{message}",
|
||||
"downloadTemplatesUpdated": "下载路径模板已更新",
|
||||
"downloadTemplatesFailed": "保存下载路径模板失败:{message}",
|
||||
"recipesPathUpdated": "配方存储路径已更新",
|
||||
"recipesPathSaveFailed": "更新配方存储路径失败:{message}",
|
||||
"settingsUpdated": "设置已更新:{setting}",
|
||||
"compactModeToggled": "紧凑模式 {state}",
|
||||
"settingSaveFailed": "保存设置失败:{message}",
|
||||
@@ -1624,8 +1789,8 @@
|
||||
},
|
||||
"triggerWords": {
|
||||
"loadFailed": "无法加载训练词",
|
||||
"tooLong": "触发词不能超过100个词",
|
||||
"tooMany": "最多允许30个触发词",
|
||||
"tooLong": "触发词不能超过500个词",
|
||||
"tooMany": "最多允许100个触发词",
|
||||
"alreadyExists": "该触发词已存在",
|
||||
"updateSuccess": "触发词更新成功",
|
||||
"updateFailed": "触发词更新失败",
|
||||
@@ -1686,6 +1851,8 @@
|
||||
"deleteFailed": "删除 {type} 失败:{message}",
|
||||
"excludeSuccess": "{type} 排除成功",
|
||||
"excludeFailed": "排除 {type} 失败:{message}",
|
||||
"restoreSuccess": "{type} 已成功恢复",
|
||||
"restoreFailed": "恢复 {type} 失败:{message}",
|
||||
"fileNameUpdated": "文件名更新成功",
|
||||
"fileRenameFailed": "重命名文件失败:{error}",
|
||||
"previewUpdated": "预览图片更新成功",
|
||||
@@ -1717,6 +1884,37 @@
|
||||
"moveFailed": "Failed to move item: {message}"
|
||||
}
|
||||
},
|
||||
"doctor": {
|
||||
"kicker": "系统诊断",
|
||||
"title": "医生",
|
||||
"buttonTitle": "运行诊断并尝试修复常见问题",
|
||||
"loading": "正在检查当前环境...",
|
||||
"footer": "如果修复后问题仍然存在,可以导出诊断包进一步排查。",
|
||||
"summary": {
|
||||
"idle": "检查设置、缓存健康状况和前后端 UI 版本是否一致。",
|
||||
"ok": "当前环境未发现活动问题。",
|
||||
"warning": "发现 {count} 个问题,大多数可以直接在这里处理。",
|
||||
"error": "发现 {count} 个需要尽快处理的问题。"
|
||||
},
|
||||
"status": {
|
||||
"ok": "健康",
|
||||
"warning": "需要关注",
|
||||
"error": "需要处理"
|
||||
},
|
||||
"actions": {
|
||||
"runAgain": "重新检查",
|
||||
"exportBundle": "导出诊断包"
|
||||
},
|
||||
"toast": {
|
||||
"loadFailed": "加载诊断结果失败:{message}",
|
||||
"repairSuccess": "缓存重建完成。",
|
||||
"repairFailed": "缓存重建失败:{message}",
|
||||
"exportSuccess": "诊断包已导出。",
|
||||
"exportFailed": "导出诊断包失败:{message}",
|
||||
"conflictsResolved": "已解决 {count} 个文件名冲突。",
|
||||
"conflictsResolveFailed": "解决文件名冲突失败:{message}"
|
||||
}
|
||||
},
|
||||
"banners": {
|
||||
"versionMismatch": {
|
||||
"title": "检测到应用更新",
|
||||
|
||||
@@ -15,7 +15,8 @@
|
||||
"settings": "設定",
|
||||
"help": "說明",
|
||||
"add": "新增",
|
||||
"close": "關閉"
|
||||
"close": "關閉",
|
||||
"menu": "選單"
|
||||
},
|
||||
"status": {
|
||||
"loading": "載入中...",
|
||||
@@ -175,6 +176,9 @@
|
||||
"success": "成功修復 {count} 個配方。",
|
||||
"cancelled": "修復已取消。已修復 {count} 個配方。",
|
||||
"error": "配方修復失敗:{message}"
|
||||
},
|
||||
"manageExcludedModels": {
|
||||
"label": "管理已排除的模型"
|
||||
}
|
||||
},
|
||||
"header": {
|
||||
@@ -222,12 +226,14 @@
|
||||
"presetOverwriteConfirm": "預設 \"{name}\" 已存在。是否覆蓋?",
|
||||
"presetNamePlaceholder": "預設名稱...",
|
||||
"baseModel": "基礎模型",
|
||||
"baseModelSearchPlaceholder": "搜尋基礎模型...",
|
||||
"modelTags": "標籤(前 20)",
|
||||
"modelTypes": "模型類型",
|
||||
"license": "授權",
|
||||
"noCreditRequired": "無需署名",
|
||||
"allowSellingGeneratedContent": "允許銷售",
|
||||
"noTags": "無標籤",
|
||||
"noBaseModelMatches": "沒有基礎模型符合目前的搜尋。",
|
||||
"clearAll": "清除所有篩選",
|
||||
"any": "任一",
|
||||
"all": "全部",
|
||||
@@ -250,6 +256,33 @@
|
||||
"civitaiApiKey": "Civitai API 金鑰",
|
||||
"civitaiApiKeyPlaceholder": "請輸入您的 Civitai API 金鑰",
|
||||
"civitaiApiKeyHelp": "用於從 Civitai 下載模型時的身份驗證",
|
||||
"civitaiHost": {
|
||||
"label": "Civitai 站點",
|
||||
"help": "選擇使用「在 Civitai 中查看」時預設開啟的 Civitai 站點。",
|
||||
"options": {
|
||||
"com": "civitai.com(僅 SFW)",
|
||||
"red": "civitai.red(無限制)"
|
||||
}
|
||||
},
|
||||
"downloadBackend": {
|
||||
"label": "下載後端",
|
||||
"help": "選擇模型檔案的下載方式。Python 使用內建下載器。aria2 使用實驗性的外部下載程序。",
|
||||
"options": {
|
||||
"python": "Python(內建)",
|
||||
"aria2": "aria2(實驗性)"
|
||||
}
|
||||
},
|
||||
"aria2cPath": {
|
||||
"label": "aria2c 路徑",
|
||||
"help": "可選的 aria2c 可執行檔路徑。留空則使用系統 PATH 中的 aria2c。",
|
||||
"placeholder": "留空則使用 PATH 中的 aria2c"
|
||||
},
|
||||
"aria2HelpLink": "了解如何設定 aria2 下載後端",
|
||||
"civitaiHostBanner": {
|
||||
"title": "已提供 Civitai 站點偏好設定",
|
||||
"content": "Civitai 現在使用 civitai.com 提供 SFW 內容,使用 civitai.red 提供無限制內容。你可以在設定中變更預設開啟的站點。",
|
||||
"openSettings": "開啟設定"
|
||||
},
|
||||
"openSettingsFileLocation": {
|
||||
"label": "開啟設定資料夾",
|
||||
"tooltip": "開啟包含 settings.json 的資料夾",
|
||||
@@ -260,10 +293,13 @@
|
||||
},
|
||||
"sections": {
|
||||
"contentFiltering": "內容過濾",
|
||||
"downloads": "下載",
|
||||
"videoSettings": "影片設定",
|
||||
"layoutSettings": "版面設定",
|
||||
"misc": "其他",
|
||||
"backup": "備份",
|
||||
"folderSettings": "預設根目錄",
|
||||
"recipeSettings": "配方",
|
||||
"extraFolderPaths": "額外資料夾路徑",
|
||||
"downloadPathTemplates": "下載路徑範本",
|
||||
"priorityTags": "優先標籤",
|
||||
@@ -291,7 +327,15 @@
|
||||
"blurNsfwContent": "模糊 NSFW 內容",
|
||||
"blurNsfwContentHelp": "模糊成熟(NSFW)內容預覽圖片",
|
||||
"showOnlySfw": "僅顯示 SFW 結果",
|
||||
"showOnlySfwHelp": "瀏覽和搜尋時過濾所有 NSFW 內容"
|
||||
"showOnlySfwHelp": "瀏覽和搜尋時過濾所有 NSFW 內容",
|
||||
"matureBlurThreshold": "成人內容模糊閾值",
|
||||
"matureBlurThresholdHelp": "設定當啟用 NSFW 模糊時,從哪個評級級別開始模糊過濾。",
|
||||
"matureBlurThresholdOptions": {
|
||||
"pg13": "PG13 及以上",
|
||||
"r": "R 及以上(預設)",
|
||||
"x": "X 及以上",
|
||||
"xxx": "僅 XXX"
|
||||
}
|
||||
},
|
||||
"videoSettings": {
|
||||
"autoplayOnHover": "滑鼠懸停自動播放影片",
|
||||
@@ -315,6 +359,54 @@
|
||||
"saveFailed": "無法儲存跳過路徑:{message}"
|
||||
}
|
||||
},
|
||||
"backup": {
|
||||
"autoEnabled": "自動備份",
|
||||
"autoEnabledHelp": "每天建立一次本地快照,並依保留政策保留最新快照。",
|
||||
"retention": "保留數量",
|
||||
"retentionHelp": "在刪除舊快照之前,要保留多少自動快照。",
|
||||
"management": "備份管理",
|
||||
"managementHelp": "匯出目前的使用者狀態,或從備份封存中還原。",
|
||||
"scopeHelp": "備份你的設定、下載歷史與模型更新狀態。不包含模型檔案或可重建的快取。",
|
||||
"locationSummary": "目前備份位置",
|
||||
"openFolderButton": "開啟備份資料夾",
|
||||
"openFolderSuccess": "已開啟備份資料夾",
|
||||
"openFolderFailed": "無法開啟備份資料夾",
|
||||
"locationCopied": "備份路徑已複製到剪貼簿:{{path}}",
|
||||
"locationClipboardFallback": "備份路徑:{{path}}",
|
||||
"exportButton": "匯出備份",
|
||||
"exportSuccess": "備份匯出成功。",
|
||||
"exportFailed": "備份匯出失敗:{message}",
|
||||
"importButton": "匯入備份",
|
||||
"importConfirm": "要匯入此備份並覆寫本機使用者狀態嗎?",
|
||||
"importSuccess": "備份匯入成功。",
|
||||
"importFailed": "備份匯入失敗:{message}",
|
||||
"latestSnapshot": "最近快照",
|
||||
"latestAutoSnapshot": "最近自動快照",
|
||||
"snapshotCount": "已儲存快照",
|
||||
"noneAvailable": "目前還沒有快照"
|
||||
},
|
||||
"downloadSkipBaseModels": {
|
||||
"label": "跳過這些基礎模型的下載",
|
||||
"help": "適用於所有下載流程。這裡只能選擇受支援的基礎模型。",
|
||||
"searchPlaceholder": "篩選基礎模型...",
|
||||
"empty": "沒有符合目前搜尋條件的基礎模型。",
|
||||
"summary": {
|
||||
"none": "未選擇",
|
||||
"count": "已選擇 {count} 項"
|
||||
},
|
||||
"actions": {
|
||||
"edit": "編輯",
|
||||
"collapse": "收起",
|
||||
"clear": "清空"
|
||||
},
|
||||
"validation": {
|
||||
"saveFailed": "無法儲存已排除的基礎模型:{message}"
|
||||
}
|
||||
},
|
||||
"skipPreviouslyDownloadedModelVersions": {
|
||||
"label": "跳過已下載的模型版本",
|
||||
"help": "啟用後,如果下載歷史服務記錄顯示該版本已下載,LoRA Manager 將跳過下載該模型版本。適用於所有下載流程。"
|
||||
},
|
||||
"layoutSettings": {
|
||||
"displayDensity": "顯示密度",
|
||||
"displayDensityOptions": {
|
||||
@@ -337,6 +429,8 @@
|
||||
"hover": "滑鼠懸停顯示"
|
||||
},
|
||||
"cardInfoDisplayHelp": "選擇何時顯示模型資訊與操作按鈕",
|
||||
"showVersionOnCard": "在卡片上顯示版本",
|
||||
"showVersionOnCardHelp": "在模型卡片上顯示或隱藏版本名稱",
|
||||
"modelCardFooterAction": "模型卡片按鈕操作",
|
||||
"modelCardFooterActionOptions": {
|
||||
"exampleImages": "開啟範例圖片",
|
||||
@@ -363,12 +457,16 @@
|
||||
"defaultUnetRootHelp": "設定下載、匯入和移動時的預設 Diffusion Model (UNET) 根目錄",
|
||||
"defaultEmbeddingRoot": "Embedding 根目錄",
|
||||
"defaultEmbeddingRootHelp": "設定下載、匯入和移動時的預設 Embedding 根目錄",
|
||||
"recipesPath": "配方儲存路徑",
|
||||
"recipesPathHelp": "已儲存配方的可選自訂目錄。留空則使用第一個 LoRA 根目錄下的 recipes 資料夾。",
|
||||
"recipesPathPlaceholder": "/path/to/recipes",
|
||||
"recipesPathMigrating": "正在遷移配方儲存...",
|
||||
"noDefault": "未設定預設"
|
||||
},
|
||||
"extraFolderPaths": {
|
||||
"title": "額外資料夾路徑",
|
||||
"help": "在 ComfyUI 的標準路徑之外新增額外的模型資料夾。這些路徑單獨儲存,並與預設資料夾一起掃描。",
|
||||
"description": "設定額外的資料夾以掃描模型。這些路徑是 LoRA Manager 特有的,將與 ComfyUI 的預設路徑合併。",
|
||||
"description": "LoRA Manager 專屬的額外模型根目錄。從 ComfyUI 標準資料夾之外的位置載入模型,特別適合管理大型模型庫,避免影響 ComfyUI 效能。",
|
||||
"restartRequired": "Requires restart to take effect",
|
||||
"modelTypes": {
|
||||
"lora": "LoRA 路徑",
|
||||
"checkpoint": "Checkpoint 路徑",
|
||||
@@ -376,7 +474,7 @@
|
||||
"embedding": "Embedding 路徑"
|
||||
},
|
||||
"pathPlaceholder": "/額外/模型/路徑",
|
||||
"saveSuccess": "額外資料夾路徑已更新。",
|
||||
"saveSuccess": "額外資料夾路徑已更新,需要重啟才能生效。",
|
||||
"saveError": "更新額外資料夾路徑失敗:{message}",
|
||||
"validation": {
|
||||
"duplicatePath": "此路徑已設定"
|
||||
@@ -444,6 +542,21 @@
|
||||
"downloadLocationHelp": "輸入從 Civitai 下載範例圖片要儲存的資料夾路徑",
|
||||
"autoDownload": "自動下載範例圖片",
|
||||
"autoDownloadHelp": "自動為沒有範例圖片的模型下載範例圖片(需設定下載位置)",
|
||||
"openMode": "開啟範例圖片動作",
|
||||
"openModeHelp": "選擇是在伺服器上開啟、複製對應的本機路徑,或啟動自訂 URI。",
|
||||
"openModeOptions": {
|
||||
"system": "在伺服器上開啟",
|
||||
"clipboard": "複製本機路徑",
|
||||
"uriTemplate": "開啟自訂 URI"
|
||||
},
|
||||
"localRoot": "本機範例圖片根目錄",
|
||||
"localRootHelp": "可選的本機或掛載根目錄,用於對應伺服器上的範例圖片目錄。若留白,則會重用伺服器路徑。",
|
||||
"localRootPlaceholder": "例如:/Volumes/ComfyUI/example_images",
|
||||
"uriTemplate": "開啟 URI 範本",
|
||||
"uriTemplateHelp": "使用自訂深層連結,例如檔案 URI 或 Shortcuts 連結。",
|
||||
"uriTemplatePlaceholder": "例如:shortcuts://run-shortcut?name=Open%20Finder&input=text&text={{encoded_local_path}}",
|
||||
"uriTemplatePlaceholders": "可用佔位符:{{local_path}}、{{encoded_local_path}}、{{relative_path}}、{{encoded_relative_path}}、{{file_uri}}、{{encoded_file_uri}}",
|
||||
"openModeWikiLink": "了解遠端開啟模式",
|
||||
"optimizeImages": "最佳化下載圖片",
|
||||
"optimizeImagesHelp": "最佳化範例圖片以減少檔案大小並提升載入速度(會保留原有的 metadata)",
|
||||
"download": "下載",
|
||||
@@ -574,7 +687,8 @@
|
||||
"autoOrganize": "自動整理所選模型",
|
||||
"skipMetadataRefresh": "跳過所選模型的元數據更新",
|
||||
"resumeMetadataRefresh": "恢復所選模型的元數據更新",
|
||||
"deleteAll": "刪除全部模型",
|
||||
"deleteAll": "刪除所選",
|
||||
"downloadMissingLoras": "下載缺失的 LoRAs",
|
||||
"clear": "清除選取",
|
||||
"skipMetadataRefreshCount": "跳過({count} 個模型)",
|
||||
"resumeMetadataRefreshCount": "恢復({count} 個模型)",
|
||||
@@ -604,6 +718,7 @@
|
||||
"moveToFolder": "移動到資料夾",
|
||||
"repairMetadata": "修復元數據",
|
||||
"excludeModel": "排除模型",
|
||||
"restoreModel": "還原模型",
|
||||
"deleteModel": "刪除模型",
|
||||
"shareRecipe": "分享配方",
|
||||
"viewAllLoras": "檢視全部 LoRA",
|
||||
@@ -799,7 +914,8 @@
|
||||
"diffusion_model": "Diffusion Model"
|
||||
},
|
||||
"contextMenu": {
|
||||
"moveToOtherTypeFolder": "移動到 {otherType} 資料夾"
|
||||
"moveToOtherTypeFolder": "移動到 {otherType} 資料夾",
|
||||
"sendToWorkflow": "傳送到工作流"
|
||||
}
|
||||
},
|
||||
"embeddings": {
|
||||
@@ -812,8 +928,8 @@
|
||||
"unpinSidebar": "取消固定側邊欄",
|
||||
"switchToListView": "切換至列表檢視",
|
||||
"switchToTreeView": "切換到樹狀檢視",
|
||||
"recursiveOn": "搜尋子資料夾",
|
||||
"recursiveOff": "僅搜尋目前資料夾",
|
||||
"recursiveOn": "包含子資料夾",
|
||||
"recursiveOff": "僅目前資料夾",
|
||||
"recursiveUnavailable": "遞迴搜尋僅能在樹狀檢視中使用",
|
||||
"collapseAllDisabled": "列表檢視下不可用",
|
||||
"dragDrop": {
|
||||
@@ -893,6 +1009,8 @@
|
||||
"earlyAccess": "早期存取",
|
||||
"earlyAccessTooltip": "需要早期存取",
|
||||
"inLibrary": "已在庫存",
|
||||
"downloaded": "已下載",
|
||||
"downloadedTooltip": "先前已下載,但目前不在你的庫中。",
|
||||
"alreadyInLibrary": "已在庫存",
|
||||
"autoOrganizedPath": "[依路徑範本自動整理]",
|
||||
"errors": {
|
||||
@@ -983,6 +1101,14 @@
|
||||
"save": "更新基礎模型",
|
||||
"cancel": "取消"
|
||||
},
|
||||
"bulkDownloadMissingLoras": {
|
||||
"title": "下載缺失的 LoRAs",
|
||||
"message": "發現 {uniqueCount} 個獨特的缺失 LoRAs(從選取食譜中的 {totalCount} 個總數)。",
|
||||
"previewTitle": "要下載的 LoRAs:",
|
||||
"moreItems": "...還有 {count} 個",
|
||||
"note": "檔案將使用預設路徑模板下載。根據 LoRAs 的數量,這可能需要一些時間。",
|
||||
"downloadButton": "下載 {count} 個 LoRA(s)"
|
||||
},
|
||||
"exampleAccess": {
|
||||
"title": "本機範例圖片",
|
||||
"message": "此模型未找到本機範例圖片。可選擇:",
|
||||
@@ -1034,7 +1160,9 @@
|
||||
"viewOnCivitai": "在 Civitai 查看",
|
||||
"viewOnCivitaiText": "在 Civitai 查看",
|
||||
"viewCreatorProfile": "查看創作者個人檔案",
|
||||
"openFileLocation": "開啟檔案位置"
|
||||
"openFileLocation": "開啟檔案位置",
|
||||
"sendToWorkflow": "傳送到 ComfyUI",
|
||||
"sendToWorkflowText": "傳送到 ComfyUI"
|
||||
},
|
||||
"openFileLocation": {
|
||||
"success": "檔案位置已成功開啟",
|
||||
@@ -1042,6 +1170,9 @@
|
||||
"copied": "路徑已複製到剪貼簿:{{path}}",
|
||||
"clipboardFallback": "路徑:{{path}}"
|
||||
},
|
||||
"sendToWorkflow": {
|
||||
"noFilePath": "無法傳送到 ComfyUI:沒有可用的檔案路徑"
|
||||
},
|
||||
"metadata": {
|
||||
"version": "版本",
|
||||
"fileName": "檔案名稱",
|
||||
@@ -1078,6 +1209,8 @@
|
||||
"cancel": "取消編輯",
|
||||
"save": "儲存變更",
|
||||
"addPlaceholder": "輸入或點擊下方建議",
|
||||
"editWord": "編輯觸發詞",
|
||||
"editPlaceholder": "編輯觸發詞",
|
||||
"copyWord": "複製觸發詞",
|
||||
"deleteWord": "刪除觸發詞",
|
||||
"suggestions": {
|
||||
@@ -1149,17 +1282,33 @@
|
||||
"days": "{count}天後"
|
||||
},
|
||||
"badges": {
|
||||
"current": "目前版本",
|
||||
"current": "已開啟版本",
|
||||
"currentTooltip": "這是你用來開啟此彈窗的版本",
|
||||
"inLibrary": "已在庫中",
|
||||
"inLibraryTooltip": "此版本已存在於你的本地庫中",
|
||||
"downloaded": "已下載",
|
||||
"downloadedTooltip": "此版本之前下載過,但目前不在你的本地庫中",
|
||||
"newer": "較新版本",
|
||||
"newerTooltip": "此版本比你本地的最新版本更新",
|
||||
"earlyAccess": "搶先體驗",
|
||||
"ignored": "已忽略"
|
||||
"earlyAccessTooltip": "此版本目前需要 Civitai 搶先體驗權限",
|
||||
"ignored": "已忽略",
|
||||
"ignoredTooltip": "此版本已關閉更新通知",
|
||||
"onSiteOnly": "僅站內生成",
|
||||
"onSiteOnlyTooltip": "此版本僅在 Civitai 站內可用,無法下載"
|
||||
},
|
||||
"actions": {
|
||||
"download": "下載",
|
||||
"downloadTooltip": "下載此版本",
|
||||
"downloadEarlyAccessTooltip": "從 Civitai 下載此搶先體驗版本",
|
||||
"downloadNotAllowedTooltip": "此版本僅在 Civitai 站內可用,無法下載",
|
||||
"delete": "刪除",
|
||||
"deleteTooltip": "刪除此本地版本",
|
||||
"ignore": "忽略",
|
||||
"unignore": "取消忽略",
|
||||
"ignoreTooltip": "忽略此版本的更新通知",
|
||||
"unignoreTooltip": "恢復此版本的更新通知",
|
||||
"viewVersionOnCivitai": "在 Civitai 上查看版本",
|
||||
"earlyAccessTooltip": "需要購買搶先體驗",
|
||||
"resumeModelUpdates": "恢復追蹤此模型的更新",
|
||||
"ignoreModelUpdates": "忽略此模型的更新",
|
||||
@@ -1299,7 +1448,9 @@
|
||||
"recipeReplaced": "配方已取代於工作流",
|
||||
"recipeFailedToSend": "傳送配方到工作流失敗",
|
||||
"noMatchingNodes": "目前工作流程中沒有相容的節點",
|
||||
"noTargetNodeSelected": "未選擇目標節點"
|
||||
"noTargetNodeSelected": "未選擇目標節點",
|
||||
"modelUpdated": "模型已更新到工作流",
|
||||
"modelFailed": "更新模型節點失敗"
|
||||
},
|
||||
"nodeSelector": {
|
||||
"recipe": "配方",
|
||||
@@ -1313,6 +1464,10 @@
|
||||
"opened": "範例圖片資料夾已開啟",
|
||||
"openingFolder": "正在開啟範例圖片資料夾",
|
||||
"failedToOpen": "開啟範例圖片資料夾失敗",
|
||||
"copiedPath": "路徑已複製到剪貼簿:{{path}}",
|
||||
"clipboardFallback": "路徑:{{path}}",
|
||||
"copiedUri": "連結已複製到剪貼簿:{{uri}}",
|
||||
"uriClipboardFallback": "連結:{{uri}}",
|
||||
"setupRequired": "範例圖片儲存",
|
||||
"setupDescription": "要新增自訂範例圖片,您需要先設定下載位置。",
|
||||
"setupUsage": "此路徑用於儲存下載的範例圖片和自訂圖片。",
|
||||
@@ -1450,6 +1605,7 @@
|
||||
"pleaseSelectVersion": "請選擇一個版本",
|
||||
"versionExists": "此版本已存在於您的庫中",
|
||||
"downloadCompleted": "下載成功完成",
|
||||
"downloadSkippedByBaseModel": "由於基礎模型 {baseModel} 已被排除,已跳過下載",
|
||||
"autoOrganizeSuccess": "自動整理已成功完成,共 {count} 個 {type} 已整理",
|
||||
"autoOrganizePartialSuccess": "自動整理完成:已移動 {success} 個,{failures} 個失敗,共 {total} 個模型",
|
||||
"autoOrganizeFailed": "自動整理失敗:{error}",
|
||||
@@ -1469,7 +1625,11 @@
|
||||
"nameUpdated": "配方名稱已更新",
|
||||
"tagsUpdated": "配方標籤已更新",
|
||||
"sourceUrlUpdated": "來源網址已更新",
|
||||
"promptUpdated": "提示詞更新成功",
|
||||
"negativePromptUpdated": "負面提示詞更新成功",
|
||||
"promptEditorHint": "按 Enter 儲存,Shift+Enter 換行",
|
||||
"noRecipeId": "無配方 ID",
|
||||
"sendToWorkflowFailed": "傳送配方到工作流失敗:{message}",
|
||||
"copyFailed": "複製配方語法錯誤:{message}",
|
||||
"noMissingLoras": "無缺少的 LoRA 可下載",
|
||||
"missingLorasInfoFailed": "取得缺少 LoRA 資訊失敗",
|
||||
@@ -1507,7 +1667,10 @@
|
||||
"batchImportNoUrls": "請輸入至少一個 URL 或檔案路徑",
|
||||
"batchImportNoDirectory": "請輸入目錄路徑",
|
||||
"batchImportBrowseFailed": "瀏覽目錄失敗:{message}",
|
||||
"batchImportDirectorySelected": "已選擇目錄:{path}"
|
||||
"batchImportDirectorySelected": "已選擇目錄:{path}",
|
||||
"noRecipesSelected": "未選取任何食譜",
|
||||
"noMissingLorasInSelection": "在選取的食譜中未找到缺失的 LoRAs",
|
||||
"noLoraRootConfigured": "未配置 LoRA 根目錄。請在設定中設定預設的 LoRA 根目錄。"
|
||||
},
|
||||
"models": {
|
||||
"noModelsSelected": "未選擇模型",
|
||||
@@ -1574,6 +1737,8 @@
|
||||
"mappingSaveFailed": "儲存基礎模型對應失敗:{message}",
|
||||
"downloadTemplatesUpdated": "下載路徑範本已更新",
|
||||
"downloadTemplatesFailed": "儲存下載路徑範本失敗:{message}",
|
||||
"recipesPathUpdated": "配方儲存路徑已更新",
|
||||
"recipesPathSaveFailed": "更新配方儲存路徑失敗:{message}",
|
||||
"settingsUpdated": "設定已更新:{setting}",
|
||||
"compactModeToggled": "緊湊模式已{state}",
|
||||
"settingSaveFailed": "儲存設定失敗:{message}",
|
||||
@@ -1624,8 +1789,8 @@
|
||||
},
|
||||
"triggerWords": {
|
||||
"loadFailed": "無法載入訓練詞",
|
||||
"tooLong": "觸發詞不可超過 100 個字",
|
||||
"tooMany": "最多允許 30 個觸發詞",
|
||||
"tooLong": "觸發詞不可超過 500 個字",
|
||||
"tooMany": "最多允許 100 個觸發詞",
|
||||
"alreadyExists": "此觸發詞已存在",
|
||||
"updateSuccess": "觸發詞已更新",
|
||||
"updateFailed": "更新觸發詞失敗",
|
||||
@@ -1686,6 +1851,8 @@
|
||||
"deleteFailed": "刪除 {type} 失敗:{message}",
|
||||
"excludeSuccess": "{type} 已成功排除",
|
||||
"excludeFailed": "排除 {type} 失敗:{message}",
|
||||
"restoreSuccess": "{type} 已成功還原",
|
||||
"restoreFailed": "還原 {type} 失敗:{message}",
|
||||
"fileNameUpdated": "檔案名稱已成功更新",
|
||||
"fileRenameFailed": "重新命名檔案失敗:{error}",
|
||||
"previewUpdated": "預覽圖片已成功更新",
|
||||
@@ -1717,6 +1884,37 @@
|
||||
"moveFailed": "Failed to move item: {message}"
|
||||
}
|
||||
},
|
||||
"doctor": {
|
||||
"kicker": "系統診斷",
|
||||
"title": "醫生",
|
||||
"buttonTitle": "執行診斷與常見修復",
|
||||
"loading": "正在檢查環境...",
|
||||
"footer": "如果修復後問題仍然存在,請匯出診斷套件。",
|
||||
"summary": {
|
||||
"idle": "針對設定、快取完整性與 UI 一致性執行健康檢查。",
|
||||
"ok": "目前環境中未發現任何活動中的問題。",
|
||||
"warning": "找到 {count} 個問題。大多可以直接在此面板修復。",
|
||||
"error": "應先處理 {count} 個問題,應用程式才能完全正常。"
|
||||
},
|
||||
"status": {
|
||||
"ok": "健康",
|
||||
"warning": "需要注意",
|
||||
"error": "需要處理"
|
||||
},
|
||||
"actions": {
|
||||
"runAgain": "重新執行",
|
||||
"exportBundle": "匯出套件"
|
||||
},
|
||||
"toast": {
|
||||
"loadFailed": "載入診斷失敗:{message}",
|
||||
"repairSuccess": "快取重建完成。",
|
||||
"repairFailed": "快取重建失敗:{message}",
|
||||
"exportSuccess": "診斷套件已匯出。",
|
||||
"exportFailed": "匯出診斷套件失敗:{message}",
|
||||
"conflictsResolved": "已解決 {count} 個檔案名稱衝突。",
|
||||
"conflictsResolveFailed": "解決檔案名稱衝突失敗:{message}"
|
||||
}
|
||||
},
|
||||
"banners": {
|
||||
"versionMismatch": {
|
||||
"title": "偵測到應用程式更新",
|
||||
|
||||
269
py/config.py
269
py/config.py
@@ -1,5 +1,6 @@
|
||||
import os
|
||||
import platform
|
||||
import posixpath
|
||||
import threading
|
||||
from pathlib import Path
|
||||
import folder_paths # type: ignore
|
||||
@@ -25,6 +26,67 @@ standalone_mode = (
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def _normalize_root_identity(path: str) -> str:
|
||||
"""Normalize a root path for comparisons across slash styles."""
|
||||
|
||||
normalized = posixpath.normpath(path.strip().replace("\\", "/"))
|
||||
if len(normalized) >= 2 and normalized[1] == ":":
|
||||
return normalized.lower()
|
||||
return normalized
|
||||
|
||||
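A minimal standalone sketch of the comparison key this helper is meant to produce (the helper itself lives in py/config.py; the sample paths below are made up):

import posixpath

def normalize_root_identity(path: str) -> str:
    # Mirrors _normalize_root_identity above: unify slash styles, then
    # case-fold paths that start with a Windows drive letter.
    normalized = posixpath.normpath(path.strip().replace("\\", "/"))
    if len(normalized) >= 2 and normalized[1] == ":":
        return normalized.lower()
    return normalized

# Windows-style roots collapse to one lower-case identity...
assert normalize_root_identity("D:\\Models\\Loras\\") == "d:/models/loras"
# ...while POSIX roots only get normpath cleanup.
assert normalize_root_identity("/data/loras/./") == "/data/loras"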
|
||||
def _resolve_valid_default_root(
|
||||
current: str, primary_paths: List[str], allowed_paths: List[str], name: str
|
||||
) -> str:
|
||||
"""Return a valid default root from the current primary/extra path set."""
|
||||
|
||||
valid_paths = [path for path in primary_paths if isinstance(path, str) and path.strip()]
|
||||
fallback_paths: List[str] = []
|
||||
seen: Set[str] = set()
|
||||
for path in allowed_paths:
|
||||
if not isinstance(path, str):
|
||||
continue
|
||||
stripped = path.strip()
|
||||
if not stripped:
|
||||
continue
|
||||
identity = _normalize_root_identity(stripped)
|
||||
if identity in seen:
|
||||
continue
|
||||
seen.add(identity)
|
||||
fallback_paths.append(stripped)
|
||||
|
||||
allowed = {_normalize_root_identity(path) for path in fallback_paths}
|
||||
|
||||
if current and _normalize_root_identity(current) in allowed:
|
||||
return current
|
||||
|
||||
if not valid_paths:
|
||||
if not fallback_paths:
|
||||
return ""
|
||||
if current:
|
||||
logger.info(
|
||||
"Repaired stale %s from '%s' to '%s' because it is not present in primary or extra roots",
|
||||
name,
|
||||
current,
|
||||
fallback_paths[0],
|
||||
)
|
||||
else:
|
||||
logger.info("Auto-setting %s to '%s'", name, fallback_paths[0])
|
||||
return fallback_paths[0]
|
||||
|
||||
if current:
|
||||
logger.info(
|
||||
"Repaired stale %s from '%s' to '%s' because it is not present in primary or extra roots",
|
||||
name,
|
||||
current,
|
||||
valid_paths[0],
|
||||
)
|
||||
else:
|
||||
logger.info("Auto-setting %s to '%s'", name, valid_paths[0])
|
||||
|
||||
return valid_paths[0]
|
||||
|
||||
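A hedged sketch of the repair behaviour this gives the stored default roots; the paths are hypothetical, and the calls assume _resolve_valid_default_root is importable from py.config (which in turn needs ComfyUI's folder_paths module on the path):

from py.config import _resolve_valid_default_root  # assumption: py.config is importable in this environment

primary = ["/data/loras"]
extra = ["/mnt/archive/loras"]

# A stale default that no longer appears among primary or extra roots is
# repaired to the first primary root (and the repair is logged).
assert _resolve_valid_default_root("/old/loras", primary, primary + extra, "default_lora_root") == "/data/loras"

# With no primary roots at all, the first extra root is used as a fallback.
assert _resolve_valid_default_root("", [], extra, "default_lora_root") == "/mnt/archive/loras"

# A default that still matches one of the roots is kept as-is, even when the
# stored value uses a different slash style.
assert _resolve_valid_default_root("\\data\\loras", primary, primary + extra, "default_lora_root") == "\\data\\loras"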
|
||||
def _normalize_folder_paths_for_comparison(
|
||||
folder_paths: Mapping[str, Iterable[str]],
|
||||
) -> Dict[str, Set[str]]:
|
||||
@@ -109,6 +171,7 @@ class Config:
|
||||
self.extra_checkpoints_roots: List[str] = []
|
||||
self.extra_unet_roots: List[str] = []
|
||||
self.extra_embeddings_roots: List[str] = []
|
||||
self.recipes_path: str = ""
|
||||
# Scan symbolic links during initialization
|
||||
self._initialize_symlink_mappings()
|
||||
|
||||
@@ -197,44 +260,79 @@ class Config:
|
||||
"Failed to rename legacy 'default' library: %s", rename_error
|
||||
)
|
||||
|
||||
default_lora_root = comfy_library.get("default_lora_root", "")
|
||||
if not default_lora_root and len(self.loras_roots) == 1:
|
||||
default_lora_root = self.loras_roots[0]
|
||||
default_lora_root = _resolve_valid_default_root(
|
||||
comfy_library.get("default_lora_root", ""),
|
||||
list(self.loras_roots or []),
|
||||
list(self.loras_roots or [])
|
||||
+ list(comfy_library.get("extra_folder_paths", {}).get("loras", []) or []),
|
||||
"default_lora_root",
|
||||
)
|
||||
|
||||
default_checkpoint_root = comfy_library.get("default_checkpoint_root", "")
|
||||
if (
|
||||
not default_checkpoint_root
|
||||
and self.checkpoints_roots
|
||||
and len(self.checkpoints_roots) == 1
|
||||
):
|
||||
default_checkpoint_root = self.checkpoints_roots[0]
|
||||
default_checkpoint_root = _resolve_valid_default_root(
|
||||
comfy_library.get("default_checkpoint_root", ""),
|
||||
list(self.checkpoints_roots or []),
|
||||
list(self.checkpoints_roots or [])
|
||||
+ list(comfy_library.get("extra_folder_paths", {}).get("checkpoints", []) or []),
|
||||
"default_checkpoint_root",
|
||||
)
|
||||
|
||||
default_embedding_root = comfy_library.get("default_embedding_root", "")
|
||||
if (
|
||||
not default_embedding_root
|
||||
and self.embeddings_roots
|
||||
and len(self.embeddings_roots) == 1
|
||||
):
|
||||
default_embedding_root = self.embeddings_roots[0]
|
||||
default_embedding_root = _resolve_valid_default_root(
|
||||
comfy_library.get("default_embedding_root", ""),
|
||||
list(self.embeddings_roots or []),
|
||||
list(self.embeddings_roots or [])
|
||||
+ list(comfy_library.get("extra_folder_paths", {}).get("embeddings", []) or []),
|
||||
"default_embedding_root",
|
||||
)
|
||||
|
||||
metadata = dict(comfy_library.get("metadata", {}))
|
||||
metadata.setdefault("display_name", "ComfyUI")
|
||||
metadata["source"] = "comfyui"
|
||||
extra_folder_paths = {}
|
||||
if isinstance(comfy_library, Mapping):
|
||||
existing_extra_paths = comfy_library.get("extra_folder_paths", {})
|
||||
if isinstance(existing_extra_paths, Mapping):
|
||||
extra_folder_paths = {
|
||||
key: list(value) if isinstance(value, list) else []
|
||||
for key, value in existing_extra_paths.items()
|
||||
}
|
||||
|
||||
active_library_name = settings_service.get_active_library_name()
|
||||
should_activate = (
|
||||
active_library_name == "comfyui"
|
||||
or self._should_activate_comfy_library(libraries, libraries_changed)
|
||||
)
|
||||
|
||||
settings_service.upsert_library(
|
||||
"comfyui",
|
||||
folder_paths=target_folder_paths,
|
||||
extra_folder_paths=extra_folder_paths,
|
||||
default_lora_root=default_lora_root,
|
||||
default_checkpoint_root=default_checkpoint_root,
|
||||
default_embedding_root=default_embedding_root,
|
||||
metadata=metadata,
|
||||
activate=True,
|
||||
activate=should_activate,
|
||||
)
|
||||
|
||||
logger.info("Updated 'comfyui' library with current folder paths")
|
||||
if should_activate:
|
||||
logger.info("Updated 'comfyui' library with current folder paths")
|
||||
else:
|
||||
logger.info(
|
||||
"Updated 'comfyui' library with current folder paths without activating it"
|
||||
)
|
||||
except Exception as e:
|
||||
logger.warning(f"Failed to save folder paths: {e}")
|
||||
|
||||
def _should_activate_comfy_library(
|
||||
self, libraries: Mapping[str, Any], libraries_changed: bool
|
||||
) -> bool:
|
||||
"""Return whether startup sync should make the ComfyUI library active."""
|
||||
|
||||
if libraries_changed:
|
||||
return True
|
||||
if not libraries:
|
||||
return True
|
||||
return "comfyui" in libraries and len(libraries) == 1
|
||||
|
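The activation rule can be restated as a tiny standalone predicate; the cases below are hypothetical but mirror the logic above:

def should_activate_comfy_library(libraries: dict, libraries_changed: bool) -> bool:
    # Standalone restatement of _should_activate_comfy_library.
    if libraries_changed or not libraries:
        return True
    return "comfyui" in libraries and len(libraries) == 1

assert should_activate_comfy_library({}, False)                                   # first run, nothing recorded yet
assert should_activate_comfy_library({"comfyui": {}}, False)                      # only the built-in library exists
assert not should_activate_comfy_library({"comfyui": {}, "archive": {}}, False)   # keep the user's chosen active library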
||||
def _is_link(self, path: str) -> bool:
|
||||
try:
|
||||
if os.path.islink(path):
|
||||
@@ -629,6 +727,8 @@ class Config:
|
||||
preview_roots.update(self._expand_preview_root(root))
|
||||
for root in self.extra_embeddings_roots or []:
|
||||
preview_roots.update(self._expand_preview_root(root))
|
||||
if self.recipes_path:
|
||||
preview_roots.update(self._expand_preview_root(self.recipes_path))
|
||||
|
||||
for target, link in self._path_mappings.items():
|
||||
preview_roots.update(self._expand_preview_root(target))
|
||||
@@ -705,6 +805,122 @@ class Config:
|
||||
|
||||
return unique_paths
|
||||
|
||||
@staticmethod
|
||||
def _normalize_path_for_comparison(
|
||||
path: str, *, resolve_realpath: bool = False
|
||||
) -> str:
|
||||
"""Normalize a path for equality checks across platforms."""
|
||||
candidate = os.path.realpath(path) if resolve_realpath else path
|
||||
return os.path.normcase(os.path.normpath(candidate)).replace(os.sep, "/")
|
||||
|
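A small sketch of the comparison key produced above; the example paths are illustrative, and the collapsed result shown in the comment only applies when running on Windows:

import os

def comparison_key(path: str, resolve_realpath: bool = False) -> str:
    # Mirrors _normalize_path_for_comparison: optionally resolve symlinks,
    # then normalise case and separators so equality checks are portable.
    candidate = os.path.realpath(path) if resolve_realpath else path
    return os.path.normcase(os.path.normpath(candidate)).replace(os.sep, "/")

# On Windows both spellings collapse to the same key, e.g. "d:/comfyui/models/loras",
# which is how a duplicate Extra Folder Paths entry gets detected and dropped.
print(comparison_key("D:\\ComfyUI\\models\\loras\\"))
print(comparison_key("D:/ComfyUI/models/loras"))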
||||
def _filter_overlapping_extra_lora_paths(
|
||||
self,
|
||||
primary_paths: Iterable[str],
|
||||
extra_paths: Iterable[str],
|
||||
) -> List[str]:
|
||||
"""Drop extra LoRA paths that resolve to the same physical location as primary roots."""
|
||||
|
||||
primary_map = {
|
||||
self._normalize_path_for_comparison(path, resolve_realpath=True): path
|
||||
for path in primary_paths
|
||||
if isinstance(path, str) and path.strip() and os.path.exists(path)
|
||||
}
|
||||
primary_symlink_map = self._collect_first_level_symlink_targets(primary_paths)
|
||||
filtered: List[str] = []
|
||||
|
||||
for original_path in extra_paths:
|
||||
if not isinstance(original_path, str):
|
||||
continue
|
||||
|
||||
stripped = original_path.strip()
|
||||
if not stripped:
|
||||
continue
|
||||
if not os.path.exists(stripped):
|
||||
continue
|
||||
|
||||
real_path = self._normalize_path_for_comparison(
|
||||
stripped,
|
||||
resolve_realpath=True,
|
||||
)
|
||||
normalized_path = os.path.normpath(stripped).replace(os.sep, "/")
|
||||
primary_path = primary_map.get(real_path)
|
||||
if primary_path:
|
||||
# Config loading should stay tolerant of existing invalid state and warn.
|
||||
logger.warning(
|
||||
"Detected the same LoRA folder in both ComfyUI model paths and "
|
||||
"LoRA Manager Extra Folder Paths. This can cause duplicate items or "
|
||||
"other unexpected behavior, and it usually means the path setup is "
|
||||
"not doing what you intended. LoRA Manager will keep the ComfyUI "
|
||||
"path and ignore this Extra Folder Paths entry: '%s'. Please review "
|
||||
"your path settings and remove the duplicate entry.",
|
||||
normalized_path,
|
||||
)
|
||||
continue
|
||||
|
||||
symlink_path = primary_symlink_map.get(real_path)
|
||||
if symlink_path:
|
||||
# Config loading should stay tolerant of existing invalid state and warn.
|
||||
logger.warning(
|
||||
"Detected the same LoRA folder in both ComfyUI model paths and "
|
||||
"LoRA Manager Extra Folder Paths. This can cause duplicate items or "
|
||||
"other unexpected behavior, and it usually means the path setup is "
|
||||
"not doing what you intended. LoRA Manager will keep the ComfyUI "
|
||||
"path and ignore this Extra Folder Paths entry: '%s'. Please review "
|
||||
"your path settings and remove the duplicate entry.",
|
||||
normalized_path,
|
||||
)
|
||||
continue
|
||||
|
||||
filtered.append(stripped)
|
||||
|
||||
return filtered
|
||||
|
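In practice the filter only matters when an extra root points at a folder ComfyUI already scans; a hypothetical example of the inputs and the expected outcome (folder names are made up, and both folders are assumed to exist):

# The second extra entry is just the primary root spelled with a trailing
# slash, so it resolves to the same real location on disk.
primary_roots = ["/comfyui/models/loras"]
extra_roots = ["/mnt/nas/loras", "/comfyui/models/loras/"]

# _filter_overlapping_extra_lora_paths(primary_roots, extra_roots) would keep
# only "/mnt/nas/loras" and log a warning for the duplicate entry, so the same
# LoRA files are not indexed twice.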
||||
def _collect_first_level_symlink_targets(
|
||||
self, roots: Iterable[str]
|
||||
) -> Dict[str, str]:
|
||||
"""Return real-path -> link-path mappings for first-level symlinks under the given roots."""
|
||||
|
||||
targets: Dict[str, str] = {}
|
||||
for root in roots:
|
||||
if not isinstance(root, str):
|
||||
continue
|
||||
stripped_root = root.strip()
|
||||
if not stripped_root or not os.path.isdir(stripped_root):
|
||||
continue
|
||||
|
||||
try:
|
||||
with os.scandir(stripped_root) as iterator:
|
||||
for entry in iterator:
|
||||
try:
|
||||
if not self._entry_is_symlink(entry):
|
||||
continue
|
||||
target_path = os.path.realpath(entry.path)
|
||||
if not os.path.isdir(target_path):
|
||||
continue
|
||||
|
||||
normalized_target = self._normalize_path_for_comparison(
|
||||
target_path,
|
||||
resolve_realpath=True,
|
||||
)
|
||||
normalized_link = os.path.normpath(entry.path).replace(
|
||||
os.sep, "/"
|
||||
)
|
||||
targets.setdefault(normalized_target, normalized_link)
|
||||
except Exception as inner_exc:
|
||||
logger.debug(
|
||||
"Error collecting LoRA symlink target for %s: %s",
|
||||
entry.path,
|
||||
inner_exc,
|
||||
)
|
||||
except Exception as exc:
|
||||
logger.debug(
|
||||
"Error scanning first-level LoRA symlinks in %s: %s",
|
||||
stripped_root,
|
||||
exc,
|
||||
)
|
||||
|
||||
return targets
|
||||
|
||||
def _prepare_checkpoint_paths(
|
||||
self, checkpoint_paths: Iterable[str], unet_paths: Iterable[str]
|
||||
) -> Tuple[List[str], List[str], List[str]]:
|
||||
@@ -772,9 +988,11 @@ class Config:
|
||||
self,
|
||||
folder_paths: Mapping[str, Iterable[str]],
|
||||
extra_folder_paths: Optional[Mapping[str, Iterable[str]]] = None,
|
||||
recipes_path: str = "",
|
||||
) -> None:
|
||||
self._path_mappings.clear()
|
||||
self._preview_root_paths = set()
|
||||
self.recipes_path = recipes_path if isinstance(recipes_path, str) else ""
|
||||
|
||||
lora_paths = folder_paths.get("loras", []) or []
|
||||
checkpoint_paths = folder_paths.get("checkpoints", []) or []
|
||||
@@ -796,7 +1014,11 @@ class Config:
|
||||
extra_unet_paths = extra_paths.get("unet", []) or []
|
||||
extra_embedding_paths = extra_paths.get("embeddings", []) or []
|
||||
|
||||
self.extra_loras_roots = self._prepare_lora_paths(extra_lora_paths)
|
||||
filtered_extra_lora_paths = self._filter_overlapping_extra_lora_paths(
|
||||
self.loras_roots,
|
||||
extra_lora_paths,
|
||||
)
|
||||
self.extra_loras_roots = self._prepare_lora_paths(filtered_extra_lora_paths)
|
||||
(
|
||||
_,
|
||||
self.extra_checkpoints_roots,
|
||||
@@ -1026,7 +1248,12 @@ class Config:
|
||||
if not isinstance(extra_folder_paths, Mapping):
|
||||
extra_folder_paths = None
|
||||
|
||||
self._apply_library_paths(folder_paths, extra_folder_paths)
|
||||
recipes_path = (
|
||||
str(library_config.get("recipes_path", ""))
|
||||
if isinstance(library_config, Mapping)
|
||||
else ""
|
||||
)
|
||||
self._apply_library_paths(folder_paths, extra_folder_paths, recipes_path)
|
||||
|
||||
logger.info(
|
||||
"Applied library settings with %d lora roots (%d extra), %d checkpoint roots (%d extra), and %d embedding roots (%d extra)",
|
||||
|
||||
@@ -222,6 +222,7 @@ class LoraManager:
|
||||
|
||||
# Register DownloadManager with ServiceRegistry
|
||||
await ServiceRegistry.get_download_manager()
|
||||
await ServiceRegistry.get_backup_service()
|
||||
|
||||
from .services.metadata_service import initialize_metadata_providers
|
||||
|
||||
|
||||
@@ -352,50 +352,101 @@ class MetadataProcessor:
|
||||
|
||||
# Check if we have stored conditioning objects for this sampler
|
||||
if sampler_id in metadata.get(PROMPTS, {}) and (
|
||||
"pos_conditioning" in metadata[PROMPTS][sampler_id] or
|
||||
"neg_conditioning" in metadata[PROMPTS][sampler_id]):
|
||||
|
||||
"pos_conditioning" in metadata[PROMPTS][sampler_id] or
|
||||
"neg_conditioning" in metadata[PROMPTS][sampler_id]
|
||||
):
|
||||
pos_conditioning = metadata[PROMPTS][sampler_id].get("pos_conditioning")
|
||||
neg_conditioning = metadata[PROMPTS][sampler_id].get("neg_conditioning")
|
||||
|
||||
# Helper function to recursively find prompt text for a conditioning object
|
||||
def find_prompt_text_for_conditioning(conditioning_obj, is_positive=True):
|
||||
|
||||
def extend_unique(target, values):
|
||||
for value in values:
|
||||
if value and value not in target:
|
||||
target.append(value)
|
||||
|
||||
# Helper function to recursively find prompt texts for a conditioning object.
|
||||
# Transform nodes can map one output conditioning to multiple source conditionings.
|
||||
def find_prompt_texts_for_conditioning(
|
||||
conditioning_obj, is_positive=True, visited=None
|
||||
):
|
||||
if conditioning_obj is None:
|
||||
return ""
|
||||
|
||||
return []
|
||||
|
||||
if visited is None:
|
||||
visited = set()
|
||||
|
||||
conditioning_id = id(conditioning_obj)
|
||||
if conditioning_id in visited:
|
||||
return []
|
||||
visited.add(conditioning_id)
|
||||
|
||||
prompt_texts = []
|
||||
|
||||
# Try to match conditioning objects with those stored by extractors
|
||||
for prompt_node_id, prompt_data in metadata[PROMPTS].items():
|
||||
# For nodes with single conditioning output
|
||||
if "conditioning" in prompt_data:
|
||||
if id(prompt_data["conditioning"]) == id(conditioning_obj):
|
||||
return prompt_data.get("text", "")
|
||||
|
||||
# For nodes with separate pos_conditioning and neg_conditioning outputs (like TSC_EfficientLoader)
|
||||
if is_positive and "positive_encoded" in prompt_data:
|
||||
if id(prompt_data["positive_encoded"]) == id(conditioning_obj):
|
||||
if "positive_text" in prompt_data:
|
||||
return prompt_data["positive_text"]
|
||||
else:
|
||||
orig_conditioning = prompt_data.get("orig_pos_cond", None)
|
||||
if orig_conditioning is not None:
|
||||
# Recursively find the prompt text for the original conditioning
|
||||
return find_prompt_text_for_conditioning(orig_conditioning, is_positive=True)
|
||||
|
||||
if not is_positive and "negative_encoded" in prompt_data:
|
||||
if id(prompt_data["negative_encoded"]) == id(conditioning_obj):
|
||||
if "negative_text" in prompt_data:
|
||||
return prompt_data["negative_text"]
|
||||
else:
|
||||
orig_conditioning = prompt_data.get("orig_neg_cond", None)
|
||||
if orig_conditioning is not None:
|
||||
# Recursively find the prompt text for the original conditioning
|
||||
return find_prompt_text_for_conditioning(orig_conditioning, is_positive=False)
|
||||
|
||||
return ""
|
||||
|
||||
if not isinstance(prompt_data, dict):
|
||||
continue
|
||||
|
||||
# For CLIP text nodes with a single conditioning output.
|
||||
if id(prompt_data.get("conditioning")) == conditioning_id:
|
||||
text = prompt_data.get("text", "")
|
||||
if text:
|
||||
extend_unique(prompt_texts, [text])
|
||||
|
||||
# Generic provenance for passthrough/transform/combine nodes.
|
||||
for source in prompt_data.get("conditioning_sources", []):
|
||||
if id(source.get("output")) != conditioning_id:
|
||||
continue
|
||||
for input_conditioning in source.get("inputs", []):
|
||||
extend_unique(
|
||||
prompt_texts,
|
||||
find_prompt_texts_for_conditioning(
|
||||
input_conditioning, is_positive, visited
|
||||
),
|
||||
)
|
||||
|
||||
# For nodes with separate pos_conditioning and neg_conditioning outputs
|
||||
# like TSC_EfficientLoader and existing ControlNet-style metadata.
|
||||
if (
|
||||
is_positive
|
||||
and id(prompt_data.get("positive_encoded")) == conditioning_id
|
||||
):
|
||||
if prompt_data.get("positive_text"):
|
||||
extend_unique(prompt_texts, [prompt_data["positive_text"]])
|
||||
else:
|
||||
extend_unique(
|
||||
prompt_texts,
|
||||
find_prompt_texts_for_conditioning(
|
||||
prompt_data.get("orig_pos_cond"),
|
||||
is_positive=True,
|
||||
visited=visited,
|
||||
),
|
||||
)
|
||||
|
||||
if (
|
||||
not is_positive
|
||||
and id(prompt_data.get("negative_encoded")) == conditioning_id
|
||||
):
|
||||
if prompt_data.get("negative_text"):
|
||||
extend_unique(prompt_texts, [prompt_data["negative_text"]])
|
||||
else:
|
||||
extend_unique(
|
||||
prompt_texts,
|
||||
find_prompt_texts_for_conditioning(
|
||||
prompt_data.get("orig_neg_cond"),
|
||||
is_positive=False,
|
||||
visited=visited,
|
||||
),
|
||||
)
|
||||
|
||||
return prompt_texts
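# Illustrative shape of the provenance records consumed above (the objects are real
# conditioning tensors at runtime; node ids and names here are made up):
#   metadata[PROMPTS]["12"]["conditioning_sources"] == [
#       {"output": <combined cond>, "inputs": [<cond from node "3">, <cond from node "5">]},
#   ]
# Each recursion step follows "inputs" back until a node that stored plain "text",
# "positive_text", or "negative_text" is reached.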
|
||||
|
||||
# Find prompt texts using the helper function
|
||||
result["prompt"] = find_prompt_text_for_conditioning(pos_conditioning, is_positive=True)
|
||||
result["negative_prompt"] = find_prompt_text_for_conditioning(neg_conditioning, is_positive=False)
|
||||
result["prompt"] = ", ".join(
|
||||
find_prompt_texts_for_conditioning(pos_conditioning, is_positive=True)
|
||||
)
|
||||
result["negative_prompt"] = ", ".join(
|
||||
find_prompt_texts_for_conditioning(neg_conditioning, is_positive=False)
|
||||
)
|
||||
|
||||
return result
|
||||
|
||||
@@ -509,8 +560,14 @@ class MetadataProcessor:
|
||||
|
||||
params["loras"] = " ".join(lora_parts)
|
||||
|
||||
# Set default clip_skip value
|
||||
params["clip_skip"] = "1" # Common default
|
||||
# Extract clip_skip from any SAMPLING node that provides it
|
||||
for sampler_info in metadata.get(SAMPLING, {}).values():
|
||||
clip_skip = sampler_info.get("parameters", {}).get("clip_skip")
|
||||
if clip_skip is not None:
|
||||
params["clip_skip"] = clip_skip
|
||||
break
|
||||
if params["clip_skip"] is None:
|
||||
params["clip_skip"] = "1"
|
||||
|
||||
return params
|
||||
|
||||
@@ -595,6 +652,15 @@ class MetadataProcessor:
|
||||
if negative_node_id and negative_node_id in metadata.get(PROMPTS, {}):
|
||||
params["negative_prompt"] = metadata[PROMPTS][negative_node_id].get("text", "")
|
||||
else:
|
||||
positive_node_id = MetadataProcessor.trace_node_input(prompt, guider_node_id, "conditioning", max_depth=10)
|
||||
# Generic guider nodes often expose separate positive/negative inputs.
|
||||
positive_node_id = MetadataProcessor.trace_node_input(prompt, guider_node_id, "positive", max_depth=10)
|
||||
if not positive_node_id:
|
||||
positive_node_id = MetadataProcessor.trace_node_input(prompt, guider_node_id, "conditioning", max_depth=10)
|
||||
if positive_node_id and positive_node_id in metadata.get(PROMPTS, {}):
|
||||
params["prompt"] = metadata[PROMPTS][positive_node_id].get("text", "")
|
||||
|
||||
negative_node_id = MetadataProcessor.trace_node_input(prompt, guider_node_id, "negative", max_depth=10)
|
||||
if not negative_node_id:
|
||||
negative_node_id = MetadataProcessor.trace_node_input(prompt, guider_node_id, "conditioning", max_depth=10)
|
||||
if negative_node_id and negative_node_id in metadata.get(PROMPTS, {}):
|
||||
params["negative_prompt"] = metadata[PROMPTS][negative_node_id].get("text", "")
|
||||
|
||||
@@ -1,4 +1,6 @@
|
||||
import json
|
||||
import os
|
||||
import re
|
||||
|
||||
from .constants import MODELS, PROMPTS, SAMPLING, LORAS, SIZE, IMAGES, IS_SAMPLER
|
||||
|
||||
@@ -142,6 +144,118 @@ class TSCCheckpointLoaderExtractor(NodeMetadataExtractor):
|
||||
metadata[PROMPTS][node_id]["positive_encoded"] = positive_conditioning
|
||||
metadata[PROMPTS][node_id]["negative_encoded"] = negative_conditioning
|
||||
|
||||
|
||||
class EasyComfyLoaderExtractor(NodeMetadataExtractor):
|
||||
@staticmethod
|
||||
def extract(node_id, inputs, outputs, metadata):
|
||||
if not inputs:
|
||||
return
|
||||
|
||||
if "ckpt_name" in inputs:
|
||||
_store_checkpoint_metadata(metadata, node_id, inputs["ckpt_name"])
|
||||
|
||||
# Only extract from optional_lora_stack — skip the single lora_name to
|
||||
# avoid double-counting LoRAs that come through the LORA_STACK path.
|
||||
active_loras = []
|
||||
optional_lora_stack = inputs.get("optional_lora_stack")
|
||||
if optional_lora_stack is not None and isinstance(optional_lora_stack, (list, tuple)):
|
||||
for item in optional_lora_stack:
|
||||
if isinstance(item, (list, tuple)) and len(item) >= 2:
|
||||
lora_path = item[0]
|
||||
model_strength = item[1]
|
||||
lora_name = os.path.splitext(os.path.basename(lora_path))[0]
|
||||
active_loras.append({
|
||||
"name": lora_name,
|
||||
"strength": model_strength
|
||||
})
|
||||
|
||||
if active_loras:
|
||||
metadata[LORAS][node_id] = {
|
||||
"lora_list": active_loras,
|
||||
"node_id": node_id
|
||||
}
|
||||
|
||||
positive_text = inputs.get("positive", "")
|
||||
negative_text = inputs.get("negative", "")
|
||||
|
||||
if positive_text or negative_text:
|
||||
if node_id not in metadata[PROMPTS]:
|
||||
metadata[PROMPTS][node_id] = {"node_id": node_id}
|
||||
metadata[PROMPTS][node_id]["positive_text"] = positive_text
|
||||
metadata[PROMPTS][node_id]["negative_text"] = negative_text
|
||||
|
||||
if "clip_skip" in inputs:
|
||||
clip_skip = inputs["clip_skip"]
|
||||
if node_id not in metadata[SAMPLING]:
|
||||
metadata[SAMPLING][node_id] = {"parameters": {}, "node_id": node_id}
|
||||
metadata[SAMPLING][node_id]["parameters"]["clip_skip"] = clip_skip
|
||||
|
||||
width = inputs.get("empty_latent_width")
|
||||
height = inputs.get("empty_latent_height")
|
||||
if width is not None and height is not None:
|
||||
if SIZE not in metadata:
|
||||
metadata[SIZE] = {}
|
||||
metadata[SIZE][node_id] = {
|
||||
"width": int(width),
|
||||
"height": int(height),
|
||||
"node_id": node_id
|
||||
}
|
||||
|
||||
@staticmethod
|
||||
def update(node_id, outputs, metadata):
|
||||
# outputs: [(pipe_dict, model, vae), ...]
|
||||
if not outputs or not isinstance(outputs, list) or len(outputs) == 0:
|
||||
return
|
||||
first_output = outputs[0]
|
||||
if not isinstance(first_output, tuple) or len(first_output) < 1:
|
||||
return
|
||||
pipe = first_output[0]
|
||||
if not isinstance(pipe, dict):
|
||||
return
|
||||
|
||||
positive_conditioning = pipe.get("positive")
|
||||
negative_conditioning = pipe.get("negative")
|
||||
|
||||
if positive_conditioning is not None or negative_conditioning is not None:
|
||||
if node_id not in metadata[PROMPTS]:
|
||||
metadata[PROMPTS][node_id] = {"node_id": node_id}
|
||||
if positive_conditioning is not None:
|
||||
metadata[PROMPTS][node_id]["positive_encoded"] = positive_conditioning
|
||||
if negative_conditioning is not None:
|
||||
metadata[PROMPTS][node_id]["negative_encoded"] = negative_conditioning
|
||||
|
||||
|
||||
class EasyPreSamplingExtractor(NodeMetadataExtractor):
|
||||
@staticmethod
|
||||
def extract(node_id, inputs, outputs, metadata):
|
||||
if not inputs:
|
||||
return
|
||||
|
||||
sampling_params = {}
|
||||
for key in ("steps", "cfg", "sampler_name", "scheduler", "denoise", "seed"):
|
||||
if key in inputs:
|
||||
sampling_params[key] = inputs[key]
|
||||
|
||||
metadata[SAMPLING][node_id] = {
|
||||
"parameters": sampling_params,
|
||||
"node_id": node_id,
|
||||
IS_SAMPLER: True
|
||||
}
|
||||
|
||||
|
||||
class EasySeedExtractor(NodeMetadataExtractor):
|
||||
@staticmethod
|
||||
def extract(node_id, inputs, outputs, metadata):
|
||||
if not inputs or "seed" not in inputs:
|
||||
return
|
||||
|
||||
metadata[SAMPLING][node_id] = {
|
||||
"parameters": {"seed": inputs["seed"]},
|
||||
"node_id": node_id,
|
||||
IS_SAMPLER: False
|
||||
}
|
||||
|
||||
|
||||
class CLIPTextEncodeExtractor(NodeMetadataExtractor):
|
||||
@staticmethod
|
||||
def extract(node_id, inputs, outputs, metadata):
|
||||
@@ -161,6 +275,251 @@ class CLIPTextEncodeExtractor(NodeMetadataExtractor):
|
||||
conditioning = outputs[0][0]
|
||||
metadata[PROMPTS][node_id]["conditioning"] = conditioning
|
||||
|
||||
|
||||
class MyOriginalWaifuTextExtractor(NodeMetadataExtractor):
|
||||
"""Extractor for ComfyUI-MyOriginalWaifu TextProvider nodes."""
|
||||
|
||||
@staticmethod
|
||||
def extract(node_id, inputs, outputs, metadata):
|
||||
if not inputs:
|
||||
return
|
||||
|
||||
positive_text = inputs.get("positive", "")
|
||||
negative_text = inputs.get("negative", "")
|
||||
|
||||
if positive_text or negative_text:
|
||||
metadata[PROMPTS][node_id] = {
|
||||
"positive_text": positive_text,
|
||||
"negative_text": negative_text,
|
||||
"node_id": node_id,
|
||||
}
|
||||
|
||||
@staticmethod
|
||||
def update(node_id, outputs, metadata):
|
||||
output_tuple = _first_output_tuple(outputs)
|
||||
if not output_tuple or len(output_tuple) < 2:
|
||||
return
|
||||
|
||||
prompt_metadata = _ensure_prompt_metadata(metadata, node_id)
|
||||
prompt_metadata["positive_text"] = output_tuple[0]
|
||||
prompt_metadata["negative_text"] = output_tuple[1]
|
||||
|
||||
|
||||
class MyOriginalWaifuClipExtractor(NodeMetadataExtractor):
|
||||
"""Extractor for ComfyUI-MyOriginalWaifu ClipProvider nodes."""
|
||||
|
||||
@staticmethod
|
||||
def extract(node_id, inputs, outputs, metadata):
|
||||
if not inputs:
|
||||
return
|
||||
|
||||
positive_text = inputs.get("positive", "")
|
||||
negative_text = inputs.get("negative", "")
|
||||
|
||||
if positive_text or negative_text:
|
||||
metadata[PROMPTS][node_id] = {
|
||||
"positive_text": positive_text,
|
||||
"negative_text": negative_text,
|
||||
"node_id": node_id,
|
||||
}
|
||||
|
||||
@staticmethod
|
||||
def update(node_id, outputs, metadata):
|
||||
output_tuple = _first_output_tuple(outputs)
|
||||
if not output_tuple or len(output_tuple) < 2:
|
||||
return
|
||||
|
||||
prompt_metadata = _ensure_prompt_metadata(metadata, node_id)
|
||||
prompt_metadata["positive_encoded"] = output_tuple[0]
|
||||
prompt_metadata["negative_encoded"] = output_tuple[1]
|
||||
|
||||
|
||||
def _ensure_prompt_metadata(metadata, node_id):
|
||||
if node_id not in metadata[PROMPTS]:
|
||||
metadata[PROMPTS][node_id] = {"node_id": node_id}
|
||||
return metadata[PROMPTS][node_id]
|
||||
|
||||
|
||||
def _first_output_tuple(outputs):
|
||||
if not outputs or not isinstance(outputs, list) or len(outputs) == 0:
|
||||
return None
|
||||
first_output = outputs[0]
|
||||
if isinstance(first_output, tuple):
|
||||
return first_output
|
||||
return None
|
||||
|
||||
|
||||
def _record_conditioning_source(
|
||||
metadata, node_id, output_conditioning, input_conditionings
|
||||
):
|
||||
if output_conditioning is None:
|
||||
return
|
||||
|
||||
sources = [
|
||||
conditioning for conditioning in input_conditionings if conditioning is not None
|
||||
]
|
||||
if not sources:
|
||||
return
|
||||
|
||||
prompt_metadata = _ensure_prompt_metadata(metadata, node_id)
|
||||
prompt_metadata.setdefault("conditioning_sources", []).append(
|
||||
{
|
||||
"output": output_conditioning,
|
||||
"inputs": sources,
|
||||
}
|
||||
)
|
||||
|
||||
|
||||
def _get_variable_name(inputs):
|
||||
for key in ("key", "name", "variable_name", "tag", "text"):
|
||||
value = inputs.get(key)
|
||||
if isinstance(value, str) and value:
|
||||
return value
|
||||
return None
|
||||
|
||||
|
||||
def _get_node_variable_name(metadata, node_id, inputs):
|
||||
variable_name = _get_variable_name(inputs)
|
||||
if variable_name:
|
||||
return variable_name
|
||||
|
||||
prompt = metadata.get("current_prompt")
|
||||
original_prompt = getattr(prompt, "original_prompt", None)
|
||||
if not original_prompt or node_id not in original_prompt:
|
||||
return None
|
||||
|
||||
node_data = original_prompt[node_id]
|
||||
variable_name = _get_variable_name(node_data.get("inputs", {}))
|
||||
if variable_name:
|
||||
return variable_name
|
||||
|
||||
widgets_values = node_data.get("widgets_values", [])
|
||||
if widgets_values and isinstance(widgets_values[0], str):
|
||||
return widgets_values[0]
|
||||
|
||||
return None
|
||||
|
||||
|
||||
class ControlNetApplyAdvancedExtractor(NodeMetadataExtractor):
|
||||
@staticmethod
|
||||
def extract(node_id, inputs, outputs, metadata):
|
||||
if not inputs:
|
||||
return
|
||||
|
||||
prompt_metadata = _ensure_prompt_metadata(metadata, node_id)
|
||||
if inputs.get("positive") is not None:
|
||||
prompt_metadata["orig_pos_cond"] = inputs["positive"]
|
||||
if inputs.get("negative") is not None:
|
||||
prompt_metadata["orig_neg_cond"] = inputs["negative"]
|
||||
|
||||
@staticmethod
|
||||
def update(node_id, outputs, metadata):
|
||||
output_tuple = _first_output_tuple(outputs)
|
||||
if not output_tuple:
|
||||
return
|
||||
|
||||
prompt_metadata = _ensure_prompt_metadata(metadata, node_id)
|
||||
positive_input = prompt_metadata.get("orig_pos_cond")
|
||||
negative_input = prompt_metadata.get("orig_neg_cond")
|
||||
|
||||
if len(output_tuple) >= 1:
|
||||
prompt_metadata["positive_encoded"] = output_tuple[0]
|
||||
_record_conditioning_source(
|
||||
metadata, node_id, output_tuple[0], [positive_input]
|
||||
)
|
||||
if len(output_tuple) >= 2:
|
||||
prompt_metadata["negative_encoded"] = output_tuple[1]
|
||||
_record_conditioning_source(
|
||||
metadata, node_id, output_tuple[1], [negative_input]
|
||||
)
|
||||
|
||||
|
||||
class ConditioningCombineExtractor(NodeMetadataExtractor):
|
||||
@staticmethod
|
||||
def extract(node_id, inputs, outputs, metadata):
|
||||
if not inputs:
|
||||
return
|
||||
|
||||
input_conditionings = []
|
||||
for input_name in inputs:
|
||||
if (
|
||||
input_name.startswith("conditioning")
|
||||
and inputs[input_name] is not None
|
||||
):
|
||||
input_conditionings.append(inputs[input_name])
|
||||
|
||||
if input_conditionings:
|
||||
prompt_metadata = _ensure_prompt_metadata(metadata, node_id)
|
||||
prompt_metadata["orig_conditionings"] = input_conditionings
|
||||
|
||||
@staticmethod
|
||||
def update(node_id, outputs, metadata):
|
||||
output_tuple = _first_output_tuple(outputs)
|
||||
if not output_tuple or len(output_tuple) < 1:
|
||||
return
|
||||
|
||||
prompt_metadata = _ensure_prompt_metadata(metadata, node_id)
|
||||
output_conditioning = output_tuple[0]
|
||||
prompt_metadata["conditioning"] = output_conditioning
|
||||
_record_conditioning_source(
|
||||
metadata,
|
||||
node_id,
|
||||
output_conditioning,
|
||||
prompt_metadata.get("orig_conditionings", []),
|
||||
)
|
||||
|
||||
|
||||
class SetNodeExtractor(NodeMetadataExtractor):
|
||||
@staticmethod
|
||||
def extract(node_id, inputs, outputs, metadata):
|
||||
if not inputs:
|
||||
return
|
||||
|
||||
variable_name = _get_node_variable_name(metadata, node_id, inputs)
|
||||
conditioning = inputs.get("CONDITIONING")
|
||||
if conditioning is None:
|
||||
conditioning = inputs.get("conditioning")
|
||||
if conditioning is None:
|
||||
return
|
||||
|
||||
prompt_metadata = _ensure_prompt_metadata(metadata, node_id)
|
||||
prompt_metadata["conditioning"] = conditioning
|
||||
if variable_name:
|
||||
prompt_metadata["variable_name"] = variable_name
|
||||
metadata[PROMPTS].setdefault("__conditioning_variables__", {})[
|
||||
variable_name
|
||||
] = conditioning
|
||||
|
||||
|
||||
class GetNodeExtractor(NodeMetadataExtractor):
|
||||
@staticmethod
|
||||
def extract(node_id, inputs, outputs, metadata):
|
||||
variable_name = _get_node_variable_name(metadata, node_id, inputs or {})
|
||||
if variable_name:
|
||||
prompt_metadata = _ensure_prompt_metadata(metadata, node_id)
|
||||
prompt_metadata["variable_name"] = variable_name
|
||||
|
||||
@staticmethod
|
||||
def update(node_id, outputs, metadata):
|
||||
output_tuple = _first_output_tuple(outputs)
|
||||
if not output_tuple or len(output_tuple) < 1:
|
||||
return
|
||||
|
||||
prompt_metadata = _ensure_prompt_metadata(metadata, node_id)
|
||||
output_conditioning = output_tuple[0]
|
||||
prompt_metadata["conditioning"] = output_conditioning
|
||||
|
||||
variable_name = prompt_metadata.get("variable_name")
|
||||
if not variable_name:
|
||||
return
|
||||
|
||||
input_conditioning = metadata[PROMPTS].get("__conditioning_variables__", {}).get(
|
||||
variable_name
|
||||
)
|
||||
_record_conditioning_source(
|
||||
metadata, node_id, output_conditioning, [input_conditioning]
|
||||
)
|
||||
|
||||
# Base Sampler Extractor to reduce code redundancy
|
||||
class BaseSamplerExtractor(NodeMetadataExtractor):
|
||||
"""Base extractor for sampler nodes with common functionality"""
|
||||
@@ -427,6 +786,75 @@ class ImageSizeExtractor(NodeMetadataExtractor):
|
||||
"node_id": node_id
|
||||
}
|
||||
|
||||
class RgthreePowerLoraLoaderExtractor(NodeMetadataExtractor):
|
||||
"""Extract LoRA metadata from rgthree Power Lora Loader.
|
||||
|
||||
The node passes LoRAs as dynamic kwargs: LORA_1, LORA_2, ... each containing
|
||||
{'on': bool, 'lora': filename, 'strength': float, 'strengthTwo': float}.
|
||||
"""
|
||||
@staticmethod
|
||||
def extract(node_id, inputs, outputs, metadata):
|
||||
if not inputs:
|
||||
return
|
||||
|
||||
active_loras = []
|
||||
for key, value in inputs.items():
|
||||
if not key.upper().startswith('LORA_'):
|
||||
continue
|
||||
if not isinstance(value, dict):
|
||||
continue
|
||||
if not value.get('on') or not value.get('lora'):
|
||||
continue
|
||||
lora_name = os.path.splitext(os.path.basename(value['lora']))[0]
|
||||
active_loras.append({
|
||||
"name": lora_name,
|
||||
"strength": round(float(value.get('strength', 1.0)), 2)
|
||||
})
|
||||
|
||||
if active_loras:
|
||||
metadata[LORAS][node_id] = {
|
||||
"lora_list": active_loras,
|
||||
"node_id": node_id
|
||||
}
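# Example shape of the dynamic kwargs this extractor reads (values are illustrative):
#   inputs = {
#       "LORA_1": {"on": True, "lora": "styles/inkwash.safetensors", "strength": 0.8, "strengthTwo": 0.8},
#       "LORA_2": {"on": False, "lora": "detail_tweak.safetensors", "strength": 1.0, "strengthTwo": 1.0},
#   }
# Only LORA_1 is active, so the recorded lora_list is [{"name": "inkwash", "strength": 0.8}].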
|
||||
|
||||
|
||||
class TensorRTLoaderExtractor(NodeMetadataExtractor):
|
||||
"""Extract checkpoint metadata from TensorRT Loader.
|
||||
|
||||
extract() parses the engine filename from 'unet_name' as a best-effort
|
||||
fallback (strips profile suffix after '_$' and counter suffix).
|
||||
|
||||
update() checks if the output MODEL has attachments["source_model"]
|
||||
set by the node (NubeBuster fork) and overrides with the real name.
|
||||
Vanilla TRT doesn't set this — the filename parse stands.
|
||||
"""
|
||||
@staticmethod
|
||||
def extract(node_id, inputs, outputs, metadata):
|
||||
if not inputs or "unet_name" not in inputs:
|
||||
return
|
||||
unet_name = inputs.get("unet_name")
|
||||
# Strip path and extension, then drop the "_$<profile>" suffix
|
||||
model_name = os.path.splitext(os.path.basename(unet_name))[0]
|
||||
if "_$" in model_name:
|
||||
model_name = model_name[:model_name.index("_$")]
|
||||
# Strip counter suffix (e.g. _00001_) left by ComfyUI's save path
|
||||
model_name = re.sub(r'_\d+_?$', '', model_name)
|
||||
_store_checkpoint_metadata(metadata, node_id, model_name)
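# Worked example of the parsing above (filenames are illustrative, not real engines):
#   "flux1-dev_$dyn-b-1-h-1024-w-1024.engine" -> basename "flux1-dev_$dyn-..." -> cut at "_$" -> "flux1-dev"
#   "mymodel_00001_.engine"                   -> basename "mymodel_00001_"    -> strip counter -> "mymodel"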
|
||||
|
||||
@staticmethod
|
||||
def update(node_id, outputs, metadata):
|
||||
if not outputs or not isinstance(outputs, list) or len(outputs) == 0:
|
||||
return
|
||||
first_output = outputs[0]
|
||||
if not isinstance(first_output, tuple) or len(first_output) < 1:
|
||||
return
|
||||
model = first_output[0]
|
||||
# NubeBuster fork sets attachments["source_model"] on the ModelPatcher
|
||||
source_model = getattr(model, 'attachments', {}).get("source_model")
|
||||
if source_model:
|
||||
_store_checkpoint_metadata(metadata, node_id, source_model)
|
||||
|
||||
|
||||
class LoraLoaderManagerExtractor(NodeMetadataExtractor):
|
||||
@staticmethod
|
||||
def extract(node_id, inputs, outputs, metadata):
|
||||
@@ -577,8 +1005,6 @@ class SamplerCustomAdvancedExtractor(BaseSamplerExtractor):
|
||||
# Extract latent dimensions
|
||||
BaseSamplerExtractor.extract_latent_dimensions(node_id, inputs, metadata)
|
||||
|
||||
import json
|
||||
|
||||
class CLIPTextEncodeFluxExtractor(NodeMetadataExtractor):
|
||||
@staticmethod
|
||||
def extract(node_id, inputs, outputs, metadata):
|
||||
@@ -699,9 +1125,12 @@ NODE_EXTRACTORS = {
|
||||
"KSamplerSelect": KSamplerSelectExtractor, # Add KSamplerSelect
|
||||
"BasicScheduler": BasicSchedulerExtractor, # Add BasicScheduler
|
||||
"AlignYourStepsScheduler": BasicSchedulerExtractor, # Add AlignYourStepsScheduler
|
||||
# ComfyUI-Easy-Use pre-sampling / seed
|
||||
"samplerSettings": EasyPreSamplingExtractor, # easy preSampling
|
||||
"easySeed": EasySeedExtractor, # easy seed
|
||||
# Loaders
|
||||
"CheckpointLoaderSimple": CheckpointLoaderExtractor,
|
||||
"comfyLoader": CheckpointLoaderExtractor, # easy comfyLoader
|
||||
"comfyLoader": EasyComfyLoaderExtractor, # ComfyUI-Easy-Use easy comfyLoader
|
||||
"CheckpointLoaderSimpleWithImages": CheckpointLoaderExtractor, # CheckpointLoader|pysssss
|
||||
"TSC_EfficientLoader": TSCCheckpointLoaderExtractor, # Efficient Nodes
|
||||
"NunchakuFluxDiTLoader": NunchakuFluxDiTLoaderExtractor, # ComfyUI-Nunchaku
|
||||
@@ -711,12 +1140,17 @@ NODE_EXTRACTORS = {
|
||||
"GGUFLoaderKJ": KJNodesModelLoaderExtractor, # KJNodes
|
||||
"DiffusionModelLoaderKJ": KJNodesModelLoaderExtractor, # KJNodes
|
||||
"CheckpointLoaderKJ": CheckpointLoaderExtractor, # KJNodes
|
||||
"CheckpointLoaderLM": CheckpointLoaderExtractor, # LoRA Manager
|
||||
"UNETLoader": UNETLoaderExtractor, # Updated to use dedicated extractor
|
||||
"UnetLoaderGGUF": UNETLoaderExtractor, # Updated to use dedicated extractor
|
||||
"UNETLoaderLM": UNETLoaderExtractor, # LoRA Manager
|
||||
"LoraLoader": LoraLoaderExtractor,
|
||||
"LoraLoaderLM": LoraLoaderManagerExtractor,
|
||||
"RgthreePowerLoraLoader": RgthreePowerLoraLoaderExtractor,
|
||||
"TensorRTLoader": TensorRTLoaderExtractor,
|
||||
# Conditioning
|
||||
"CLIPTextEncode": CLIPTextEncodeExtractor,
|
||||
"CLIPTextEncodeAttentionBias": CLIPTextEncodeExtractor, # From https://github.com/silveroxides/ComfyUI_PromptAttention
|
||||
"PromptLM": CLIPTextEncodeExtractor,
|
||||
"CLIPTextEncodeFlux": CLIPTextEncodeFluxExtractor, # Add CLIPTextEncodeFlux
|
||||
"WAS_Text_to_Conditioning": CLIPTextEncodeExtractor,
|
||||
@@ -724,6 +1158,12 @@ NODE_EXTRACTORS = {
|
||||
"smZ_CLIPTextEncode": CLIPTextEncodeExtractor, # From https://github.com/shiimizu/ComfyUI_smZNodes
|
||||
"CR_ApplyControlNetStack": CR_ApplyControlNetStackExtractor, # Add CR_ApplyControlNetStack
|
||||
"PCTextEncode": CLIPTextEncodeExtractor, # From https://github.com/asagi4/comfyui-prompt-control
|
||||
"TextProvider": MyOriginalWaifuTextExtractor, # ComfyUI-MyOriginalWaifu
|
||||
"ClipProvider": MyOriginalWaifuClipExtractor, # ComfyUI-MyOriginalWaifu
|
||||
"ControlNetApplyAdvanced": ControlNetApplyAdvancedExtractor,
|
||||
"ConditioningCombine": ConditioningCombineExtractor,
|
||||
"SetNode": SetNodeExtractor,
|
||||
"GetNode": GetNodeExtractor,
|
||||
# Latent
|
||||
"EmptyLatentImage": ImageSizeExtractor,
|
||||
# Flux
|
||||
|
||||
@@ -4,15 +4,21 @@ from typing import Awaitable, Callable, Dict, List
|
||||
|
||||
from aiohttp import web
|
||||
|
||||
# Use wildcard for CivitAI to support their CDN subdomains (e.g., image-b2.civitai.com)
|
||||
# Security note: This is acceptable because:
|
||||
# 1. CSP img-src only controls image/video loading, not script execution
|
||||
# 2. All *.civitai.com subdomains are controlled by Civitai
|
||||
# 3. Explicit domain list would require constant updates as Civitai adds CDN nodes
|
||||
REMOTE_MEDIA_SOURCES = (
|
||||
"https://image.civitai.com",
|
||||
"https://*.civitai.com",
|
||||
"https://img.genur.art",
|
||||
)
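# Illustrative result only (the exact baseline directive depends on the upstream response
# headers): an existing "img-src 'self' data:" directive would become
#   img-src 'self' data: https://image.civitai.com https://*.civitai.com https://img.genur.art
# once the sources above are merged in by merge_sources() below.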
|
||||
|
||||
|
||||
@web.middleware
|
||||
async def relax_csp_for_remote_media(
|
||||
request: web.Request, handler: Callable[[web.Request], Awaitable[web.StreamResponse]]
|
||||
request: web.Request,
|
||||
handler: Callable[[web.Request], Awaitable[web.StreamResponse]],
|
||||
) -> web.StreamResponse:
|
||||
"""Allow LoRA Manager media previews to load from trusted remote domains.
|
||||
|
||||
@@ -43,7 +49,9 @@ async def relax_csp_for_remote_media(
|
||||
directive_order.append(name)
|
||||
directives[name] = values
|
||||
|
||||
def merge_sources(name: str, sources: List[str], defaults: List[str] | None = None) -> None:
|
||||
def merge_sources(
|
||||
name: str, sources: List[str], defaults: List[str] | None = None
|
||||
) -> None:
|
||||
existing = directives.get(name, list(defaults or []))
|
||||
|
||||
for source in sources:
|
||||
|
||||
@@ -8,6 +8,7 @@ and tracks the cycle progress which persists across workflow save/load.
|
||||
|
||||
import logging
|
||||
import os
|
||||
|
||||
from ..utils.utils import get_lora_info
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
@@ -54,6 +55,9 @@ class LoraCyclerLM:
|
||||
current_index = cycler_config.get("current_index", 1) # 1-based
|
||||
model_strength = float(cycler_config.get("model_strength", 1.0))
|
||||
clip_strength = float(cycler_config.get("clip_strength", 1.0))
|
||||
use_same_clip_strength = cycler_config.get("use_same_clip_strength", True)
|
||||
use_preset_strength = cycler_config.get("use_preset_strength", False)
|
||||
preset_strength_scale = float(cycler_config.get("preset_strength_scale", 1.0))
|
||||
sort_by = "filename"
|
||||
|
||||
# Include "no lora" option
|
||||
@@ -131,6 +135,39 @@ class LoraCyclerLM:
|
||||
else:
|
||||
# Normalize path separators
|
||||
lora_path = lora_path.replace("/", os.sep)
|
||||
|
||||
if use_preset_strength:
|
||||
lora_metadata = await lora_service.get_lora_metadata_by_filename(
|
||||
current_lora["file_name"]
|
||||
)
|
||||
if lora_metadata:
|
||||
recommended_strength = (
|
||||
lora_service.get_recommended_strength_from_lora_data(
|
||||
lora_metadata
|
||||
)
|
||||
)
|
||||
if recommended_strength is not None:
|
||||
model_strength = round(
|
||||
recommended_strength * preset_strength_scale, 2
|
||||
)
|
||||
|
||||
if use_same_clip_strength:
|
||||
clip_strength = model_strength
|
||||
else:
|
||||
recommended_clip_strength = (
|
||||
lora_service.get_recommended_clip_strength_from_lora_data(
|
||||
lora_metadata
|
||||
)
|
||||
)
|
||||
if recommended_clip_strength is not None:
|
||||
clip_strength = round(
|
||||
recommended_clip_strength * preset_strength_scale, 2
|
||||
)
|
||||
elif use_same_clip_strength:
|
||||
clip_strength = model_strength
|
||||
elif use_same_clip_strength:
|
||||
clip_strength = model_strength
|
||||
|
||||
lora_stack = [(lora_path, model_strength, clip_strength)]
|
||||
|
||||
# Calculate next index (wrap to 1 if at end)
|
||||
|
||||
@@ -1,22 +1,138 @@
|
||||
import importlib
|
||||
import logging
|
||||
import re
|
||||
import comfy.utils # type: ignore
|
||||
import comfy.sd # type: ignore
|
||||
|
||||
import comfy.sd # type: ignore
|
||||
import comfy.utils # type: ignore
|
||||
|
||||
from ..utils.utils import get_lora_info_absolute
|
||||
from .utils import FlexibleOptionalInputType, any_type, extract_lora_name, get_loras_list, nunchaku_load_lora
|
||||
from .utils import (
|
||||
FlexibleOptionalInputType,
|
||||
any_type,
|
||||
detect_nunchaku_model_kind,
|
||||
extract_lora_name,
|
||||
get_loras_list,
|
||||
nunchaku_load_lora,
|
||||
)
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def _get_nunchaku_load_qwen_loras():
|
||||
try:
|
||||
module = importlib.import_module(".nunchaku_qwen", __package__)
|
||||
except ImportError as exc:
|
||||
raise RuntimeError(
|
||||
"Qwen-Image LoRA loading requires the ComfyUI runtime with its torch dependency available."
|
||||
) from exc
|
||||
return module.nunchaku_load_qwen_loras
|
||||
|
||||
|
||||
def _collect_stack_entries(lora_stack):
|
||||
entries = []
|
||||
if not lora_stack:
|
||||
return entries
|
||||
|
||||
for lora_path, model_strength, clip_strength in lora_stack:
|
||||
lora_name = extract_lora_name(lora_path)
|
||||
absolute_lora_path, trigger_words = get_lora_info_absolute(lora_name)
|
||||
entries.append({
|
||||
"name": lora_name,
|
||||
"absolute_path": absolute_lora_path,
|
||||
"input_path": lora_path,
|
||||
"model_strength": float(model_strength),
|
||||
"clip_strength": float(clip_strength),
|
||||
"trigger_words": trigger_words,
|
||||
})
|
||||
return entries
|
||||
|
||||
|
||||
def _collect_widget_entries(kwargs):
|
||||
entries = []
|
||||
for lora in get_loras_list(kwargs):
|
||||
if not lora.get("active", False):
|
||||
continue
|
||||
lora_name = lora["name"]
|
||||
model_strength = float(lora["strength"])
|
||||
clip_strength = float(lora.get("clipStrength", model_strength))
|
||||
lora_path, trigger_words = get_lora_info_absolute(lora_name)
|
||||
entries.append({
|
||||
"name": lora_name,
|
||||
"absolute_path": lora_path,
|
||||
"input_path": lora_path,
|
||||
"model_strength": model_strength,
|
||||
"clip_strength": clip_strength,
|
||||
"trigger_words": trigger_words,
|
||||
})
|
||||
return entries
|
||||
|
||||
|
||||
def _format_loaded_loras(loaded_loras):
|
||||
formatted_loras = []
|
||||
for item in loaded_loras:
|
||||
if item["include_clip_strength"]:
|
||||
formatted_loras.append(
|
||||
f"<lora:{item['name']}:{item['model_strength']}:{item['clip_strength']}>"
|
||||
)
|
||||
else:
|
||||
formatted_loras.append(f"<lora:{item['name']}:{item['model_strength']}>")
|
||||
return " ".join(formatted_loras)
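# Example input/output (illustrative names and strengths):
#   [{"name": "inkwash", "model_strength": 0.8, "clip_strength": 0.8, "include_clip_strength": False},
#    {"name": "detail", "model_strength": 1.0, "clip_strength": 0.6, "include_clip_strength": True}]
#   -> "<lora:inkwash:0.8> <lora:detail:1.0:0.6>"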
|
||||
|
||||
|
||||
def _apply_entries(model, clip, lora_entries, nunchaku_model_kind):
|
||||
loaded_loras = []
|
||||
all_trigger_words = []
|
||||
|
||||
if nunchaku_model_kind == "qwen_image":
|
||||
nunchaku_load_qwen_loras = _get_nunchaku_load_qwen_loras()
|
||||
qwen_lora_configs = []
|
||||
for entry in lora_entries:
|
||||
qwen_lora_configs.append((entry["absolute_path"], entry["model_strength"]))
|
||||
loaded_loras.append({
|
||||
"name": entry["name"],
|
||||
"model_strength": entry["model_strength"],
|
||||
"clip_strength": entry["model_strength"],
|
||||
"include_clip_strength": False,
|
||||
})
|
||||
all_trigger_words.extend(entry["trigger_words"])
|
||||
if qwen_lora_configs:
|
||||
model = nunchaku_load_qwen_loras(model, qwen_lora_configs)
|
||||
return model, clip, loaded_loras, all_trigger_words
|
||||
|
||||
for entry in lora_entries:
|
||||
if nunchaku_model_kind == "flux":
|
||||
model = nunchaku_load_lora(model, entry["input_path"], entry["model_strength"])
|
||||
else:
|
||||
lora = comfy.utils.load_torch_file(entry["absolute_path"], safe_load=True)
|
||||
model, clip = comfy.sd.load_lora_for_models(
|
||||
model,
|
||||
clip,
|
||||
lora,
|
||||
entry["model_strength"],
|
||||
entry["clip_strength"],
|
||||
)
|
||||
|
||||
include_clip_strength = nunchaku_model_kind is None and abs(entry["model_strength"] - entry["clip_strength"]) > 0.001
|
||||
loaded_loras.append({
|
||||
"name": entry["name"],
|
||||
"model_strength": entry["model_strength"],
|
||||
"clip_strength": entry["clip_strength"],
|
||||
"include_clip_strength": include_clip_strength,
|
||||
})
|
||||
all_trigger_words.extend(entry["trigger_words"])
|
||||
|
||||
return model, clip, loaded_loras, all_trigger_words
|
||||
|
||||
|
||||
class LoraLoaderLM:
|
||||
NAME = "Lora Loader (LoraManager)"
|
||||
CATEGORY = "Lora Manager/loaders"
|
||||
|
||||
|
||||
@classmethod
|
||||
def INPUT_TYPES(cls):
|
||||
return {
|
||||
"required": {
|
||||
"model": ("MODEL",),
|
||||
# "clip": ("CLIP",),
|
||||
"text": ("AUTOCOMPLETE_TEXT_LORAS", {
|
||||
"placeholder": "Search LoRAs to add...",
|
||||
"tooltip": "Format: <lora:lora_name:strength> separated by spaces or punctuation",
|
||||
@@ -28,114 +144,30 @@ class LoraLoaderLM:
|
||||
RETURN_TYPES = ("MODEL", "CLIP", "STRING", "STRING")
|
||||
RETURN_NAMES = ("MODEL", "CLIP", "trigger_words", "loaded_loras")
|
||||
FUNCTION = "load_loras"
|
||||
|
||||
|
||||
def load_loras(self, model, text, **kwargs):
|
||||
"""Loads multiple LoRAs based on the kwargs input and lora_stack."""
|
||||
loaded_loras = []
|
||||
all_trigger_words = []
|
||||
|
||||
clip = kwargs.get('clip', None)
|
||||
lora_stack = kwargs.get('lora_stack', None)
|
||||
|
||||
# Check if model is a Nunchaku Flux model - simplified approach
|
||||
is_nunchaku_model = False
|
||||
|
||||
try:
|
||||
model_wrapper = model.model.diffusion_model
|
||||
# Check if model is a Nunchaku Flux model using only class name
|
||||
if model_wrapper.__class__.__name__ == "ComfyFluxWrapper":
|
||||
is_nunchaku_model = True
|
||||
logger.info("Detected Nunchaku Flux model")
|
||||
except (AttributeError, TypeError):
|
||||
# Not a model with the expected structure
|
||||
pass
|
||||
|
||||
# First process lora_stack if available
|
||||
if lora_stack:
|
||||
for lora_path, model_strength, clip_strength in lora_stack:
|
||||
# Extract lora name and convert to absolute path
|
||||
# lora_stack stores relative paths, but load_torch_file needs absolute paths
|
||||
lora_name = extract_lora_name(lora_path)
|
||||
absolute_lora_path, trigger_words = get_lora_info_absolute(lora_name)
|
||||
|
||||
# Apply the LoRA using the appropriate loader
|
||||
if is_nunchaku_model:
|
||||
# Use our custom function for Flux models
|
||||
model = nunchaku_load_lora(model, lora_path, model_strength)
|
||||
# clip remains unchanged for Nunchaku models
|
||||
else:
|
||||
# Use lower-level API to load LoRA directly without folder_paths validation
|
||||
lora = comfy.utils.load_torch_file(absolute_lora_path, safe_load=True)
|
||||
model, clip = comfy.sd.load_lora_for_models(model, clip, lora, model_strength, clip_strength)
|
||||
|
||||
all_trigger_words.extend(trigger_words)
|
||||
# Add clip strength to output if different from model strength (except for Nunchaku models)
|
||||
if not is_nunchaku_model and abs(model_strength - clip_strength) > 0.001:
|
||||
loaded_loras.append(f"{lora_name}: {model_strength},{clip_strength}")
|
||||
else:
|
||||
loaded_loras.append(f"{lora_name}: {model_strength}")
|
||||
|
||||
# Then process loras from kwargs with support for both old and new formats
|
||||
loras_list = get_loras_list(kwargs)
|
||||
for lora in loras_list:
|
||||
if not lora.get('active', False):
|
||||
continue
|
||||
|
||||
lora_name = lora['name']
|
||||
model_strength = float(lora['strength'])
|
||||
# Get clip strength - use model strength as default if not specified
|
||||
clip_strength = float(lora.get('clipStrength', model_strength))
|
||||
|
||||
# Get lora path and trigger words
|
||||
lora_path, trigger_words = get_lora_info_absolute(lora_name)
|
||||
|
||||
# Apply the LoRA using the appropriate loader
|
||||
if is_nunchaku_model:
|
||||
# For Nunchaku models, use our custom function
|
||||
model = nunchaku_load_lora(model, lora_path, model_strength)
|
||||
# clip remains unchanged
|
||||
else:
|
||||
# Use lower-level API to load LoRA directly without folder_paths validation
|
||||
lora = comfy.utils.load_torch_file(lora_path, safe_load=True)
|
||||
model, clip = comfy.sd.load_lora_for_models(model, clip, lora, model_strength, clip_strength)
|
||||
|
||||
# Include clip strength in output if different from model strength and not a Nunchaku model
|
||||
if not is_nunchaku_model and abs(model_strength - clip_strength) > 0.001:
|
||||
loaded_loras.append(f"{lora_name}: {model_strength},{clip_strength}")
|
||||
else:
|
||||
loaded_loras.append(f"{lora_name}: {model_strength}")
|
||||
|
||||
# Add trigger words to collection
|
||||
all_trigger_words.extend(trigger_words)
|
||||
|
||||
# use ',, ' to separate trigger words for group mode
|
||||
trigger_words_text = ",, ".join(all_trigger_words) if all_trigger_words else ""
|
||||
|
||||
# Format loaded_loras with support for both formats
|
||||
formatted_loras = []
|
||||
for item in loaded_loras:
|
||||
parts = item.split(":")
|
||||
lora_name = parts[0]
|
||||
strength_parts = parts[1].strip().split(",")
|
||||
|
||||
if len(strength_parts) > 1:
|
||||
# Different model and clip strengths
|
||||
model_str = strength_parts[0].strip()
|
||||
clip_str = strength_parts[1].strip()
|
||||
formatted_loras.append(f"<lora:{lora_name}:{model_str}:{clip_str}>")
|
||||
else:
|
||||
# Same strength for both
|
||||
model_str = strength_parts[0].strip()
|
||||
formatted_loras.append(f"<lora:{lora_name}:{model_str}>")
|
||||
|
||||
formatted_loras_text = " ".join(formatted_loras)
|
||||
del text
|
||||
clip = kwargs.get("clip", None)
|
||||
lora_entries = _collect_stack_entries(kwargs.get("lora_stack", None))
|
||||
lora_entries.extend(_collect_widget_entries(kwargs))
|
||||
|
||||
nunchaku_model_kind = detect_nunchaku_model_kind(model)
|
||||
if nunchaku_model_kind == "flux":
|
||||
logger.info("Detected Nunchaku Flux model")
|
||||
elif nunchaku_model_kind == "qwen_image":
|
||||
logger.info("Detected Nunchaku Qwen-Image model")
|
||||
|
||||
model, clip, loaded_loras, all_trigger_words = _apply_entries(model, clip, lora_entries, nunchaku_model_kind)
|
||||
trigger_words_text = ",, ".join(all_trigger_words) if all_trigger_words else ""
|
||||
formatted_loras_text = _format_loaded_loras(loaded_loras)
|
||||
return (model, clip, trigger_words_text, formatted_loras_text)
|
||||
|
||||
|
||||
class LoraTextLoaderLM:
|
||||
NAME = "LoRA Text Loader (LoraManager)"
|
||||
CATEGORY = "Lora Manager/loaders"
|
||||
|
||||
|
||||
@classmethod
|
||||
def INPUT_TYPES(cls):
|
||||
return {
|
||||
@@ -143,131 +175,55 @@ class LoraTextLoaderLM:
|
||||
"model": ("MODEL",),
|
||||
"lora_syntax": ("STRING", {
|
||||
"forceInput": True,
|
||||
"tooltip": "Format: <lora:lora_name:strength> separated by spaces or punctuation"
|
||||
"tooltip": "Format: <lora:lora_name:strength> separated by spaces or punctuation",
|
||||
}),
|
||||
},
|
||||
"optional": {
|
||||
"clip": ("CLIP",),
|
||||
"lora_stack": ("LORA_STACK",),
|
||||
}
|
||||
},
|
||||
}
|
||||
|
||||
RETURN_TYPES = ("MODEL", "CLIP", "STRING", "STRING")
|
||||
RETURN_NAMES = ("MODEL", "CLIP", "trigger_words", "loaded_loras")
|
||||
FUNCTION = "load_loras_from_text"
|
||||
|
||||
|
||||
def parse_lora_syntax(self, text):
|
||||
"""Parse LoRA syntax from text input."""
|
||||
# Pattern to match <lora:name:strength> or <lora:name:model_strength:clip_strength>
|
||||
pattern = r'<lora:([^:>]+):([^:>]+)(?::([^:>]+))?>'
|
||||
pattern = r"<lora:([^:>]+):([^:>]+)(?::([^:>]+))?>"
|
||||
matches = re.findall(pattern, text, re.IGNORECASE)
|
||||
|
||||
|
||||
loras = []
|
||||
for match in matches:
|
||||
lora_name = match[0]
|
||||
model_strength = float(match[1])
|
||||
clip_strength = float(match[2]) if match[2] else model_strength
|
||||
|
||||
loras.append({
|
||||
'name': lora_name,
|
||||
'model_strength': model_strength,
|
||||
'clip_strength': clip_strength
|
||||
"name": match[0],
|
||||
"model_strength": model_strength,
|
||||
"clip_strength": float(match[2]) if match[2] else model_strength,
|
||||
})
|
||||
|
||||
return loras
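# Example (illustrative names): parse_lora_syntax("<lora:inkwash:0.8> masterpiece <lora:detail:1.0:0.6>")
#   -> [{"name": "inkwash", "model_strength": 0.8, "clip_strength": 0.8},
#       {"name": "detail", "model_strength": 1.0, "clip_strength": 0.6}]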
|
||||
|
||||
|
||||
def load_loras_from_text(self, model, lora_syntax, clip=None, lora_stack=None):
|
||||
"""Load LoRAs based on text syntax input."""
|
||||
loaded_loras = []
|
||||
all_trigger_words = []
|
||||
|
||||
# Check if model is a Nunchaku Flux model - simplified approach
|
||||
is_nunchaku_model = False
|
||||
|
||||
try:
|
||||
model_wrapper = model.model.diffusion_model
|
||||
# Check if model is a Nunchaku Flux model using only class name
|
||||
if model_wrapper.__class__.__name__ == "ComfyFluxWrapper":
|
||||
is_nunchaku_model = True
|
||||
logger.info("Detected Nunchaku Flux model")
|
||||
except (AttributeError, TypeError):
|
||||
# Not a model with the expected structure
|
||||
pass
|
||||
|
||||
# First process lora_stack if available
|
||||
if lora_stack:
|
||||
for lora_path, model_strength, clip_strength in lora_stack:
|
||||
# Extract lora name and convert to absolute path
|
||||
# lora_stack stores relative paths, but load_torch_file needs absolute paths
|
||||
lora_name = extract_lora_name(lora_path)
|
||||
absolute_lora_path, trigger_words = get_lora_info_absolute(lora_name)
|
||||
|
||||
# Apply the LoRA using the appropriate loader
|
||||
if is_nunchaku_model:
|
||||
# Use our custom function for Flux models
|
||||
model = nunchaku_load_lora(model, lora_path, model_strength)
|
||||
# clip remains unchanged for Nunchaku models
|
||||
else:
|
||||
# Use lower-level API to load LoRA directly without folder_paths validation
|
||||
lora = comfy.utils.load_torch_file(absolute_lora_path, safe_load=True)
|
||||
model, clip = comfy.sd.load_lora_for_models(model, clip, lora, model_strength, clip_strength)
|
||||
|
||||
all_trigger_words.extend(trigger_words)
|
||||
# Add clip strength to output if different from model strength (except for Nunchaku models)
|
||||
if not is_nunchaku_model and abs(model_strength - clip_strength) > 0.001:
|
||||
loaded_loras.append(f"{lora_name}: {model_strength},{clip_strength}")
|
||||
else:
|
||||
loaded_loras.append(f"{lora_name}: {model_strength}")
|
||||
|
||||
# Parse and process LoRAs from text syntax
|
||||
parsed_loras = self.parse_lora_syntax(lora_syntax)
|
||||
for lora in parsed_loras:
|
||||
lora_name = lora['name']
|
||||
model_strength = lora['model_strength']
|
||||
clip_strength = lora['clip_strength']
|
||||
|
||||
# Get lora path and trigger words
|
||||
lora_path, trigger_words = get_lora_info_absolute(lora_name)
|
||||
|
||||
# Apply the LoRA using the appropriate loader
|
||||
if is_nunchaku_model:
|
||||
# For Nunchaku models, use our custom function
|
||||
model = nunchaku_load_lora(model, lora_path, model_strength)
|
||||
# clip remains unchanged
|
||||
else:
|
||||
# Use lower-level API to load LoRA directly without folder_paths validation
|
||||
lora = comfy.utils.load_torch_file(lora_path, safe_load=True)
|
||||
model, clip = comfy.sd.load_lora_for_models(model, clip, lora, model_strength, clip_strength)
|
||||
|
||||
# Include clip strength in output if different from model strength and not a Nunchaku model
|
||||
if not is_nunchaku_model and abs(model_strength - clip_strength) > 0.001:
|
||||
loaded_loras.append(f"{lora_name}: {model_strength},{clip_strength}")
|
||||
else:
|
||||
loaded_loras.append(f"{lora_name}: {model_strength}")
|
||||
|
||||
# Add trigger words to collection
|
||||
all_trigger_words.extend(trigger_words)
|
||||
|
||||
# use ',, ' to separate trigger words for group mode
|
||||
lora_entries = _collect_stack_entries(lora_stack)
|
||||
for lora in self.parse_lora_syntax(lora_syntax):
|
||||
lora_path, trigger_words = get_lora_info_absolute(lora["name"])
|
||||
lora_entries.append({
|
||||
"name": lora["name"],
|
||||
"absolute_path": lora_path,
|
||||
"input_path": lora_path,
|
||||
"model_strength": lora["model_strength"],
|
||||
"clip_strength": lora["clip_strength"],
|
||||
"trigger_words": trigger_words,
|
||||
})
|
||||
|
||||
nunchaku_model_kind = detect_nunchaku_model_kind(model)
|
||||
if nunchaku_model_kind == "flux":
|
||||
logger.info("Detected Nunchaku Flux model")
|
||||
elif nunchaku_model_kind == "qwen_image":
|
||||
logger.info("Detected Nunchaku Qwen-Image model")
|
||||
|
||||
model, clip, loaded_loras, all_trigger_words = _apply_entries(model, clip, lora_entries, nunchaku_model_kind)
|
||||
trigger_words_text = ",, ".join(all_trigger_words) if all_trigger_words else ""
|
||||
|
||||
# Format loaded_loras with support for both formats
|
||||
formatted_loras = []
|
||||
for item in loaded_loras:
|
||||
parts = item.split(":")
|
||||
lora_name = parts[0].strip()
|
||||
strength_parts = parts[1].strip().split(",")
|
||||
|
||||
if len(strength_parts) > 1:
|
||||
# Different model and clip strengths
|
||||
model_str = strength_parts[0].strip()
|
||||
clip_str = strength_parts[1].strip()
|
||||
formatted_loras.append(f"<lora:{lora_name}:{model_str}:{clip_str}>")
|
||||
else:
|
||||
# Same strength for both
|
||||
model_str = strength_parts[0].strip()
|
||||
formatted_loras.append(f"<lora:{lora_name}:{model_str}>")
|
||||
|
||||
formatted_loras_text = " ".join(formatted_loras)
|
||||
|
||||
return (model, clip, trigger_words_text, formatted_loras_text)
|
||||
formatted_loras_text = _format_loaded_loras(loaded_loras)
|
||||
return (model, clip, trigger_words_text, formatted_loras_text)
|
||||
|
||||
py/nodes/lora_stack_combiner.py (new file, 26 lines)
@@ -0,0 +1,26 @@
|
||||
class LoraStackCombinerLM:
|
||||
NAME = "Lora Stack Combiner (LoraManager)"
|
||||
CATEGORY = "Lora Manager/stackers"
|
||||
|
||||
@classmethod
|
||||
def INPUT_TYPES(cls):
|
||||
return {
|
||||
"required": {
|
||||
"lora_stack_a": ("LORA_STACK",),
|
||||
"lora_stack_b": ("LORA_STACK",),
|
||||
},
|
||||
}
|
||||
|
||||
RETURN_TYPES = ("LORA_STACK",)
|
||||
RETURN_NAMES = ("LORA_STACK",)
|
||||
FUNCTION = "combine_stacks"
|
||||
|
||||
def combine_stacks(self, lora_stack_a, lora_stack_b):
|
||||
combined_stack = []
|
||||
|
||||
if lora_stack_a:
|
||||
combined_stack.extend(lora_stack_a)
|
||||
if lora_stack_b:
|
||||
combined_stack.extend(lora_stack_b)
|
||||
|
||||
return (combined_stack,)
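# Illustrative LORA_STACK contents (each entry is (lora_path, model_strength, clip_strength)):
#   combine_stacks([("styles/inkwash.safetensors", 0.8, 0.8)],
#                  [("detail_tweak.safetensors", 1.0, 0.6)])
#   -> ([("styles/inkwash.safetensors", 0.8, 0.8), ("detail_tweak.safetensors", 1.0, 0.6)],)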
|
||||
py/nodes/nunchaku_qwen.py (new file, 570 lines)
@@ -0,0 +1,570 @@
|
||||
from __future__ import annotations
|
||||
|
||||
"""Qwen-Image LoRA support for Nunchaku models.
|
||||
|
||||
Portions of the LoRA mapping/application logic in this file are adapted from
|
||||
ComfyUI-QwenImageLoraLoader by GitHub user ussoewwin:
|
||||
https://github.com/ussoewwin/ComfyUI-QwenImageLoraLoader
|
||||
|
||||
The upstream project is licensed under Apache License 2.0.
|
||||
"""
|
||||
|
||||
import copy
|
||||
import logging
|
||||
import os
|
||||
import re
|
||||
from collections import defaultdict
|
||||
from pathlib import Path
|
||||
from typing import Dict, List, Optional, Tuple, Union
|
||||
|
||||
import comfy.utils # type: ignore
|
||||
import folder_paths # type: ignore
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
from safetensors import safe_open
|
||||
|
||||
from nunchaku.lora.flux.nunchaku_converter import (
|
||||
pack_lowrank_weight,
|
||||
unpack_lowrank_weight,
|
||||
)
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
KEY_MAPPING = [
|
||||
(re.compile(r"^(layers)[._](\d+)[._]attention[._]to[._]([qkv])$"), r"\1.\2.attention.to_qkv", "qkv", lambda m: m.group(3).upper()),
|
||||
(re.compile(r"^(layers)[._](\d+)[._]feed_forward[._](w1|w3)$"), r"\1.\2.feed_forward.net.0.proj", "glu", lambda m: m.group(3)),
|
||||
(re.compile(r"^(layers)[._](\d+)[._]feed_forward[._]w2$"), r"\1.\2.feed_forward.net.2", "regular", None),
|
||||
(re.compile(r"^(layers)[._](\d+)[._](.*)$"), r"\1.\2.\3", "regular", None),
|
||||
(re.compile(r"^(transformer_blocks)[._](\d+)[._]attn[._]to[._]([qkv])$"), r"\1.\2.attn.to_qkv", "qkv", lambda m: m.group(3).upper()),
|
||||
(re.compile(r"^(transformer_blocks)[._](\d+)[._]attn[._](q|k|v)[._]proj$"), r"\1.\2.attn.to_qkv", "qkv", lambda m: m.group(3).upper()),
|
||||
(re.compile(r"^(transformer_blocks)[._](\d+)[._]attn[._]add[._](q|k|v)[._]proj$"), r"\1.\2.attn.add_qkv_proj", "add_qkv", lambda m: m.group(3).upper()),
|
||||
(re.compile(r"^(transformer_blocks)[._](\d+)[._]out[._]proj[._]context$"), r"\1.\2.attn.to_add_out", "regular", None),
|
||||
(re.compile(r"^(transformer_blocks)[._](\d+)[._]out[._]proj$"), r"\1.\2.attn.to_out.0", "regular", None),
|
||||
(re.compile(r"^(transformer_blocks)[._](\d+)[._]attn[._]to[._]out$"), r"\1.\2.attn.to_out.0", "regular", None),
|
||||
(re.compile(r"^(single_transformer_blocks)[._](\d+)[._]attn[._]to[._]([qkv])$"), r"\1.\2.attn.to_qkv", "qkv", lambda m: m.group(3).upper()),
|
||||
(re.compile(r"^(single_transformer_blocks)[._](\d+)[._]attn[._]to[._]out$"), r"\1.\2.attn.to_out", "regular", None),
|
||||
(re.compile(r"^(transformer_blocks)[._](\d+)[._]ff[._]net[._]0(?:[._]proj)?$"), r"\1.\2.mlp_fc1", "regular", None),
|
||||
(re.compile(r"^(transformer_blocks)[._](\d+)[._]ff[._]net[._]2$"), r"\1.\2.mlp_fc2", "regular", None),
|
||||
(re.compile(r"^(transformer_blocks)[._](\d+)[._]ff_context[._]net[._]0(?:[._]proj)?$"), r"\1.\2.mlp_context_fc1", "regular", None),
|
||||
(re.compile(r"^(transformer_blocks)[._](\d+)[._]ff_context[._]net[._]2$"), r"\1.\2.mlp_context_fc2", "regular", None),
|
||||
(re.compile(r"^(transformer_blocks)[._](\d+)[._](img_mlp)[._](net)[._](0)[._](proj)$"), r"\1.\2.\3.\4.\5.\6", "regular", None),
|
||||
(re.compile(r"^(transformer_blocks)[._](\d+)[._](img_mlp)[._](net)[._](2)$"), r"\1.\2.\3.\4.\5", "regular", None),
|
||||
(re.compile(r"^(transformer_blocks)[._](\d+)[._](txt_mlp)[._](net)[._](0)[._](proj)$"), r"\1.\2.\3.\4.\5.\6", "regular", None),
|
||||
(re.compile(r"^(transformer_blocks)[._](\d+)[._](txt_mlp)[._](net)[._](2)$"), r"\1.\2.\3.\4.\5", "regular", None),
|
||||
(re.compile(r"^(transformer_blocks)[._](\d+)[._](img_mod)[._](1)$"), r"\1.\2.\3.\4", "regular", None),
|
||||
(re.compile(r"^(transformer_blocks)[._](\d+)[._](txt_mod)[._](1)$"), r"\1.\2.\3.\4", "regular", None),
|
||||
(re.compile(r"^(single_transformer_blocks)[._](\d+)[._]proj[._]out$"), r"\1.\2.proj_out", "single_proj_out", None),
|
||||
(re.compile(r"^(single_transformer_blocks)[._](\d+)[._]proj[._]mlp$"), r"\1.\2.mlp_fc1", "regular", None),
|
||||
(re.compile(r"^(single_transformer_blocks)[._](\d+)[._]norm[._]linear$"), r"\1.\2.norm.linear", "regular", None),
|
||||
(re.compile(r"^(transformer_blocks)[._](\d+)[._]norm1[._]linear$"), r"\1.\2.norm1.linear", "regular", None),
|
||||
(re.compile(r"^(transformer_blocks)[._](\d+)[._]norm1_context[._]linear$"), r"\1.\2.norm1_context.linear", "regular", None),
|
||||
(re.compile(r"^(img_in)$"), r"\1", "regular", None),
|
||||
(re.compile(r"^(txt_in)$"), r"\1", "regular", None),
|
||||
(re.compile(r"^(proj_out)$"), r"\1", "regular", None),
|
||||
(re.compile(r"^(norm_out)[._](linear)$"), r"\1.\2", "regular", None),
|
||||
(re.compile(r"^(time_text_embed)[._](timestep_embedder)[._](linear_1)$"), r"\1.\2.\3", "regular", None),
|
||||
(re.compile(r"^(time_text_embed)[._](timestep_embedder)[._](linear_2)$"), r"\1.\2.\3", "regular", None),
|
||||
]
|
||||
|
||||
_RE_LORA_SUFFIX = re.compile(r"\.(?P<tag>lora(?:[._](?:A|B|down|up)))(?:\.[^.]+)*\.weight$")
|
||||
_RE_ALPHA_SUFFIX = re.compile(r"\.(?:alpha|lora_alpha)(?:\.[^.]+)*$")
|
||||
|
||||
|
||||
def _rename_layer_underscore_layer_name(old_name: str) -> str:
|
||||
rules = [
|
||||
(r"_(\d+)_attn_to_out_(\d+)", r".\1.attn.to_out.\2"),
|
||||
(r"_(\d+)_img_mlp_net_(\d+)_proj", r".\1.img_mlp.net.\2.proj"),
|
||||
(r"_(\d+)_txt_mlp_net_(\d+)_proj", r".\1.txt_mlp.net.\2.proj"),
|
||||
(r"_(\d+)_img_mlp_net_(\d+)", r".\1.img_mlp.net.\2"),
|
||||
(r"_(\d+)_txt_mlp_net_(\d+)", r".\1.txt_mlp.net.\2"),
|
||||
(r"_(\d+)_img_mod_(\d+)", r".\1.img_mod.\2"),
|
||||
(r"_(\d+)_txt_mod_(\d+)", r".\1.txt_mod.\2"),
|
||||
(r"_(\d+)_attn_", r".\1.attn."),
|
||||
]
|
||||
new_name = old_name
|
||||
for pattern, replacement in rules:
|
||||
new_name = re.sub(pattern, replacement, new_name)
|
||||
return new_name
|
||||
|
||||
|
||||
def _is_indexable_module(module):
|
||||
return isinstance(module, (nn.ModuleList, nn.Sequential, list, tuple))
|
||||
|
||||
|
||||
def _get_module_by_name(model: nn.Module, name: str) -> Optional[nn.Module]:
|
||||
if not name:
|
||||
return model
|
||||
module = model
|
||||
for part in name.split("."):
|
||||
if not part:
|
||||
continue
|
||||
if hasattr(module, part):
|
||||
module = getattr(module, part)
|
||||
elif part.isdigit() and _is_indexable_module(module):
|
||||
try:
|
||||
module = module[int(part)]
|
||||
except (IndexError, TypeError):
|
||||
return None
|
||||
else:
|
||||
return None
|
||||
return module
|
||||
|
||||
|
||||
def _resolve_module_name(model: nn.Module, name: str) -> Tuple[str, Optional[nn.Module]]:
|
||||
module = _get_module_by_name(model, name)
|
||||
if module is not None:
|
||||
return name, module
|
||||
|
||||
replacements = [
|
||||
(".attn.to_out.0", ".attn.to_out"),
|
||||
(".attention.to_qkv", ".attention.qkv"),
|
||||
(".attention.to_out.0", ".attention.out"),
|
||||
(".feed_forward.net.0.proj", ".feed_forward.w13"),
|
||||
(".feed_forward.net.2", ".feed_forward.w2"),
|
||||
(".ff.net.0.proj", ".mlp_fc1"),
|
||||
(".ff.net.2", ".mlp_fc2"),
|
||||
(".ff_context.net.0.proj", ".mlp_context_fc1"),
|
||||
(".ff_context.net.2", ".mlp_context_fc2"),
|
||||
]
|
||||
for src, dst in replacements:
|
||||
if src in name:
|
||||
alt = name.replace(src, dst)
|
||||
module = _get_module_by_name(model, alt)
|
||||
if module is not None:
|
||||
return alt, module
|
||||
return name, None
|
||||
|
||||
|
||||
def _classify_and_map_key(key: str) -> Optional[Tuple[str, str, Optional[str], str]]:
|
||||
normalized = key
|
||||
if normalized.startswith("transformer."):
|
||||
normalized = normalized[len("transformer."):]
|
||||
if normalized.startswith("diffusion_model."):
|
||||
normalized = normalized[len("diffusion_model."):]
|
||||
if normalized.startswith("lora_unet_"):
|
||||
normalized = _rename_layer_underscore_layer_name(normalized[len("lora_unet_"):])
|
||||
|
||||
match = _RE_LORA_SUFFIX.search(normalized)
|
||||
if match:
|
||||
tag = match.group("tag")
|
||||
base = normalized[:match.start()]
|
||||
ab = "A" if ("lora_A" in tag or tag.endswith(".A") or "down" in tag) else "B"
|
||||
else:
|
||||
match = _RE_ALPHA_SUFFIX.search(normalized)
|
||||
if not match:
|
||||
return None
|
||||
base = normalized[:match.start()]
|
||||
ab = "alpha"
|
||||
|
||||
for pattern, template, group, comp_fn in KEY_MAPPING:
|
||||
key_match = pattern.match(base)
|
||||
if key_match:
|
||||
return group, key_match.expand(template), comp_fn(key_match) if comp_fn else None, ab
|
||||
return None
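
A hedged worked example of the key flow above; the input key is invented, and the expected result assumes no KEY_MAPPING entry outside this excerpt matches the base first.

```python
# Invented diffusers-style key traced through prefix stripping, the LoRA suffix
# split, and the transformer_blocks ff.net.0(.proj) -> mlp_fc1 mapping shown above.
parsed = _classify_and_map_key(
    "diffusion_model.transformer_blocks.3.ff.net.0.proj.lora_A.weight"
)
# expected: ("regular", "transformer_blocks.3.mlp_fc1", None, "A")
```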
|
||||
|
||||
|
||||
def _detect_lora_format(lora_state_dict: Dict[str, torch.Tensor]) -> bool:
|
||||
standard_patterns = (
|
||||
".lora_up.",
|
||||
".lora_down.",
|
||||
".lora_A.",
|
||||
".lora_B.",
|
||||
".lora.up.",
|
||||
".lora.down.",
|
||||
".lora.A.",
|
||||
".lora.B.",
|
||||
)
|
||||
return any(pattern in key for key in lora_state_dict for pattern in standard_patterns)
|
||||
|
||||
|
||||
def _load_lora_state_dict(path_or_dict: Union[str, Path, Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
|
||||
if isinstance(path_or_dict, dict):
|
||||
return path_or_dict
|
||||
path = Path(path_or_dict)
|
||||
if path.suffix == ".safetensors":
|
||||
state_dict: Dict[str, torch.Tensor] = {}
|
||||
with safe_open(path, framework="pt", device="cpu") as handle:
|
||||
for key in handle.keys():
|
||||
state_dict[key] = handle.get_tensor(key)
|
||||
return state_dict
|
||||
return comfy.utils.load_torch_file(str(path), safe_load=True)
|
||||
|
||||
|
||||
def _fuse_glu_lora(glu_weights: Dict[str, torch.Tensor]) -> Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]]:
|
||||
if "w1_A" not in glu_weights or "w3_A" not in glu_weights:
|
||||
return None, None, None
|
||||
a_w1, b_w1 = glu_weights["w1_A"], glu_weights["w1_B"]
|
||||
a_w3, b_w3 = glu_weights["w3_A"], glu_weights["w3_B"]
|
||||
if a_w1.shape[1] != a_w3.shape[1]:
|
||||
return None, None, None
|
||||
a_fused = torch.cat([a_w1, a_w3], dim=0)
|
||||
out1, out3 = b_w1.shape[0], b_w3.shape[0]
|
||||
rank1, rank3 = b_w1.shape[1], b_w3.shape[1]
|
||||
b_fused = torch.zeros(out1 + out3, rank1 + rank3, dtype=b_w1.dtype, device=b_w1.device)
|
||||
b_fused[:out1, :rank1] = b_w1
|
||||
b_fused[out1:, rank1:] = b_w3
|
||||
return a_fused, b_fused, glu_weights.get("w1_alpha")
|
||||
|
||||
|
||||
def _fuse_qkv_lora(qkv_weights: Dict[str, torch.Tensor], model: Optional[nn.Module] = None, base_key: Optional[str] = None) -> Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]]:
|
||||
required_keys = ["Q_A", "Q_B", "K_A", "K_B", "V_A", "V_B"]
|
||||
if not all(key in qkv_weights for key in required_keys):
|
||||
return None, None, None
|
||||
a_q, a_k, a_v = qkv_weights["Q_A"], qkv_weights["K_A"], qkv_weights["V_A"]
|
||||
b_q, b_k, b_v = qkv_weights["Q_B"], qkv_weights["K_B"], qkv_weights["V_B"]
|
||||
if not (a_q.shape == a_k.shape == a_v.shape):
|
||||
return None, None, None
|
||||
if not (b_q.shape[1] == b_k.shape[1] == b_v.shape[1]):
|
||||
return None, None, None
|
||||
|
||||
out_features = None
|
||||
if model is not None and base_key is not None:
|
||||
_, module = _resolve_module_name(model, base_key)
|
||||
out_features = getattr(module, "out_features", None) if module is not None else None
|
||||
|
||||
alpha_fused = None
|
||||
alpha_q = qkv_weights.get("Q_alpha")
|
||||
alpha_k = qkv_weights.get("K_alpha")
|
||||
alpha_v = qkv_weights.get("V_alpha")
|
||||
if alpha_q is not None and alpha_k is not None and alpha_v is not None and alpha_q.item() == alpha_k.item() == alpha_v.item():
|
||||
alpha_fused = alpha_q
|
||||
|
||||
a_fused = torch.cat([a_q, a_k, a_v], dim=0)
|
||||
rank = b_q.shape[1]
|
||||
out_q, out_k, out_v = b_q.shape[0], b_k.shape[0], b_v.shape[0]
|
||||
total_out = out_features if out_features is not None else out_q + out_k + out_v
|
||||
b_fused = torch.zeros(total_out, 3 * rank, dtype=b_q.dtype, device=b_q.device)
|
||||
b_fused[:out_q, :rank] = b_q
|
||||
b_fused[out_q:out_q + out_k, rank:2 * rank] = b_k
|
||||
b_fused[out_q + out_k:out_q + out_k + out_v, 2 * rank:] = b_v
|
||||
return a_fused, b_fused, alpha_fused
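
A small self-contained check, with invented shapes, of why the block-diagonal layout used by _fuse_qkv_lora is sound: the fused low-rank pair reproduces the stacked per-projection deltas.

```python
import torch

r, d_in, d_out = 4, 16, 16
a = {k: torch.randn(r, d_in) for k in ("Q", "K", "V")}
b = {k: torch.randn(d_out, r) for k in ("Q", "K", "V")}

a_fused = torch.cat([a["Q"], a["K"], a["V"]], dim=0)            # (3r, d_in)
b_fused = torch.zeros(3 * d_out, 3 * r)                         # block-diagonal B
for i, k in enumerate(("Q", "K", "V")):
    b_fused[i * d_out:(i + 1) * d_out, i * r:(i + 1) * r] = b[k]

expected = torch.cat([b[k] @ a[k] for k in ("Q", "K", "V")], dim=0)
assert torch.allclose(b_fused @ a_fused, expected, atol=1e-5)
```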
|
||||
|
||||
|
||||
def _handle_proj_out_split(lora_dict: Dict[str, Dict[str, torch.Tensor]], base_key: str, model: nn.Module) -> Tuple[Dict[str, Tuple[torch.Tensor, torch.Tensor, Optional[torch.Tensor]]], List[str]]:
|
||||
result: Dict[str, Tuple[torch.Tensor, torch.Tensor, Optional[torch.Tensor]]] = {}
|
||||
consumed: List[str] = []
|
||||
match = re.search(r"single_transformer_blocks\.(\d+)", base_key)
|
||||
if not match or base_key not in lora_dict:
|
||||
return result, consumed
|
||||
block_idx = match.group(1)
|
||||
block = _get_module_by_name(model, f"single_transformer_blocks.{block_idx}")
|
||||
if block is None:
|
||||
return result, consumed
|
||||
a_full = lora_dict[base_key].get("A")
|
||||
b_full = lora_dict[base_key].get("B")
|
||||
alpha = lora_dict[base_key].get("alpha")
|
||||
attn_to_out = getattr(getattr(block, "attn", None), "to_out", None)
|
||||
mlp_fc2 = getattr(block, "mlp_fc2", None)
|
||||
if a_full is None or b_full is None or attn_to_out is None or mlp_fc2 is None:
|
||||
return result, consumed
|
||||
attn_in = getattr(attn_to_out, "in_features", None)
|
||||
mlp_in = getattr(mlp_fc2, "in_features", None)
|
||||
if attn_in is None or mlp_in is None or a_full.shape[1] != attn_in + mlp_in:
|
||||
return result, consumed
|
||||
result[f"single_transformer_blocks.{block_idx}.attn.to_out"] = (a_full[:, :attn_in], b_full.clone(), alpha)
|
||||
result[f"single_transformer_blocks.{block_idx}.mlp_fc2"] = (a_full[:, attn_in:], b_full.clone(), alpha)
|
||||
consumed.append(base_key)
|
||||
return result, consumed
|
||||
|
||||
|
||||
def _apply_lora_to_module(module: nn.Module, a_tensor: torch.Tensor, b_tensor: torch.Tensor, module_name: str, model: nn.Module) -> None:
|
||||
if not hasattr(module, "in_features") or not hasattr(module, "out_features"):
|
||||
raise ValueError(f"{module_name}: unsupported module without in/out features")
|
||||
if a_tensor.shape[1] != module.in_features or b_tensor.shape[0] != module.out_features:
|
||||
raise ValueError(f"{module_name}: LoRA shape mismatch")
|
||||
|
||||
if module.__class__.__name__ == "AWQW4A16Linear" and hasattr(module, "qweight"):
|
||||
if not hasattr(module, "_lora_original_forward"):
|
||||
module._lora_original_forward = module.forward
|
||||
if not hasattr(module, "_nunchaku_lora_bundle"):
|
||||
module._nunchaku_lora_bundle = []
|
||||
module._nunchaku_lora_bundle.append((a_tensor, b_tensor))
|
||||
|
||||
def _awq_lora_forward(x, *args, **kwargs):
|
||||
out = module._lora_original_forward(x, *args, **kwargs)
|
||||
x_flat = x.reshape(-1, module.in_features)
|
||||
for local_a, local_b in module._nunchaku_lora_bundle:
|
||||
local_a = local_a.to(device=out.device, dtype=out.dtype)
|
||||
local_b = local_b.to(device=out.device, dtype=out.dtype)
|
||||
lora_term = (x_flat @ local_a.transpose(0, 1)) @ local_b.transpose(0, 1)
|
||||
try:
|
||||
out = out + lora_term.reshape(out.shape)
|
||||
except Exception:
|
||||
pass
|
||||
return out
|
||||
|
||||
module.forward = _awq_lora_forward
|
||||
if not hasattr(model, "_lora_slots"):
|
||||
model._lora_slots = {}
|
||||
model._lora_slots[module_name] = {"type": "awq_w4a16"}
|
||||
return
|
||||
|
||||
if hasattr(module, "proj_down") and hasattr(module, "proj_up"):
|
||||
proj_down = unpack_lowrank_weight(module.proj_down.data, down=True)
|
||||
proj_up = unpack_lowrank_weight(module.proj_up.data, down=False)
|
||||
base_rank = proj_down.shape[0] if proj_down.shape[1] == module.in_features else proj_down.shape[1]
|
||||
if proj_down.shape[1] == module.in_features:
|
||||
updated_down = torch.cat([proj_down, a_tensor], dim=0)
|
||||
axis_down = 0
|
||||
else:
|
||||
updated_down = torch.cat([proj_down, a_tensor.T], dim=1)
|
||||
axis_down = 1
|
||||
updated_up = torch.cat([proj_up, b_tensor], dim=1)
|
||||
module.proj_down.data = pack_lowrank_weight(updated_down, down=True)
|
||||
module.proj_up.data = pack_lowrank_weight(updated_up, down=False)
|
||||
module.rank = base_rank + a_tensor.shape[0]
|
||||
if not hasattr(model, "_lora_slots"):
|
||||
model._lora_slots = {}
|
||||
model._lora_slots[module_name] = {
|
||||
"type": "nunchaku",
|
||||
"base_rank": base_rank,
|
||||
"axis_down": axis_down,
|
||||
}
|
||||
return
|
||||
|
||||
if isinstance(module, nn.Linear):
|
||||
if not hasattr(model, "_lora_slots"):
|
||||
model._lora_slots = {}
|
||||
if module_name not in model._lora_slots:
|
||||
model._lora_slots[module_name] = {
|
||||
"type": "linear",
|
||||
"original_weight": module.weight.detach().cpu().clone(),
|
||||
}
|
||||
module.weight.data.add_((b_tensor @ a_tensor).to(dtype=module.weight.dtype, device=module.weight.device))
|
||||
return
|
||||
|
||||
raise ValueError(f"{module_name}: unsupported module type {type(module)}")
|
||||
|
||||
|
||||
def reset_lora_v2(model: nn.Module) -> None:
|
||||
slots = getattr(model, "_lora_slots", None)
|
||||
if not slots:
|
||||
return
|
||||
for name, info in list(slots.items()):
|
||||
module = _get_module_by_name(model, name)
|
||||
if module is None:
|
||||
continue
|
||||
module_type = info.get("type", "nunchaku")
|
||||
if module_type == "nunchaku":
|
||||
base_rank = info["base_rank"]
|
||||
proj_down = unpack_lowrank_weight(module.proj_down.data, down=True)
|
||||
proj_up = unpack_lowrank_weight(module.proj_up.data, down=False)
|
||||
if info.get("axis_down", 0) == 0:
|
||||
proj_down = proj_down[:base_rank, :].clone()
|
||||
else:
|
||||
proj_down = proj_down[:, :base_rank].clone()
|
||||
proj_up = proj_up[:, :base_rank].clone()
|
||||
module.proj_down.data = pack_lowrank_weight(proj_down, down=True)
|
||||
module.proj_up.data = pack_lowrank_weight(proj_up, down=False)
|
||||
module.rank = base_rank
|
||||
elif module_type == "linear" and "original_weight" in info:
|
||||
module.weight.data.copy_(info["original_weight"].to(device=module.weight.device, dtype=module.weight.dtype))
|
||||
elif module_type == "awq_w4a16":
|
||||
if hasattr(module, "_lora_original_forward"):
|
||||
module.forward = module._lora_original_forward
|
||||
for attr in ("_lora_original_forward", "_nunchaku_lora_bundle"):
|
||||
if hasattr(module, attr):
|
||||
delattr(module, attr)
|
||||
model._lora_slots = {}
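
A minimal apply/reset round-trip on a plain nn.Linear, the simplest branch of _apply_lora_to_module; the tensors and module name are invented, and the strength/alpha scaling is assumed to be folded into B beforehand, as compose_loras_v2 does.

```python
import torch
from torch import nn

layer = nn.Linear(8, 8, bias=False)
model = nn.Sequential(layer)                  # module name "0" resolves via _get_module_by_name
w0 = layer.weight.detach().clone()

a = torch.randn(2, 8)                         # rank-2 LoRA down projection
b = torch.randn(8, 2) * 0.5                   # up projection with scaling pre-applied
_apply_lora_to_module(layer, a, b, "0", model)
assert torch.allclose(layer.weight, w0 + b @ a, atol=1e-6)

reset_lora_v2(model)                          # restores the saved original weight
assert torch.allclose(layer.weight, w0, atol=1e-6)
```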
|
||||
|
||||
|
||||
def compose_loras_v2(model: nn.Module, lora_configs: List[Tuple[Union[str, Path, Dict[str, torch.Tensor]], float]], apply_awq_mod: bool = True) -> bool:
|
||||
del apply_awq_mod # retained for interface compatibility
|
||||
reset_lora_v2(model)
|
||||
aggregated_weights: Dict[str, List[Dict[str, object]]] = defaultdict(list)
|
||||
saw_supported_format = False
|
||||
unresolved_targets = 0
|
||||
|
||||
for index, (path_or_dict, strength) in enumerate(lora_configs):
|
||||
if abs(strength) < 1e-5:
|
||||
continue
|
||||
lora_name = str(path_or_dict) if not isinstance(path_or_dict, dict) else f"lora_{index}"
|
||||
lora_state_dict = _load_lora_state_dict(path_or_dict)
|
||||
if not lora_state_dict or not _detect_lora_format(lora_state_dict):
|
||||
logger.warning("Skipping unsupported Qwen LoRA: %s", lora_name)
|
||||
continue
|
||||
saw_supported_format = True
|
||||
|
||||
grouped_weights: Dict[str, Dict[str, torch.Tensor]] = defaultdict(dict)
|
||||
for key, value in lora_state_dict.items():
|
||||
parsed = _classify_and_map_key(key)
|
||||
if parsed is None:
|
||||
continue
|
||||
group, base_key, component, ab = parsed
|
||||
if component and ab:
|
||||
grouped_weights[base_key][f"{component}_{ab}"] = value
|
||||
else:
|
||||
grouped_weights[base_key][ab] = value
|
||||
|
||||
processed_groups: Dict[str, Tuple[torch.Tensor, torch.Tensor, Optional[torch.Tensor]]] = {}
|
||||
handled: set[str] = set()
|
||||
for base_key, weights in grouped_weights.items():
|
||||
if base_key in handled:
|
||||
continue
|
||||
a_tensor = b_tensor = alpha = None
|
||||
if "qkv" in base_key or "add_qkv_proj" in base_key:
|
||||
a_tensor, b_tensor, alpha = _fuse_qkv_lora(weights, model=model, base_key=base_key)
|
||||
elif "w1_A" in weights or "w3_A" in weights:
|
||||
a_tensor, b_tensor, alpha = _fuse_glu_lora(weights)
|
||||
elif ".proj_out" in base_key and "single_transformer_blocks" in base_key:
|
||||
split_map, consumed = _handle_proj_out_split(grouped_weights, base_key, model)
|
||||
processed_groups.update(split_map)
|
||||
handled.update(consumed)
|
||||
continue
|
||||
else:
|
||||
a_tensor, b_tensor, alpha = weights.get("A"), weights.get("B"), weights.get("alpha")
|
||||
if a_tensor is not None and b_tensor is not None:
|
||||
processed_groups[base_key] = (a_tensor, b_tensor, alpha)
|
||||
|
||||
for module_name, (a_tensor, b_tensor, alpha) in processed_groups.items():
|
||||
aggregated_weights[module_name].append({
|
||||
"A": a_tensor,
|
||||
"B": b_tensor,
|
||||
"alpha": alpha,
|
||||
"strength": strength,
|
||||
})
|
||||
|
||||
for module_name, weight_list in aggregated_weights.items():
|
||||
resolved_name, module = _resolve_module_name(model, module_name)
|
||||
if module is None:
|
||||
logger.warning("Skipping unresolved Qwen LoRA target: %s", module_name)
|
||||
unresolved_targets += 1
|
||||
continue
|
||||
all_a = []
|
||||
all_b_scaled = []
|
||||
for item in weight_list:
|
||||
a_tensor = item["A"]
|
||||
b_tensor = item["B"]
|
||||
alpha = item["alpha"]
|
||||
strength = float(item["strength"])
|
||||
rank = a_tensor.shape[0]
|
||||
scale = strength * ((alpha / rank) if alpha is not None else 1.0)
|
||||
if module.__class__.__name__ == "AWQW4A16Linear" and hasattr(module, "qweight"):
|
||||
target_dtype = torch.float16
|
||||
target_device = module.qweight.device
|
||||
elif hasattr(module, "proj_down"):
|
||||
target_dtype = module.proj_down.dtype
|
||||
target_device = module.proj_down.device
|
||||
elif hasattr(module, "weight"):
|
||||
target_dtype = module.weight.dtype
|
||||
target_device = module.weight.device
|
||||
else:
|
||||
target_dtype = torch.float16
|
||||
target_device = "cuda" if torch.cuda.is_available() else "cpu"
|
||||
all_a.append(a_tensor.to(dtype=target_dtype, device=target_device))
|
||||
all_b_scaled.append((b_tensor * scale).to(dtype=target_dtype, device=target_device))
|
||||
if not all_a:
|
||||
continue
|
||||
_apply_lora_to_module(module, torch.cat(all_a, dim=0), torch.cat(all_b_scaled, dim=1), resolved_name, model)
|
||||
|
||||
slot_count = len(getattr(model, "_lora_slots", {}) or {})
|
||||
logger.info(
|
||||
"Qwen LoRA composition finished: requested=%d supported=%s applied_targets=%d unresolved=%d",
|
||||
len(lora_configs),
|
||||
saw_supported_format,
|
||||
slot_count,
|
||||
unresolved_targets,
|
||||
)
|
||||
return saw_supported_format
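
A hypothetical call sketch; the paths and strengths are illustrative only, and the transformer is assumed to be a Nunchaku Qwen-Image model whose module names the mapping above can resolve.

```python
applied = compose_loras_v2(
    transformer,
    [
        ("loras/detail_tweaker.safetensors", 0.8),
        ("loras/style_shift.safetensors", 0.5),
    ],
)
if not applied:
    logger.warning("No LoRA in a supported format was applied")
```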
|
||||
|
||||
|
||||
class ComfyQwenImageWrapperLM(nn.Module):
|
||||
def __init__(self, model: nn.Module, config=None, apply_awq_mod: bool = True):
|
||||
super().__init__()
|
||||
self.model = model
|
||||
self.config = {} if config is None else config
|
||||
self.dtype = next(model.parameters()).dtype
|
||||
self.loras: List[Tuple[Union[str, Path, Dict[str, torch.Tensor]], float]] = []
|
||||
self._applied_loras: Optional[List[Tuple[Union[str, Path, Dict[str, torch.Tensor]], float]]] = None
|
||||
self.apply_awq_mod = apply_awq_mod
|
||||
|
||||
def __getattr__(self, name):
|
||||
try:
|
||||
inner = object.__getattribute__(self, "_modules").get("model")
|
||||
except (AttributeError, KeyError):
|
||||
inner = None
|
||||
if inner is None:
|
||||
raise AttributeError(f"{type(self).__name__!s} has no attribute {name}")
|
||||
if name == "model":
|
||||
return inner
|
||||
return getattr(inner, name)
|
||||
|
||||
def process_img(self, *args, **kwargs):
|
||||
return self.model.process_img(*args, **kwargs)
|
||||
|
||||
def _ensure_composed(self):
|
||||
if self._applied_loras != self.loras or (not self.loras and getattr(self.model, "_lora_slots", None)):
|
||||
is_supported_format = compose_loras_v2(self.model, self.loras, apply_awq_mod=self.apply_awq_mod)
|
||||
self._applied_loras = self.loras.copy()
|
||||
has_slots = bool(getattr(self.model, "_lora_slots", None))
|
||||
if self.loras and is_supported_format and not has_slots:
|
||||
logger.warning("Qwen LoRA compose produced 0 target modules. Resetting and retrying once.")
|
||||
reset_lora_v2(self.model)
|
||||
compose_loras_v2(self.model, self.loras, apply_awq_mod=self.apply_awq_mod)
|
||||
has_slots = bool(getattr(self.model, "_lora_slots", None))
|
||||
logger.info("Qwen LoRA retry result: applied_targets=%d", len(getattr(self.model, "_lora_slots", {}) or {}))
|
||||
|
||||
offload_manager = getattr(self.model, "offload_manager", None)
|
||||
if offload_manager is not None:
|
||||
offload_settings = {
|
||||
"num_blocks_on_gpu": getattr(offload_manager, "num_blocks_on_gpu", 1),
|
||||
"use_pin_memory": getattr(offload_manager, "use_pin_memory", False),
|
||||
}
|
||||
logger.info(
|
||||
"Rebuilding Qwen offload manager after LoRA compose: num_blocks_on_gpu=%s use_pin_memory=%s",
|
||||
offload_settings["num_blocks_on_gpu"],
|
||||
offload_settings["use_pin_memory"],
|
||||
)
|
||||
self.model.set_offload(False)
|
||||
self.model.set_offload(True, **offload_settings)
|
||||
|
||||
def forward(self, *args, **kwargs):
|
||||
self._ensure_composed()
|
||||
return self.model(*args, **kwargs)
|
||||
|
||||
|
||||
def _get_qwen_wrapper_and_transformer(model):
|
||||
model_wrapper = model.model.diffusion_model
|
||||
if hasattr(model_wrapper, "model") and hasattr(model_wrapper, "loras"):
|
||||
transformer = model_wrapper.model
|
||||
if transformer.__class__.__name__.endswith("NunchakuQwenImageTransformer2DModel"):
|
||||
return model_wrapper, transformer
|
||||
if model_wrapper.__class__.__name__.endswith("NunchakuQwenImageTransformer2DModel"):
|
||||
wrapped_model = ComfyQwenImageWrapperLM(model_wrapper, getattr(model_wrapper, "config", {}))
|
||||
model.model.diffusion_model = wrapped_model
|
||||
return wrapped_model, wrapped_model.model
|
||||
raise TypeError(f"This LoRA loader only works with Nunchaku Qwen Image models, but got {type(model_wrapper).__name__}.")
|
||||
|
||||
|
||||
def nunchaku_load_qwen_loras(model, lora_configs: List[Tuple[str, float]], apply_awq_mod: bool = True):
|
||||
model_wrapper, transformer = _get_qwen_wrapper_and_transformer(model)
|
||||
model_wrapper.apply_awq_mod = apply_awq_mod
|
||||
|
||||
saved_config = None
|
||||
if hasattr(model, "model") and hasattr(model.model, "model_config"):
|
||||
saved_config = model.model.model_config
|
||||
model.model.model_config = None
|
||||
|
||||
model_wrapper.model = None
|
||||
try:
|
||||
ret_model = copy.deepcopy(model)
|
||||
finally:
|
||||
if saved_config is not None:
|
||||
model.model.model_config = saved_config
|
||||
model_wrapper.model = transformer
|
||||
|
||||
ret_model_wrapper = ret_model.model.diffusion_model
|
||||
if saved_config is not None:
|
||||
ret_model.model.model_config = saved_config
|
||||
ret_model_wrapper.model = transformer
|
||||
ret_model_wrapper.apply_awq_mod = apply_awq_mod
|
||||
ret_model_wrapper.loras = list(getattr(model_wrapper, "loras", []))
|
||||
|
||||
for lora_name, lora_strength in lora_configs:
|
||||
lora_path = lora_name if os.path.isfile(lora_name) else folder_paths.get_full_path("loras", lora_name)
|
||||
if not lora_path or not os.path.isfile(lora_path):
|
||||
logger.warning("Skipping Qwen LoRA '%s' because it could not be found", lora_name)
|
||||
continue
|
||||
ret_model_wrapper.loras.append((lora_path, lora_strength))
|
||||
|
||||
return ret_model
|
||||
@@ -1,15 +1,38 @@
|
||||
from __future__ import annotations
|
||||
|
||||
from typing import Any
|
||||
import inspect
|
||||
|
||||
from ..services.wildcard_service import (
|
||||
contains_dynamic_syntax,
|
||||
get_wildcard_service,
|
||||
is_trigger_words_input,
|
||||
)
|
||||
|
||||
class _AllContainer:
|
||||
"""Container that accepts any key for dynamic input validation."""
|
||||
|
||||
def __contains__(self, item):
|
||||
return True
|
||||
class _PromptOptionalInputs:
|
||||
"""Lookup that preserves explicit optional inputs and dynamic trigger slots."""
|
||||
|
||||
def __getitem__(self, key):
|
||||
return ("STRING", {"forceInput": True})
|
||||
def __init__(self, explicit_inputs: dict[str, tuple[str, dict[str, Any]]]) -> None:
|
||||
self._explicit_inputs = explicit_inputs
|
||||
|
||||
def __contains__(self, item: object) -> bool:
|
||||
if not isinstance(item, str):
|
||||
return False
|
||||
return item in self._explicit_inputs or is_trigger_words_input(item)
|
||||
|
||||
def __getitem__(self, key: str) -> tuple[str, dict[str, Any]]:
|
||||
if key in self._explicit_inputs:
|
||||
return self._explicit_inputs[key]
|
||||
if is_trigger_words_input(key):
|
||||
return (
|
||||
"STRING",
|
||||
{
|
||||
"forceInput": True,
|
||||
"tooltip": "Trigger words to prepend. Connect to add more inputs.",
|
||||
},
|
||||
)
|
||||
raise KeyError(key)
|
||||
|
||||
|
||||
class PromptLM:
|
||||
@@ -20,12 +43,19 @@ class PromptLM:
|
||||
DESCRIPTION = (
|
||||
"Encodes a text prompt using a CLIP model into an embedding that can be used "
|
||||
"to guide the diffusion model towards generating specific images. "
|
||||
"Supports dynamic trigger words inputs."
|
||||
"Supports dynamic trigger words inputs and runtime wildcard expansion."
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def INPUT_TYPES(cls):
|
||||
dyn_inputs = {
|
||||
optional_inputs: dict[str, tuple[str, dict[str, Any]]] = {
|
||||
"seed": (
|
||||
"INT",
|
||||
{
|
||||
"forceInput": True,
|
||||
"tooltip": "Optional seed for wildcard generation. Leave unconnected for non-deterministic wildcard expansion.",
|
||||
},
|
||||
),
|
||||
"trigger_words1": (
|
||||
"STRING",
|
||||
{
|
||||
@@ -35,10 +65,9 @@ class PromptLM:
|
||||
),
|
||||
}
|
||||
|
||||
# Bypass validation for dynamic inputs during graph execution
|
||||
stack = inspect.stack()
|
||||
if len(stack) > 2 and stack[2].function == "get_input_info":
|
||||
dyn_inputs = _AllContainer()
|
||||
optional_inputs = _PromptOptionalInputs(optional_inputs) # type: ignore[assignment]
|
||||
|
||||
return {
|
||||
"required": {
|
||||
@@ -46,8 +75,8 @@ class PromptLM:
|
||||
"AUTOCOMPLETE_TEXT_PROMPT,STRING",
|
||||
{
|
||||
"widgetType": "AUTOCOMPLETE_TEXT_PROMPT",
|
||||
"placeholder": "Enter prompt... /char, /artist for quick tag search",
|
||||
"tooltip": "The text to be encoded.",
|
||||
"placeholder": "Enter prompt... /character, /artist, /wildcard for quick search",
|
||||
"tooltip": "The text to be encoded. Wildcard references inserted with /wildcard are expanded at runtime.",
|
||||
},
|
||||
),
|
||||
"clip": (
|
||||
@@ -55,7 +84,7 @@ class PromptLM:
|
||||
{"tooltip": "The CLIP model used for encoding the text."},
|
||||
),
|
||||
},
|
||||
"optional": dyn_inputs,
|
||||
"optional": optional_inputs,
|
||||
}
|
||||
|
||||
RETURN_TYPES = ("CONDITIONING", "STRING")
|
||||
@@ -65,20 +94,39 @@ class PromptLM:
|
||||
)
|
||||
FUNCTION = "encode"
|
||||
|
||||
def encode(self, text: str, clip: Any, **kwargs):
|
||||
# Collect all trigger words from dynamic inputs
|
||||
@classmethod
|
||||
def IS_CHANGED(
|
||||
cls,
|
||||
text: str,
|
||||
clip: Any | None = None,
|
||||
seed: int | None = None,
|
||||
**kwargs: Any,
|
||||
):
|
||||
del clip, kwargs
|
||||
if contains_dynamic_syntax(text) and seed is None:
|
||||
return float("NaN")
|
||||
return False
|
||||
|
||||
def encode(
|
||||
self,
|
||||
text: str,
|
||||
clip: Any,
|
||||
seed: int | None = None,
|
||||
**kwargs: Any,
|
||||
):
|
||||
expanded_text = get_wildcard_service().expand_text(text, seed=seed)
|
||||
|
||||
trigger_words = []
|
||||
for key, value in kwargs.items():
|
||||
if key.startswith("trigger_words") and value:
|
||||
if is_trigger_words_input(key) and value:
|
||||
trigger_words.append(value)
|
||||
|
||||
# Build final prompt
|
||||
if trigger_words:
|
||||
prompt = ", ".join(trigger_words + [text])
|
||||
prompt = ", ".join(trigger_words + [expanded_text])
|
||||
else:
|
||||
prompt = text
|
||||
prompt = expanded_text
|
||||
|
||||
from nodes import CLIPTextEncode # type: ignore
|
||||
|
||||
conditioning = CLIPTextEncode().encode(clip, prompt)[0]
|
||||
return (conditioning, prompt)
|
||||
return (conditioning, prompt)
|
||||
|
||||
@@ -1,12 +1,17 @@
|
||||
import json
|
||||
import os
|
||||
import re
|
||||
import time
|
||||
import uuid
|
||||
from typing import Any, Dict, Optional
|
||||
import numpy as np
|
||||
import folder_paths # type: ignore
|
||||
from ..services.service_registry import ServiceRegistry
|
||||
from ..metadata_collector.metadata_processor import MetadataProcessor
|
||||
from ..metadata_collector import get_metadata
|
||||
from ..utils.constants import CARD_PREVIEW_WIDTH
|
||||
from ..utils.exif_utils import ExifUtils
|
||||
from ..utils.utils import calculate_recipe_fingerprint
|
||||
from PIL import Image, PngImagePlugin
|
||||
import piexif
|
||||
import logging
|
||||
@@ -72,6 +77,13 @@ class SaveImageLM:
|
||||
"tooltip": "Embeds the complete workflow data into the image metadata. Only works with PNG and WebP formats.",
|
||||
},
|
||||
),
|
||||
"save_with_metadata": (
|
||||
"BOOLEAN",
|
||||
{
|
||||
"default": True,
|
||||
"tooltip": "When enabled, embeds generation parameters into the saved image metadata. Disable to skip writing generation metadata.",
|
||||
},
|
||||
),
|
||||
"add_counter_to_filename": (
|
||||
"BOOLEAN",
|
||||
{
|
||||
@@ -79,6 +91,13 @@ class SaveImageLM:
|
||||
"tooltip": "Adds an incremental counter to filenames to prevent overwriting previous images.",
|
||||
},
|
||||
),
|
||||
"save_as_recipe": (
|
||||
"BOOLEAN",
|
||||
{
|
||||
"default": False,
|
||||
"tooltip": "Also saves each generated image as a LoRA Manager recipe.",
|
||||
},
|
||||
),
|
||||
},
|
||||
"hidden": {
|
||||
"id": "UNIQUE_ID",
|
||||
@@ -339,6 +358,203 @@ class SaveImageLM:
|
||||
|
||||
return filename
|
||||
|
||||
@staticmethod
|
||||
def _get_cached_model_by_name(scanner, name):
|
||||
cache = getattr(scanner, "_cache", None)
|
||||
if cache is None or not name:
|
||||
return None
|
||||
|
||||
candidates = [
|
||||
name,
|
||||
os.path.basename(name),
|
||||
os.path.splitext(os.path.basename(name))[0],
|
||||
]
|
||||
for model in getattr(cache, "raw_data", []):
|
||||
file_name = model.get("file_name")
|
||||
if file_name in candidates:
|
||||
return model
|
||||
return None
|
||||
|
||||
def _build_recipe_loras(self, recipe_scanner, lora_stack):
|
||||
lora_matches = re.findall(r"<lora:([^:]+):([^>]+)>", lora_stack or "")
|
||||
lora_scanner = getattr(recipe_scanner, "_lora_scanner", None)
|
||||
loras_data = []
|
||||
base_model_counts = {}
|
||||
|
||||
for name, strength in lora_matches:
|
||||
lora_info = self._get_cached_model_by_name(lora_scanner, name)
|
||||
civitai = (lora_info or {}).get("civitai") or {}
|
||||
civitai_model = civitai.get("model") or {}
|
||||
try:
|
||||
parsed_strength = float(strength)
|
||||
except (TypeError, ValueError):
|
||||
parsed_strength = 1.0
|
||||
|
||||
loras_data.append(
|
||||
{
|
||||
"file_name": name,
|
||||
"strength": parsed_strength,
|
||||
"hash": ((lora_info or {}).get("sha256") or "").lower(),
|
||||
"modelVersionId": civitai.get("id", 0),
|
||||
"modelName": civitai_model.get("name", name) if lora_info else "",
|
||||
"modelVersionName": civitai.get("name", "") if lora_info else "",
|
||||
"isDeleted": False,
|
||||
"exclude": False,
|
||||
}
|
||||
)
|
||||
|
||||
base_model = (lora_info or {}).get("base_model")
|
||||
if base_model:
|
||||
base_model_counts[base_model] = base_model_counts.get(base_model, 0) + 1
|
||||
|
||||
return lora_matches, loras_data, base_model_counts
|
||||
|
||||
def _build_recipe_checkpoint(self, recipe_scanner, checkpoint_raw):
|
||||
if not isinstance(checkpoint_raw, str) or not checkpoint_raw.strip():
|
||||
return None
|
||||
|
||||
checkpoint_name = checkpoint_raw.strip()
|
||||
file_name = os.path.splitext(os.path.basename(checkpoint_name))[0]
|
||||
checkpoint_scanner = getattr(recipe_scanner, "_checkpoint_scanner", None)
|
||||
checkpoint_info = self._get_cached_model_by_name(
|
||||
checkpoint_scanner, checkpoint_name
|
||||
)
|
||||
|
||||
if not checkpoint_info:
|
||||
return {
|
||||
"type": "checkpoint",
|
||||
"name": checkpoint_name,
|
||||
"file_name": file_name,
|
||||
"hash": self.get_checkpoint_hash(checkpoint_name) or "",
|
||||
}
|
||||
|
||||
civitai = checkpoint_info.get("civitai") or {}
|
||||
civitai_model = civitai.get("model") or {}
|
||||
file_path = checkpoint_info.get("file_path") or checkpoint_info.get("path") or ""
|
||||
cached_file_name = (
|
||||
checkpoint_info.get("file_name")
|
||||
or (os.path.splitext(os.path.basename(file_path))[0] if file_path else "")
|
||||
or file_name
|
||||
)
|
||||
|
||||
return {
|
||||
"type": "checkpoint",
|
||||
"modelId": civitai_model.get("id", 0),
|
||||
"modelVersionId": civitai.get("id", 0),
|
||||
"name": civitai_model.get("name")
|
||||
or checkpoint_info.get("model_name")
|
||||
or checkpoint_name,
|
||||
"version": civitai.get("name", ""),
|
||||
"hash": (
|
||||
checkpoint_info.get("sha256") or checkpoint_info.get("hash") or ""
|
||||
).lower(),
|
||||
"file_name": cached_file_name,
|
||||
"modelName": civitai_model.get("name", ""),
|
||||
"modelVersionName": civitai.get("name", ""),
|
||||
"baseModel": checkpoint_info.get("base_model")
|
||||
or civitai.get("baseModel", ""),
|
||||
}
|
||||
|
||||
@staticmethod
|
||||
def _derive_recipe_name(lora_matches):
|
||||
recipe_name_parts = [
|
||||
f"{name.strip()}-{float(strength):.2f}" for name, strength in lora_matches[:3]
|
||||
]
|
||||
return "_".join(recipe_name_parts) or "recipe"
|
||||
|
||||
@staticmethod
|
||||
def _sync_recipe_cache(recipe_scanner, recipe_data, json_path):
|
||||
cache = getattr(recipe_scanner, "_cache", None)
|
||||
if cache is not None:
|
||||
cache.raw_data.append(recipe_data)
|
||||
cache.sorted_by_name = sorted(
|
||||
cache.raw_data, key=lambda item: item.get("title", "").lower()
|
||||
)
|
||||
cache.sorted_by_date = sorted(
|
||||
cache.raw_data,
|
||||
key=lambda item: (
|
||||
item.get("modified", item.get("created_date", 0)),
|
||||
item.get("file_path", ""),
|
||||
),
|
||||
reverse=True,
|
||||
)
|
||||
recipe_scanner._update_folder_metadata(cache)
|
||||
recipe_scanner._update_fts_index_for_recipe(recipe_data, "add")
|
||||
|
||||
recipe_id = str(recipe_data.get("id", ""))
|
||||
if recipe_id:
|
||||
recipe_scanner._json_path_map[recipe_id] = json_path
|
||||
persistent_cache = getattr(recipe_scanner, "_persistent_cache", None)
|
||||
if persistent_cache:
|
||||
persistent_cache.update_recipe(recipe_data, json_path)
|
||||
|
||||
def _save_image_as_recipe(self, file_path, metadata_dict):
|
||||
if not metadata_dict:
|
||||
raise ValueError("No generation metadata found")
|
||||
|
||||
recipe_scanner = ServiceRegistry.get_service_sync("recipe_scanner")
|
||||
if recipe_scanner is None:
|
||||
raise RuntimeError("Recipe scanner unavailable")
|
||||
|
||||
recipes_dir = recipe_scanner.recipes_dir
|
||||
if not recipes_dir:
|
||||
raise RuntimeError("Recipes directory unavailable")
|
||||
os.makedirs(recipes_dir, exist_ok=True)
|
||||
|
||||
recipe_id = str(uuid.uuid4())
|
||||
optimized_image, extension = ExifUtils.optimize_image(
|
||||
image_data=file_path,
|
||||
target_width=CARD_PREVIEW_WIDTH,
|
||||
format="webp",
|
||||
quality=85,
|
||||
preserve_metadata=True,
|
||||
)
|
||||
image_path = os.path.normpath(os.path.join(recipes_dir, f"{recipe_id}{extension}"))
|
||||
with open(image_path, "wb") as file_obj:
|
||||
file_obj.write(optimized_image)
|
||||
|
||||
lora_stack = metadata_dict.get("loras", "")
|
||||
lora_matches, loras_data, base_model_counts = self._build_recipe_loras(
|
||||
recipe_scanner, lora_stack
|
||||
)
|
||||
checkpoint_entry = self._build_recipe_checkpoint(
|
||||
recipe_scanner, metadata_dict.get("checkpoint")
|
||||
)
|
||||
most_common_base_model = (
|
||||
max(base_model_counts.items(), key=lambda item: item[1])[0]
|
||||
if base_model_counts
|
||||
else ""
|
||||
)
|
||||
current_time = time.time()
|
||||
recipe_data = {
|
||||
"id": recipe_id,
|
||||
"file_path": image_path,
|
||||
"title": self._derive_recipe_name(lora_matches),
|
||||
"modified": current_time,
|
||||
"created_date": current_time,
|
||||
"base_model": most_common_base_model
|
||||
or (checkpoint_entry or {}).get("baseModel", ""),
|
||||
"loras": loras_data,
|
||||
"gen_params": {
|
||||
key: value
|
||||
for key, value in metadata_dict.items()
|
||||
if key not in ["checkpoint", "loras"]
|
||||
},
|
||||
"loras_stack": lora_stack,
|
||||
"fingerprint": calculate_recipe_fingerprint(loras_data),
|
||||
}
|
||||
if checkpoint_entry:
|
||||
recipe_data["checkpoint"] = checkpoint_entry
|
||||
|
||||
json_path = os.path.normpath(
|
||||
os.path.join(recipes_dir, f"{recipe_id}.recipe.json")
|
||||
)
|
||||
with open(json_path, "w", encoding="utf-8") as file_obj:
|
||||
json.dump(recipe_data, file_obj, indent=4, ensure_ascii=False)
|
||||
|
||||
ExifUtils.append_recipe_metadata(image_path, recipe_data)
|
||||
self._sync_recipe_cache(recipe_scanner, recipe_data, json_path)
|
||||
|
||||
def save_images(
|
||||
self,
|
||||
images,
|
||||
@@ -350,7 +566,9 @@ class SaveImageLM:
|
||||
lossless_webp=True,
|
||||
quality=100,
|
||||
embed_workflow=False,
|
||||
save_with_metadata=True,
|
||||
add_counter_to_filename=True,
|
||||
save_as_recipe=False,
|
||||
):
|
||||
"""Save images with metadata"""
|
||||
results = []
|
||||
@@ -421,7 +639,7 @@ class SaveImageLM:
|
||||
try:
|
||||
if file_format == "png":
|
||||
assert pnginfo is not None
|
||||
if metadata:
|
||||
if save_with_metadata and metadata:
|
||||
pnginfo.add_text("parameters", metadata)
|
||||
if embed_workflow and extra_pnginfo is not None:
|
||||
workflow_json = json.dumps(extra_pnginfo["workflow"])
|
||||
@@ -430,7 +648,7 @@ class SaveImageLM:
|
||||
img.save(file_path, format="PNG", **save_kwargs)
|
||||
elif file_format == "jpeg":
|
||||
# For JPEG, use piexif
|
||||
if metadata:
|
||||
if save_with_metadata and metadata:
|
||||
try:
|
||||
exif_dict = {
|
||||
"Exif": {
|
||||
@@ -448,7 +666,7 @@ class SaveImageLM:
|
||||
# For WebP, use piexif for metadata
|
||||
exif_dict = {}
|
||||
|
||||
if metadata:
|
||||
if save_with_metadata and metadata:
|
||||
exif_dict["Exif"] = {
|
||||
piexif.ExifIFD.UserComment: b"UNICODE\0"
|
||||
+ metadata.encode("utf-16be")
|
||||
@@ -469,6 +687,14 @@ class SaveImageLM:
|
||||
|
||||
img.save(file_path, format="WEBP", **save_kwargs)
|
||||
|
||||
if save_as_recipe:
|
||||
try:
|
||||
self._save_image_as_recipe(file_path, metadata_dict)
|
||||
except Exception as e:
|
||||
logger.warning(
|
||||
"Failed to save image as recipe: %s", e, exc_info=True
|
||||
)
|
||||
|
||||
results.append(
|
||||
{"filename": file, "subfolder": subfolder, "type": self.type}
|
||||
)
|
||||
@@ -489,7 +715,9 @@ class SaveImageLM:
|
||||
lossless_webp=True,
|
||||
quality=100,
|
||||
embed_workflow=False,
|
||||
save_with_metadata=True,
|
||||
add_counter_to_filename=True,
|
||||
save_as_recipe=False,
|
||||
):
|
||||
"""Process and save image with metadata"""
|
||||
# Make sure the output directory exists
|
||||
@@ -516,7 +744,12 @@ class SaveImageLM:
|
||||
lossless_webp,
|
||||
quality,
|
||||
embed_workflow,
|
||||
save_with_metadata,
|
||||
add_counter_to_filename,
|
||||
save_as_recipe,
|
||||
)
|
||||
|
||||
return (images,)
|
||||
return {
|
||||
"result": (images,),
|
||||
"ui": {"images": results},
|
||||
}
|
||||
|
||||
@@ -1,10 +1,15 @@
|
||||
from __future__ import annotations
|
||||
|
||||
from ..services.wildcard_service import contains_dynamic_syntax, get_wildcard_service
|
||||
|
||||
|
||||
class TextLM:
|
||||
"""A simple text node with autocomplete support."""
|
||||
|
||||
NAME = "Text (LoraManager)"
|
||||
CATEGORY = "Lora Manager/utils"
|
||||
DESCRIPTION = (
|
||||
"A simple text input node with autocomplete support for tags and styles."
|
||||
"A simple text input node with autocomplete support for tags, styles, and wildcard expansion."
|
||||
)
|
||||
|
||||
@classmethod
|
||||
@@ -15,8 +20,17 @@ class TextLM:
|
||||
"AUTOCOMPLETE_TEXT_PROMPT,STRING",
|
||||
{
|
||||
"widgetType": "AUTOCOMPLETE_TEXT_PROMPT",
|
||||
"placeholder": "Enter text... /char, /artist for quick tag search",
|
||||
"tooltip": "The text output.",
|
||||
"placeholder": "Enter text... /character, /artist, /wildcard for quick search",
|
||||
"tooltip": "The text output. Wildcard references inserted with /wildcard are expanded at runtime.",
|
||||
},
|
||||
),
|
||||
},
|
||||
"optional": {
|
||||
"seed": (
|
||||
"INT",
|
||||
{
|
||||
"forceInput": True,
|
||||
"tooltip": "Optional seed for wildcard generation. Leave unconnected for non-deterministic wildcard expansion.",
|
||||
},
|
||||
),
|
||||
},
|
||||
@@ -24,10 +38,14 @@ class TextLM:
|
||||
|
||||
RETURN_TYPES = ("STRING",)
|
||||
RETURN_NAMES = ("STRING",)
|
||||
OUTPUT_TOOLTIPS = (
|
||||
"The text output.",
|
||||
)
|
||||
OUTPUT_TOOLTIPS = ("The text output.",)
|
||||
FUNCTION = "process"
|
||||
|
||||
def process(self, text: str):
|
||||
return (text,)
|
||||
@classmethod
|
||||
def IS_CHANGED(cls, text: str, seed: int | None = None):
|
||||
if contains_dynamic_syntax(text) and seed is None:
|
||||
return float("NaN")
|
||||
return False
|
||||
|
||||
def process(self, text: str, seed: int | None = None):
|
||||
return (get_wildcard_service().expand_text(text, seed=seed),)
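
Why IS_CHANGED returns float("NaN") here: as far as I understand ComfyUI's caching, the previous and current IS_CHANGED values are compared for equality, and NaN never equals itself, so a prompt containing wildcard syntax with no seed connected is re-expanded on every run.

```python
# Sketch of the caching trick: NaN != NaN, so the node always looks "changed".
nan = float("NaN")
assert nan != nan
```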
|
||||
|
||||
@@ -76,6 +76,9 @@ class TriggerWordToggleLM:
|
||||
# Filter out empty strings and return as set
|
||||
return set(word for word in words if word)
|
||||
|
||||
def _group_has_child_items(self, item):
|
||||
return isinstance(item, dict) and isinstance(item.get("items"), list)
|
||||
|
||||
def process_trigger_words(
|
||||
self,
|
||||
id,
|
||||
@@ -112,7 +115,11 @@ class TriggerWordToggleLM:
|
||||
|
||||
if isinstance(trigger_data, list):
|
||||
if group_mode:
|
||||
if allow_strength_adjustment:
|
||||
if any(self._group_has_child_items(item) for item in trigger_data):
|
||||
filtered_groups = self._process_group_items(
|
||||
trigger_data, allow_strength_adjustment
|
||||
)
|
||||
elif allow_strength_adjustment:
|
||||
parsed_items = [
|
||||
self._parse_trigger_item(
|
||||
item, allow_strength_adjustment
|
||||
@@ -174,6 +181,41 @@ class TriggerWordToggleLM:
|
||||
|
||||
return (filtered_triggers,)
|
||||
|
||||
def _process_group_items(self, trigger_data, allow_strength_adjustment):
|
||||
filtered_groups = []
|
||||
|
||||
for item in trigger_data:
|
||||
group = self._parse_trigger_item(item, allow_strength_adjustment)
|
||||
if not group["text"] or not group["active"]:
|
||||
continue
|
||||
|
||||
raw_items = item.get("items") if isinstance(item, dict) else None
|
||||
if isinstance(raw_items, list):
|
||||
active_items = []
|
||||
for raw_item in raw_items:
|
||||
child = self._parse_trigger_item(
|
||||
raw_item, allow_strength_adjustment=False
|
||||
)
|
||||
if child["text"] and child["active"]:
|
||||
active_items.append(child["text"])
|
||||
|
||||
if not active_items:
|
||||
continue
|
||||
|
||||
group_text = ", ".join(active_items)
|
||||
else:
|
||||
group_text = group["text"]
|
||||
|
||||
filtered_groups.append(
|
||||
self._format_word_output(
|
||||
group_text,
|
||||
group["strength"],
|
||||
allow_strength_adjustment,
|
||||
)
|
||||
)
|
||||
|
||||
return filtered_groups
|
||||
|
||||
def _parse_trigger_item(self, item, allow_strength_adjustment):
|
||||
text = (item.get("text") or "").strip()
|
||||
active = bool(item.get("active", False))
|
||||
|
||||
@@ -158,3 +158,24 @@ def nunchaku_load_lora(model, lora_name, lora_strength):
|
||||
ret_model.model.model_config.unet_config["in_channels"] = new_in_channels
|
||||
|
||||
return ret_model
|
||||
|
||||
|
||||
def detect_nunchaku_model_kind(model):
|
||||
"""Return the supported Nunchaku model kind for a Comfy model, if any."""
|
||||
try:
|
||||
model_wrapper = model.model.diffusion_model
|
||||
except (AttributeError, TypeError):
|
||||
return None
|
||||
|
||||
wrapper_name = model_wrapper.__class__.__name__
|
||||
if wrapper_name == "ComfyFluxWrapper":
|
||||
return "flux"
|
||||
|
||||
inner_model = getattr(model_wrapper, "model", None)
|
||||
inner_name = inner_model.__class__.__name__ if inner_model is not None else ""
|
||||
if wrapper_name.endswith("NunchakuQwenImageTransformer2DModel"):
|
||||
return "qwen_image"
|
||||
if inner_name.endswith("NunchakuQwenImageTransformer2DModel"):
|
||||
return "qwen_image"
|
||||
|
||||
return None
|
||||
|
||||
@@ -1,10 +1,22 @@
|
||||
import folder_paths # type: ignore
|
||||
from ..utils.utils import get_lora_info
|
||||
import os
|
||||
from ..utils.utils import get_lora_info_absolute
|
||||
from ..config import config
|
||||
from .utils import FlexibleOptionalInputType, any_type, get_loras_list
|
||||
import logging
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def _relpath_within_loras(abs_path):
    """Return abs_path relative to the first lora root that contains it, or basename as fallback."""
    all_roots = list(config.loras_roots or []) + list(config.extra_loras_roots or [])
    for root in all_roots:
        try:
            rel = os.path.relpath(abs_path, root)
        except ValueError:
            continue
        # Only accept this root if abs_path actually lies beneath it;
        # otherwise relpath returns a ".."-prefixed path and we keep looking.
        if rel != os.pardir and not rel.startswith(os.pardir + os.sep):
            return rel
    return os.path.basename(abs_path)
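
Illustrative behaviour under the sketch above; the roots and paths are made up.

```python
# Assuming config.loras_roots contains "/data/loras":
# _relpath_within_loras("/data/loras/anime/foo.safetensors") -> "anime/foo.safetensors"
# _relpath_within_loras("/tmp/elsewhere/foo.safetensors")    -> "foo.safetensors"  (fallback)
```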
|
||||
|
||||
class WanVideoLoraSelectLM:
|
||||
NAME = "WanVideo Lora Select (LoraManager)"
|
||||
CATEGORY = "Lora Manager/stackers"
|
||||
@@ -56,13 +68,13 @@ class WanVideoLoraSelectLM:
|
||||
clip_strength = float(lora.get('clipStrength', model_strength))
|
||||
|
||||
# Get lora path and trigger words
|
||||
lora_path, trigger_words = get_lora_info(lora_name)
|
||||
lora_path, trigger_words = get_lora_info_absolute(lora_name)
|
||||
|
||||
# Create lora item for WanVideo format
|
||||
lora_item = {
|
||||
"path": folder_paths.get_full_path("loras", lora_path),
|
||||
"path": lora_path,
|
||||
"strength": model_strength,
|
||||
"name": lora_path.split(".")[0],
|
||||
"name": os.path.splitext(_relpath_within_loras(lora_path))[0],
|
||||
"blocks": selected_blocks,
|
||||
"layer_filter": layer_filter,
|
||||
"low_mem_load": low_mem_load,
|
||||
|
||||
@@ -1,11 +1,23 @@
|
||||
import folder_paths # type: ignore
|
||||
from ..utils.utils import get_lora_info
|
||||
import os
|
||||
from ..utils.utils import get_lora_info_absolute
|
||||
from ..config import config
|
||||
from .utils import any_type
|
||||
import logging
|
||||
|
||||
# Initialize the logger
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def _relpath_within_loras(abs_path):
    """Return abs_path relative to the first lora root that contains it, or basename as fallback."""
    all_roots = list(config.loras_roots or []) + list(config.extra_loras_roots or [])
    for root in all_roots:
        try:
            rel = os.path.relpath(abs_path, root)
        except ValueError:
            continue
        # Only accept this root if abs_path actually lies beneath it;
        # otherwise relpath returns a ".."-prefixed path and we keep looking.
        if rel != os.pardir and not rel.startswith(os.pardir + os.sep):
            return rel
    return os.path.basename(abs_path)
|
||||
|
||||
# Define the new node class
|
||||
class WanVideoLoraTextSelectLM:
|
||||
# Display name of the node in the UI
|
||||
@@ -87,12 +99,12 @@ class WanVideoLoraTextSelectLM:
|
||||
else:
|
||||
continue
|
||||
|
||||
lora_path, trigger_words = get_lora_info(lora_name_raw)
|
||||
lora_path, trigger_words = get_lora_info_absolute(lora_name_raw)
|
||||
|
||||
lora_item = {
|
||||
"path": folder_paths.get_full_path("loras", lora_path),
|
||||
"path": lora_path,
|
||||
"strength": model_strength,
|
||||
"name": lora_path.split(".")[0],
|
||||
"name": os.path.splitext(_relpath_within_loras(lora_path))[0],
|
||||
"blocks": selected_blocks,
|
||||
"layer_filter": layer_filter,
|
||||
"low_mem_load": low_mem_load,
|
||||
|
||||
@@ -13,4 +13,5 @@ GEN_PARAM_KEYS = [
|
||||
'seed',
|
||||
'size',
|
||||
'clip_skip',
|
||||
'denoising_strength',
|
||||
]
|
||||
|
||||
@@ -1,11 +1,11 @@
|
||||
import logging
|
||||
import json
|
||||
import re
|
||||
import os
|
||||
from typing import Any, Dict, Optional
|
||||
from .merger import GenParamsMerger
|
||||
from .base import RecipeMetadataParser
|
||||
from ..services.metadata_service import get_default_metadata_provider
|
||||
from ..utils.civitai_utils import extract_civitai_image_id
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -39,11 +39,12 @@ class RecipeEnricher:
|
||||
source_url = recipe.get("source_url") or recipe.get("source_path", "")
|
||||
|
||||
# Check if it's a Civitai image URL
|
||||
image_id_match = re.search(r'civitai\.com/images/(\d+)', str(source_url))
|
||||
if image_id_match:
|
||||
image_id = image_id_match.group(1)
|
||||
image_id = extract_civitai_image_id(str(source_url))
|
||||
if image_id:
|
||||
try:
|
||||
image_info = await civitai_client.get_image_info(image_id)
|
||||
image_info = await civitai_client.get_image_info(
|
||||
image_id, source_url=str(source_url)
|
||||
)
|
||||
if image_info:
|
||||
# Handle nested meta often found in Civitai API responses
|
||||
raw_meta = image_info.get("meta")
|
||||
|
||||
@@ -1,27 +1,33 @@
|
||||
from typing import Any, Dict, Optional
|
||||
import logging
|
||||
|
||||
from .constants import GEN_PARAM_KEYS
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class GenParamsMerger:
|
||||
"""Utility to merge generation parameters from multiple sources with priority."""
|
||||
|
||||
ALLOWED_KEYS = set(GEN_PARAM_KEYS)
|
||||
|
||||
BLACKLISTED_KEYS = {
|
||||
"id", "url", "userId", "username", "createdAt", "updatedAt", "hash", "meta",
|
||||
"draft", "extra", "width", "height", "process", "quantity", "workflow",
|
||||
"baseModel", "resources", "disablePoi", "aspectRatio", "Created Date",
|
||||
"experimental", "civitaiResources", "civitai_resources", "Civitai resources",
|
||||
"modelVersionId", "modelId", "hashes", "Model", "Model hash", "checkpoint_hash",
|
||||
"checkpoint", "checksum", "model_checksum"
|
||||
"checkpoint", "checksum", "model_checksum", "raw_metadata",
|
||||
}
|
||||
|
||||
|
||||
NORMALIZATION_MAPPING = {
|
||||
# Civitai specific
|
||||
"cfg": "cfg_scale",
|
||||
"cfgScale": "cfg_scale",
|
||||
"clipSkip": "clip_skip",
|
||||
"negativePrompt": "negative_prompt",
|
||||
# Case variations
|
||||
"Sampler": "sampler",
|
||||
"sampler_name": "sampler",
|
||||
"scheduler": "sampler",
|
||||
"Steps": "steps",
|
||||
"Seed": "seed",
|
||||
"Size": "size",
|
||||
@@ -36,63 +42,40 @@ class GenParamsMerger:
|
||||
def merge(
|
||||
request_params: Optional[Dict[str, Any]] = None,
|
||||
civitai_meta: Optional[Dict[str, Any]] = None,
|
||||
embedded_metadata: Optional[Dict[str, Any]] = None
|
||||
embedded_metadata: Optional[Dict[str, Any]] = None,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Merge generation parameters from three sources.
|
||||
|
||||
Priority: request_params > civitai_meta > embedded_metadata
|
||||
|
||||
Args:
|
||||
request_params: Params provided directly in the import request
|
||||
civitai_meta: Params from Civitai Image API 'meta' field
|
||||
embedded_metadata: Params extracted from image EXIF/embedded metadata
|
||||
|
||||
Returns:
|
||||
Merged parameters dictionary
|
||||
"""
|
||||
result = {}
|
||||
|
||||
# 1. Start with embedded metadata (lowest priority)
|
||||
Priority: request_params > civitai_meta > embedded_metadata
|
||||
"""
|
||||
result: Dict[str, Any] = {}
|
||||
|
||||
if embedded_metadata:
|
||||
# If it's a full recipe metadata, we use its gen_params
|
||||
if "gen_params" in embedded_metadata and isinstance(embedded_metadata["gen_params"], dict):
|
||||
if "gen_params" in embedded_metadata and isinstance(
|
||||
embedded_metadata["gen_params"], dict
|
||||
):
|
||||
GenParamsMerger._update_normalized(result, embedded_metadata["gen_params"])
|
||||
else:
|
||||
# Otherwise assume the dict itself contains gen_params
|
||||
GenParamsMerger._update_normalized(result, embedded_metadata)
|
||||
|
||||
# 2. Layer Civitai meta (medium priority)
|
||||
if civitai_meta:
|
||||
GenParamsMerger._update_normalized(result, civitai_meta)
|
||||
|
||||
# 3. Layer request params (highest priority)
|
||||
if request_params:
|
||||
GenParamsMerger._update_normalized(result, request_params)
|
||||
|
||||
# Filter out blacklisted keys and also the original camelCase keys if they were normalized
|
||||
final_result = {}
|
||||
for k, v in result.items():
|
||||
if k in GenParamsMerger.BLACKLISTED_KEYS:
|
||||
continue
|
||||
if k in GenParamsMerger.NORMALIZATION_MAPPING:
|
||||
continue
|
||||
final_result[k] = v
|
||||
|
||||
return final_result
|
||||
return result
|
||||
|
||||
@staticmethod
|
||||
def _update_normalized(target: Dict[str, Any], source: Dict[str, Any]) -> None:
|
||||
"""Update target dict with normalized keys from source."""
|
||||
for k, v in source.items():
|
||||
normalized_key = GenParamsMerger.NORMALIZATION_MAPPING.get(k, k)
|
||||
target[normalized_key] = v
|
||||
# Also keep the original key for now if it's not the same,
|
||||
# so we can filter at the end or avoid losing it if it wasn't supposed to be renamed?
|
||||
# Actually, if we rename it, we should probably NOT keep both in 'target'
|
||||
# because we want to filter them out at the end anyway.
|
||||
if normalized_key != k:
|
||||
# If we are overwriting an existing snake_case key with a camelCase one's value,
|
||||
# that's fine because of the priority order of calls to _update_normalized.
|
||||
pass
|
||||
target[k] = v
|
||||
"""Update target dict with normalized, persistence-safe keys from source."""
|
||||
for key, value in source.items():
|
||||
if key in GenParamsMerger.BLACKLISTED_KEYS:
|
||||
continue
|
||||
|
||||
normalized_key = GenParamsMerger.NORMALIZATION_MAPPING.get(key, key)
|
||||
if normalized_key not in GenParamsMerger.ALLOWED_KEYS:
|
||||
continue
|
||||
|
||||
target[normalized_key] = value
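
An illustrative merge, with invented values, that assumes steps, sampler, and cfg_scale are all listed in GEN_PARAM_KEYS: request params override Civitai meta, which overrides embedded metadata, and blacklisted or unknown keys are dropped.

```python
merged = GenParamsMerger.merge(
    request_params={"steps": 30},
    civitai_meta={"cfgScale": 7, "steps": 25, "Model hash": "abc123"},
    embedded_metadata={"gen_params": {"sampler_name": "euler", "steps": 20}},
)
# expected: {"sampler": "euler", "cfg_scale": 7, "steps": 30}
```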
|
||||
|
||||
@@ -42,6 +42,7 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
|
||||
"height",
|
||||
"Model",
|
||||
"Model hash",
|
||||
"modelVersionIds",
|
||||
)
|
||||
return any(key in payload for key in civitai_image_fields)
|
||||
|
||||
@@ -429,6 +430,65 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
|
||||
|
||||
result["loras"].append(lora_entry)
|
||||
|
||||
# Process modelVersionIds from Civitai image API
|
||||
# These are model version IDs returned at root level when meta doesn't contain resources
|
||||
if "modelVersionIds" in metadata and isinstance(
|
||||
metadata["modelVersionIds"], list
|
||||
):
|
||||
for version_id in metadata["modelVersionIds"]:
|
||||
version_id_str = str(version_id)
|
||||
|
||||
# Skip if we've already added this LoRA by version ID
|
||||
if version_id_str in added_loras:
|
||||
continue
|
||||
|
||||
# Initialize lora entry with version ID
|
||||
lora_entry = {
|
||||
"id": version_id,
|
||||
"modelId": 0,
|
||||
"name": "Unknown LoRA",
|
||||
"version": "",
|
||||
"type": "lora",
|
||||
"weight": 1.0,
|
||||
"existsLocally": False,
|
||||
"thumbnailUrl": "/loras_static/images/no-preview.png",
|
||||
"baseModel": "",
|
||||
"size": 0,
|
||||
"downloadUrl": "",
|
||||
"isDeleted": False,
|
||||
}
|
||||
|
||||
# Fetch model info from Civitai
|
||||
if metadata_provider and version_id_str:
|
||||
try:
|
||||
civitai_info = (
|
||||
await metadata_provider.get_model_version_info(
|
||||
version_id_str
|
||||
)
|
||||
)
|
||||
|
||||
populated_entry = await self.populate_lora_from_civitai(
|
||||
lora_entry,
|
||||
civitai_info,
|
||||
recipe_scanner,
|
||||
base_model_counts,
|
||||
)
|
||||
|
||||
if populated_entry is None:
|
||||
continue # Skip invalid LoRA types
|
||||
|
||||
lora_entry = populated_entry
|
||||
except Exception as e:
|
||||
logger.error(
|
||||
f"Error fetching Civitai info for model version {version_id}: {e}"
|
||||
)
|
||||
|
||||
# Track this LoRA for deduplication
|
||||
if version_id_str:
|
||||
added_loras[version_id_str] = len(result["loras"])
|
||||
|
||||
result["loras"].append(lora_entry)
|
||||
|
||||
# If we found LoRA hashes in the metadata but haven't already
|
||||
# populated entries for them, fall back to creating LoRAs from
|
||||
# the hashes section. Some Civitai image responses only include
|
||||
|
||||
@@ -251,7 +251,7 @@ class BaseModelRoutes(ABC):
|
||||
|
||||
def _find_model_file(self, files):
|
||||
"""Find the appropriate model file from the files list - can be overridden by subclasses."""
|
||||
return next((file for file in files if file.get("type") == "Model" and file.get("primary") is True), None)
|
||||
return next((file for file in files if file.get("type") in ("Model", "Diffusion Model") and file.get("primary") is True), None)
|
||||
|
||||
def get_handler(self, name: str) -> Callable[[web.Request], web.StreamResponse]:
|
||||
"""Expose handlers for subclasses or tests."""
|
||||
|
||||
py/routes/handlers/base_model_handlers.py (new file, 141 lines)
@@ -0,0 +1,141 @@
|
||||
"""Handlers for base model related endpoints."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
from typing import Any, Awaitable, Callable, Dict
|
||||
|
||||
from aiohttp import web
|
||||
|
||||
from ...services.civitai_base_model_service import get_civitai_base_model_service
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class BaseModelHandlerSet:
|
||||
"""Collection of handlers for base model operations."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
base_model_service_factory: Callable[[], Any] = get_civitai_base_model_service,
|
||||
) -> None:
|
||||
self._base_model_service_factory = base_model_service_factory
|
||||
|
||||
def to_route_mapping(
|
||||
self,
|
||||
) -> Dict[str, Callable[[web.Request], Awaitable[web.StreamResponse]]]:
|
||||
"""Return mapping of route names to handler methods."""
|
||||
return {
|
||||
"get_base_models": self.get_base_models,
|
||||
"refresh_base_models": self.refresh_base_models,
|
||||
"get_base_model_categories": self.get_base_model_categories,
|
||||
"get_base_model_cache_status": self.get_base_model_cache_status,
|
||||
}
|
||||
|
||||
async def get_base_models(self, request: web.Request) -> web.Response:
|
||||
"""Get merged base models (hardcoded + remote from Civitai).
|
||||
|
||||
Query Parameters:
|
||||
refresh: If 'true', force refresh from API
|
||||
|
||||
Returns:
|
||||
JSON response with:
|
||||
- models: List of base model names
|
||||
- source: 'cache', 'api', or 'fallback'
|
||||
- last_updated: ISO timestamp
|
||||
- counts: hardcoded_count, remote_count, merged_count
|
||||
"""
|
||||
try:
|
||||
service = await self._base_model_service_factory()
|
||||
|
||||
# Check for refresh parameter
|
||||
force_refresh = request.query.get("refresh", "").lower() == "true"
|
||||
|
||||
result = await service.get_base_models(force_refresh=force_refresh)
|
||||
|
||||
return web.json_response(
|
||||
{
|
||||
"success": True,
|
||||
"data": result,
|
||||
}
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error in get_base_models: {e}")
|
||||
return web.json_response(
|
||||
{"success": False, "error": str(e)},
|
||||
status=500,
|
||||
)
|
||||
|
||||
async def refresh_base_models(self, request: web.Request) -> web.Response:
|
||||
"""Force refresh base models from Civitai API.
|
||||
|
||||
Returns:
|
||||
JSON response with refreshed data
|
||||
"""
|
||||
try:
|
||||
service = await self._base_model_service_factory()
|
||||
result = await service.refresh_cache()
|
||||
|
||||
return web.json_response(
|
||||
{
|
||||
"success": True,
|
||||
"data": result,
|
||||
"message": "Base models cache refreshed successfully",
|
||||
}
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error in refresh_base_models: {e}")
|
||||
return web.json_response(
|
||||
{"success": False, "error": str(e)},
|
||||
status=500,
|
||||
)
|
||||
|
||||
async def get_base_model_categories(self, request: web.Request) -> web.Response:
|
||||
"""Get categorized base models.
|
||||
|
||||
Returns:
|
||||
JSON response with categorized models
|
||||
"""
|
||||
try:
|
||||
service = await self._base_model_service_factory()
|
||||
categories = service.get_model_categories()
|
||||
|
||||
return web.json_response(
|
||||
{
|
||||
"success": True,
|
||||
"data": categories,
|
||||
}
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error in get_base_model_categories: {e}")
|
||||
return web.json_response(
|
||||
{"success": False, "error": str(e)},
|
||||
status=500,
|
||||
)
|
||||
|
||||
async def get_base_model_cache_status(self, request: web.Request) -> web.Response:
|
||||
"""Get cache status for base models.
|
||||
|
||||
Returns:
|
||||
JSON response with cache status
|
||||
"""
|
||||
try:
|
||||
service = await self._base_model_service_factory()
|
||||
status = service.get_cache_status()
|
||||
|
||||
return web.json_response(
|
||||
{
|
||||
"success": True,
|
||||
"data": status,
|
||||
}
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error in get_base_model_cache_status: {e}")
|
||||
return web.json_response(
|
||||
{"success": False, "error": str(e)},
|
||||
status=500,
|
||||
)
|
||||
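For orientation, the handlers above are ordinary aiohttp request handlers, exposed by name through to_route_mapping(), so the endpoints can be exercised directly once the misc route table binds them. A minimal client-side sketch (host/port and any response fields beyond those listed in the docstrings are assumptions):

import asyncio

import aiohttp


async def fetch_base_models(base_url: str = "http://127.0.0.1:8188") -> None:
    """Query the merged base-model list and report where it came from."""
    async with aiohttp.ClientSession() as session:
        async with session.get(
            f"{base_url}/api/lm/base-models", params={"refresh": "false"}
        ) as response:
            payload = await response.json()

    if payload.get("success"):
        data = payload["data"]
        print(f"{len(data.get('models', []))} base models (source: {data.get('source')})")
    else:
        print("Request failed:", payload.get("error"))


if __name__ == "__main__":
    asyncio.run(fetch_base_models())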
File diff suppressed because it is too large
@@ -16,9 +16,14 @@ import jinja2
|
||||
|
||||
from ...config import config
|
||||
from ...services.download_coordinator import DownloadCoordinator
|
||||
from ...services.connectivity_guard import (
|
||||
OFFLINE_FRIENDLY_MESSAGE,
|
||||
is_expected_offline_error,
|
||||
)
|
||||
from ...services.metadata_sync_service import MetadataSyncService
|
||||
from ...services.model_file_service import ModelMoveService
|
||||
from ...services.preview_asset_service import PreviewAssetService
|
||||
from ...services.service_registry import ServiceRegistry
|
||||
from ...services.settings_manager import SettingsManager, get_settings_manager
|
||||
from ...services.tag_update_service import TagUpdateService
|
||||
from ...services.use_cases import (
|
||||
@@ -64,7 +69,6 @@ class ModelPageView:
|
||||
self._settings = settings_service
|
||||
self._server_i18n = server_i18n
|
||||
self._logger = logger
|
||||
self._app_version = self._get_app_version()
|
||||
|
||||
def _load_supporters(self) -> dict:
|
||||
"""Load supporters data from JSON file."""
|
||||
@@ -155,7 +159,7 @@ class ModelPageView:
|
||||
"request": request,
|
||||
"folders": [],
|
||||
"t": self._server_i18n.get_translation,
|
||||
"version": self._app_version,
|
||||
"version": self._get_app_version(),
|
||||
}
|
||||
|
||||
if not is_initializing:
|
||||
@@ -224,6 +228,42 @@ class ModelListingHandler:
|
||||
)
|
||||
return web.json_response({"error": str(exc)}, status=500)
|
||||
|
||||
async def get_excluded_models(self, request: web.Request) -> web.Response:
|
||||
start_time = time.perf_counter()
|
||||
try:
|
||||
params = self._parse_common_params(request)
|
||||
result = await self._service.get_excluded_paginated_data(**params)
|
||||
|
||||
format_start = time.perf_counter()
|
||||
formatted_result = {
|
||||
"items": [
|
||||
await self._service.format_response(item)
|
||||
for item in result["items"]
|
||||
],
|
||||
"total": result["total"],
|
||||
"page": result["page"],
|
||||
"page_size": result["page_size"],
|
||||
"total_pages": result["total_pages"],
|
||||
}
|
||||
format_duration = time.perf_counter() - format_start
|
||||
|
||||
duration = time.perf_counter() - start_time
|
||||
self._logger.debug(
|
||||
"Request for %s/excluded took %.3fs (formatting: %.3fs)",
|
||||
self._service.model_type,
|
||||
duration,
|
||||
format_duration,
|
||||
)
|
||||
return web.json_response(formatted_result)
|
||||
except Exception as exc:
|
||||
self._logger.error(
|
||||
"Error retrieving excluded %ss: %s",
|
||||
self._service.model_type,
|
||||
exc,
|
||||
exc_info=True,
|
||||
)
|
||||
return web.json_response({"error": str(exc)}, status=500)
|
||||
|
||||
def _parse_common_params(self, request: web.Request) -> Dict:
|
||||
page = int(request.query.get("page", "1"))
|
||||
page_size = min(int(request.query.get("page_size", "20")), 100)
|
||||
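The excluded-models listing mirrors the regular paginated list endpoint, so a client can page through it the same way. A rough usage sketch (the "loras" prefix and the port are placeholders; any registered model prefix should work per the common route table further below):

import asyncio

import aiohttp


async def list_excluded(prefix: str = "loras", page: int = 1) -> None:
    """Fetch one page of models that were excluded from the library view."""
    async with aiohttp.ClientSession() as session:
        async with session.get(
            f"http://127.0.0.1:8188/api/lm/{prefix}/excluded",
            params={"page": page, "page_size": 20},
        ) as response:
            result = await response.json()

    # The handler returns items, total, page, page_size and total_pages.
    print(f"page {result['page']}/{result['total_pages']}: {len(result['items'])} items")


if __name__ == "__main__":
    asyncio.run(list_excluded())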
@@ -392,6 +432,21 @@ class ModelManagementHandler:
|
||||
self._logger.error("Error excluding model: %s", exc, exc_info=True)
|
||||
return web.Response(text=str(exc), status=500)
|
||||
|
||||
async def unexclude_model(self, request: web.Request) -> web.Response:
|
||||
try:
|
||||
data = await request.json()
|
||||
file_path = data.get("file_path")
|
||||
if not file_path:
|
||||
return web.Response(text="Model path is required", status=400)
|
||||
|
||||
result = await self._lifecycle_service.unexclude_model(file_path)
|
||||
return web.json_response(result)
|
||||
except ValueError as exc:
|
||||
return web.json_response({"success": False, "error": str(exc)}, status=400)
|
||||
except Exception as exc:
|
||||
self._logger.error("Error restoring model: %s", exc, exc_info=True)
|
||||
return web.Response(text=str(exc), status=500)
|
||||
|
||||
async def fetch_civitai(self, request: web.Request) -> web.Response:
|
||||
try:
|
||||
data = await request.json()
|
||||
@@ -453,6 +508,11 @@ class ModelManagementHandler:
|
||||
formatted_metadata = await self._service.format_response(model_data)
|
||||
return web.json_response({"success": True, "metadata": formatted_metadata})
|
||||
except Exception as exc:
|
||||
if is_expected_offline_error(str(exc)):
|
||||
return web.json_response(
|
||||
{"success": False, "error": OFFLINE_FRIENDLY_MESSAGE},
|
||||
status=503,
|
||||
)
|
||||
self._logger.error("Error fetching from CivitAI: %s", exc, exc_info=True)
|
||||
return web.json_response({"success": False, "error": str(exc)}, status=500)
|
||||
|
||||
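The offline handling above converts connection-type failures into a 503 with a friendly message instead of a generic 500. The real is_expected_offline_error lives in services/connectivity_guard.py and is not part of this hunk; the sketch below only illustrates the general pattern of matching known network-error strings, and the keyword list is an assumption.

OFFLINE_HINTS = (
    "cannot connect to host",
    "temporary failure in name resolution",
    "connection reset",
    "timed out",
)


def looks_like_offline_error(message: str) -> bool:
    """Heuristically decide whether an exception message is a connectivity failure."""
    lowered = message.lower()
    return any(hint in lowered for hint in OFFLINE_HINTS)


# Example: map an exception to an HTTP status the way the handlers above do.
def status_for(exc: Exception) -> int:
    return 503 if looks_like_offline_error(str(exc)) else 500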
@@ -499,6 +559,11 @@ class ModelManagementHandler:
|
||||
}
|
||||
)
|
||||
except Exception as exc:
|
||||
if is_expected_offline_error(str(exc)):
|
||||
return web.json_response(
|
||||
{"success": False, "error": OFFLINE_FRIENDLY_MESSAGE},
|
||||
status=503,
|
||||
)
|
||||
self._logger.error("Error re-linking to CivitAI: %s", exc, exc_info=True)
|
||||
return web.json_response({"success": False, "error": str(exc)}, status=500)
|
||||
|
||||
@@ -859,7 +924,7 @@ class ModelQueryHandler:
    async def get_base_models(self, request: web.Request) -> web.Response:
        try:
            limit = int(request.query.get("limit", "20"))
-           if limit < 1 or limit > 100:
+           if limit < 0 or limit > 100:
                limit = 20
            base_models = await self._service.get_base_models(limit)
            return web.json_response({"success": True, "base_models": base_models})
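The loosened bound above stops a limit=0 query from being reset to 20; judging by the parallel recipe handler further down (which only slices when limit > 0), zero appears to act as "no limit", though that reading is an inference rather than documented behaviour. A tiny sketch of the same normalization:

def normalize_limit(raw: str, default: int = 20, maximum: int = 100) -> int:
    """Clamp a user-supplied limit; 0 is passed through as 'return everything'."""
    try:
        limit = int(raw)
    except (TypeError, ValueError):
        return default
    if limit < 0 or limit > maximum:
        return default
    return limit


assert normalize_limit("0") == 0      # unlimited
assert normalize_limit("250") == 20   # out of range falls back to the default
assert normalize_limit("abc") == 20   # non-numeric falls back too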
@@ -1532,6 +1597,20 @@ class ModelCivitaiHandler:
|
||||
|
||||
cache = await self._service.scanner.get_cached_data()
|
||||
version_index = cache.version_index
|
||||
downloaded_version_ids: set[int] = set()
|
||||
try:
|
||||
history_service = await ServiceRegistry.get_downloaded_version_history_service()
|
||||
downloaded_version_ids = set(
|
||||
await history_service.get_downloaded_version_ids(
|
||||
self._service.model_type,
|
||||
model_id,
|
||||
)
|
||||
)
|
||||
except Exception as exc: # pragma: no cover - defensive logging
|
||||
self._logger.debug(
|
||||
"Failed to load download history for CivitAI versions: %s",
|
||||
exc,
|
||||
)
|
||||
|
||||
for version in versions:
|
||||
version_id = None
|
||||
@@ -1548,6 +1627,9 @@ class ModelCivitaiHandler:
|
||||
else None
|
||||
)
|
||||
version["existsLocally"] = cache_entry is not None
|
||||
version["hasBeenDownloaded"] = (
|
||||
version_id in downloaded_version_ids if version_id is not None else False
|
||||
)
|
||||
if cache_entry and isinstance(cache_entry, Mapping):
|
||||
local_path = cache_entry.get("file_path")
|
||||
if local_path:
|
||||
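After the enrichment above, each Civitai version dictionary carries two related but distinct flags: existsLocally (the file is present in the scanned library right now) and hasBeenDownloaded (the download-history service has recorded this version at some point). An illustrative result, with the concrete values made up for the example:

# Hypothetical enriched version entry as returned to the frontend.
version = {
    "id": 123456,
    "name": "v2.0",
    "existsLocally": False,      # not found in the current library cache
    "hasBeenDownloaded": True,   # recorded in download history, later deleted or moved
}

# A UI can combine the two flags to label versions, for example:
if version["existsLocally"]:
    label = "In library"
elif version["hasBeenDownloaded"]:
    label = "Downloaded previously"
else:
    label = "Not downloaded"
print(label)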
@@ -1790,6 +1872,11 @@ class ModelUpdateHandler:
|
||||
status=429,
|
||||
)
|
||||
except Exception as exc: # pragma: no cover - defensive log
|
||||
if is_expected_offline_error(str(exc)):
|
||||
return web.json_response(
|
||||
{"success": False, "error": OFFLINE_FRIENDLY_MESSAGE},
|
||||
status=503,
|
||||
)
|
||||
self._logger.error("Failed to fetch license info: %s", exc, exc_info=True)
|
||||
return web.json_response({"success": False, "error": str(exc)}, status=500)
|
||||
|
||||
@@ -1878,9 +1965,12 @@ class ModelUpdateHandler:
|
||||
{"success": False, "error": str(exc) or "Rate limited"}, status=429
|
||||
)
|
||||
except Exception as exc: # pragma: no cover - defensive logging
|
||||
self._logger.error(
|
||||
"Failed to refresh model updates: %s", exc, exc_info=True
|
||||
)
|
||||
if is_expected_offline_error(str(exc)):
|
||||
return web.json_response(
|
||||
{"success": False, "error": OFFLINE_FRIENDLY_MESSAGE},
|
||||
status=503,
|
||||
)
|
||||
self._logger.error("Failed to refresh model updates: %s", exc, exc_info=True)
|
||||
return web.json_response({"success": False, "error": str(exc)}, status=500)
|
||||
|
||||
serialized_records = []
|
||||
@@ -2266,7 +2356,7 @@ class ModelUpdateHandler:
|
||||
self,
|
||||
record,
|
||||
*,
|
||||
version_context: Optional[Dict[int, Dict[str, Optional[str]]]] = None,
|
||||
version_context: Optional[Dict[int, Dict[str, Any]]] = None,
|
||||
) -> Dict:
|
||||
context = version_context or {}
|
||||
# Check user setting for hiding early access versions
|
||||
@@ -2295,7 +2385,7 @@ class ModelUpdateHandler:
|
||||
|
||||
@staticmethod
|
||||
def _serialize_version(
|
||||
version, context: Optional[Dict[str, Optional[str]]]
|
||||
version, context: Optional[Dict[str, Any]]
|
||||
) -> Dict:
|
||||
context = context or {}
|
||||
preview_override = context.get("preview_override")
|
||||
@@ -2329,17 +2419,42 @@ class ModelUpdateHandler:
|
||||
"sizeBytes": version.size_bytes,
|
||||
"previewUrl": preview_url,
|
||||
"isInLibrary": version.is_in_library,
|
||||
"hasBeenDownloaded": bool(context.get("has_been_downloaded", False)),
|
||||
"shouldIgnore": version.should_ignore,
|
||||
"earlyAccessEndsAt": version.early_access_ends_at,
|
||||
"isEarlyAccess": is_early_access,
|
||||
"usageControl": version.usage_control,
|
||||
"filePath": context.get("file_path"),
|
||||
"fileName": context.get("file_name"),
|
||||
}
|
||||
|
||||
async def _build_version_context(
|
||||
self, record
|
||||
) -> Dict[int, Dict[str, Optional[str]]]:
|
||||
context: Dict[int, Dict[str, Optional[str]]] = {}
|
||||
) -> Dict[int, Dict[str, Any]]:
|
||||
context: Dict[int, Dict[str, Any]] = {}
|
||||
downloaded_version_ids: set[int] = set()
|
||||
try:
|
||||
history_service = await ServiceRegistry.get_downloaded_version_history_service()
|
||||
downloaded_version_ids = set(
|
||||
await history_service.get_downloaded_version_ids(
|
||||
record.model_type,
|
||||
record.model_id,
|
||||
)
|
||||
)
|
||||
except Exception as exc: # pragma: no cover - defensive logging
|
||||
self._logger.debug(
|
||||
"Failed to load download history while building version context: %s",
|
||||
exc,
|
||||
)
|
||||
|
||||
for version in record.versions:
|
||||
context[version.version_id] = {
|
||||
"file_path": None,
|
||||
"file_name": None,
|
||||
"preview_override": None,
|
||||
"has_been_downloaded": version.version_id in downloaded_version_ids,
|
||||
}
|
||||
|
||||
try:
|
||||
cache = await self._service.scanner.get_cached_data()
|
||||
except Exception as exc: # pragma: no cover - defensive logging
|
||||
@@ -2358,16 +2473,21 @@ class ModelUpdateHandler:
|
||||
cache_entry = version_index.get(version.version_id)
|
||||
if isinstance(cache_entry, Mapping):
|
||||
preview = cache_entry.get("preview_url")
|
||||
context_entry: Dict[str, Optional[str]] = {
|
||||
"file_path": cache_entry.get("file_path"),
|
||||
"file_name": cache_entry.get("file_name"),
|
||||
"preview_override": None,
|
||||
}
|
||||
context_entry = context.setdefault(
|
||||
version.version_id,
|
||||
{
|
||||
"file_path": None,
|
||||
"file_name": None,
|
||||
"preview_override": None,
|
||||
"has_been_downloaded": version.version_id in downloaded_version_ids,
|
||||
},
|
||||
)
|
||||
context_entry["file_path"] = cache_entry.get("file_path")
|
||||
context_entry["file_name"] = cache_entry.get("file_name")
|
||||
if isinstance(preview, str) and preview:
|
||||
context_entry["preview_override"] = config.get_preview_static_url(
|
||||
preview
|
||||
)
|
||||
context[version.version_id] = context_entry
|
||||
return context
|
||||
|
||||
|
||||
@@ -2391,8 +2511,10 @@ class ModelHandlerSet:
|
||||
return {
|
||||
"handle_models_page": self.page_view.handle,
|
||||
"get_models": self.listing.get_models,
|
||||
"get_excluded_models": self.listing.get_excluded_models,
|
||||
"delete_model": self.management.delete_model,
|
||||
"exclude_model": self.management.exclude_model,
|
||||
"unexclude_model": self.management.unexclude_model,
|
||||
"fetch_civitai": self.management.fetch_civitai,
|
||||
"fetch_all_civitai": self.civitai.fetch_all_civitai,
|
||||
"relink_civitai": self.management.relink_civitai,
|
||||
|
||||
@@ -26,7 +26,7 @@ from ...services.recipes import (
|
||||
RecipeValidationError,
|
||||
)
|
||||
from ...services.metadata_service import get_default_metadata_provider
|
||||
from ...utils.civitai_utils import rewrite_preview_url
|
||||
from ...utils.civitai_utils import extract_civitai_image_id, rewrite_preview_url
|
||||
from ...utils.exif_utils import ExifUtils
|
||||
from ...recipes.merger import GenParamsMerger
|
||||
from ...recipes.enrichment import RecipeEnricher
|
||||
@@ -81,6 +81,7 @@ class RecipeHandlerSet:
|
||||
"bulk_delete": self.management.bulk_delete,
|
||||
"save_recipe_from_widget": self.management.save_recipe_from_widget,
|
||||
"get_recipes_for_lora": self.query.get_recipes_for_lora,
|
||||
"get_recipes_for_checkpoint": self.query.get_recipes_for_checkpoint,
|
||||
"scan_recipes": self.query.scan_recipes,
|
||||
"move_recipe": self.management.move_recipe,
|
||||
"repair_recipes": self.management.repair_recipes,
|
||||
@@ -218,6 +219,7 @@ class RecipeListingHandler:
|
||||
filters["tags"] = tag_filters
|
||||
|
||||
lora_hash = request.query.get("lora_hash")
|
||||
checkpoint_hash = request.query.get("checkpoint_hash")
|
||||
|
||||
result = await recipe_scanner.get_paginated_data(
|
||||
page=page,
|
||||
@@ -227,6 +229,7 @@ class RecipeListingHandler:
|
||||
filters=filters,
|
||||
search_options=search_options,
|
||||
lora_hash=lora_hash,
|
||||
checkpoint_hash=checkpoint_hash,
|
||||
folder=folder,
|
||||
recursive=recursive,
|
||||
)
|
||||
@@ -326,6 +329,7 @@ class RecipeQueryHandler:
|
||||
if recipe_scanner is None:
|
||||
raise RuntimeError("Recipe scanner unavailable")
|
||||
|
||||
limit = int(request.query.get("limit", "20"))
|
||||
cache = await recipe_scanner.get_cached_data()
|
||||
|
||||
base_model_counts: Dict[str, int] = {}
|
||||
@@ -341,6 +345,8 @@ class RecipeQueryHandler:
|
||||
for model, count in base_model_counts.items()
|
||||
]
|
||||
sorted_models.sort(key=lambda entry: entry["count"], reverse=True)
|
||||
if limit > 0:
|
||||
sorted_models = sorted_models[:limit]
|
||||
return web.json_response({"success": True, "base_models": sorted_models})
|
||||
except Exception as exc:
|
||||
self._logger.error("Error retrieving base models: %s", exc, exc_info=True)
|
||||
@@ -423,6 +429,28 @@ class RecipeQueryHandler:
|
||||
self._logger.error("Error getting recipes for Lora: %s", exc)
|
||||
return web.json_response({"success": False, "error": str(exc)}, status=500)
|
||||
|
||||
async def get_recipes_for_checkpoint(self, request: web.Request) -> web.Response:
|
||||
try:
|
||||
await self._ensure_dependencies_ready()
|
||||
recipe_scanner = self._recipe_scanner_getter()
|
||||
if recipe_scanner is None:
|
||||
raise RuntimeError("Recipe scanner unavailable")
|
||||
|
||||
checkpoint_hash = request.query.get("hash")
|
||||
if not checkpoint_hash:
|
||||
return web.json_response(
|
||||
{"success": False, "error": "Checkpoint hash is required"},
|
||||
status=400,
|
||||
)
|
||||
|
||||
matching_recipes = await recipe_scanner.get_recipes_for_checkpoint(
|
||||
checkpoint_hash
|
||||
)
|
||||
return web.json_response({"success": True, "recipes": matching_recipes})
|
||||
except Exception as exc:
|
||||
self._logger.error("Error getting recipes for checkpoint: %s", exc)
|
||||
return web.json_response({"success": False, "error": str(exc)}, status=500)
|
||||
|
||||
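The new for-checkpoint query is symmetric with the existing for-lora one: both take a SHA-256 hash and return the recipes that reference it. A brief client sketch (the hash value and port are placeholders):

import asyncio

import aiohttp


async def recipes_for_checkpoint(sha256: str) -> list:
    """Return recipes whose checkpoint matches the given hash."""
    async with aiohttp.ClientSession() as session:
        async with session.get(
            "http://127.0.0.1:8188/api/lm/recipes/for-checkpoint",
            params={"hash": sha256},
        ) as response:
            payload = await response.json()
    if not payload.get("success"):
        raise RuntimeError(payload.get("error", "unknown error"))
    return payload["recipes"]


if __name__ == "__main__":
    print(asyncio.run(recipes_for_checkpoint("0123abc...")))  # placeholder hash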
async def scan_recipes(self, request: web.Request) -> web.Response:
|
||||
try:
|
||||
await self._ensure_dependencies_ready()
|
||||
@@ -731,6 +759,14 @@ class RecipeManagementHandler:
|
||||
)
|
||||
gen_params_request = self._parse_gen_params(params.get("gen_params"))
|
||||
|
||||
self._logger.info(
|
||||
"Remote recipe import received: url=%s, request_gen_params_keys=%s, lora_count=%d, checkpoint_keys=%s",
|
||||
image_url,
|
||||
sorted(gen_params_request.keys()) if gen_params_request else [],
|
||||
len(lora_entries),
|
||||
sorted(checkpoint_entry.keys()) if isinstance(checkpoint_entry, dict) else [],
|
||||
)
|
||||
|
||||
# 2. Initial Metadata Construction
|
||||
metadata: Dict[str, Any] = {
|
||||
"base_model": params.get("base_model", "") or "",
|
||||
@@ -1163,13 +1199,15 @@ class RecipeManagementHandler:
|
||||
temp_path = temp_file.name
|
||||
download_url = image_url
|
||||
image_info = None
|
||||
civitai_match = re.match(r"https://civitai\.com/images/(\d+)", image_url)
|
||||
if civitai_match:
|
||||
civitai_image_id = extract_civitai_image_id(image_url)
|
||||
if civitai_image_id:
|
||||
if civitai_client is None:
|
||||
raise RecipeDownloadError(
|
||||
"Civitai client unavailable for image download"
|
||||
)
|
||||
image_info = await civitai_client.get_image_info(civitai_match.group(1))
|
||||
image_info = await civitai_client.get_image_info(
|
||||
civitai_image_id, source_url=image_url
|
||||
)
|
||||
if not image_info:
|
||||
raise RecipeDownloadError(
|
||||
"Failed to fetch image information from Civitai"
|
||||
@@ -1203,7 +1241,7 @@ class RecipeManagementHandler:
|
||||
return (
|
||||
file_obj.read(),
|
||||
extension,
|
||||
image_info.get("meta") if civitai_match and image_info else None,
|
||||
image_info.get("meta") if civitai_image_id and image_info else None,
|
||||
)
|
||||
except RecipeDownloadError:
|
||||
raise
|
||||
|
||||
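The recipe-import change above swaps the inline re.match on https://civitai.com/images/<id> for a shared extract_civitai_image_id helper from utils/civitai_utils, so the import path no longer depends on one hard-coded URL shape. The helper's real implementation is not shown in this diff; the sketch below is only a plausible shape for such a function and should not be read as the actual code.

import re
from typing import Optional

_CIVITAI_IMAGE_RE = re.compile(r"https?://civitai\.com/images/(\d+)")


def extract_civitai_image_id(url: str) -> Optional[str]:
    """Return the numeric image id from a Civitai image URL, if present."""
    match = _CIVITAI_IMAGE_RE.match(url or "")
    return match.group(1) if match else None


assert extract_civitai_image_id("https://civitai.com/images/1234567?period=AllTime") == "1234567"
assert extract_civitai_image_id("https://example.com/images/1") is None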
@@ -22,11 +22,17 @@ class RouteDefinition:
|
||||
MISC_ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
|
||||
RouteDefinition("GET", "/api/lm/settings", "get_settings"),
|
||||
RouteDefinition("POST", "/api/lm/settings", "update_settings"),
|
||||
RouteDefinition("GET", "/api/lm/doctor/diagnostics", "get_doctor_diagnostics"),
|
||||
RouteDefinition("POST", "/api/lm/doctor/repair-cache", "repair_doctor_cache"),
|
||||
RouteDefinition("POST", "/api/lm/doctor/resolve-filename-conflicts", "resolve_doctor_filename_conflicts"),
|
||||
RouteDefinition("POST", "/api/lm/doctor/export-bundle", "export_doctor_bundle"),
|
||||
RouteDefinition("GET", "/api/lm/priority-tags", "get_priority_tags"),
|
||||
RouteDefinition("GET", "/api/lm/settings/libraries", "get_settings_libraries"),
|
||||
RouteDefinition("POST", "/api/lm/settings/libraries/activate", "activate_library"),
|
||||
RouteDefinition("GET", "/api/lm/health-check", "health_check"),
|
||||
RouteDefinition("GET", "/api/lm/supporters", "get_supporters"),
|
||||
RouteDefinition("GET", "/api/lm/wildcards/search", "search_wildcards"),
|
||||
RouteDefinition("POST", "/api/lm/wildcards/open-location", "open_wildcards_location"),
|
||||
RouteDefinition("POST", "/api/lm/open-file-location", "open_file_location"),
|
||||
RouteDefinition("POST", "/api/lm/update-usage-stats", "update_usage_stats"),
|
||||
RouteDefinition("GET", "/api/lm/get-usage-stats", "get_usage_stats"),
|
||||
@@ -37,6 +43,22 @@ MISC_ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
|
||||
RouteDefinition("POST", "/api/lm/update-node-widget", "update_node_widget"),
|
||||
RouteDefinition("GET", "/api/lm/get-registry", "get_registry"),
|
||||
RouteDefinition("GET", "/api/lm/check-model-exists", "check_model_exists"),
|
||||
RouteDefinition("GET", "/api/lm/check-models-exist", "check_models_exist"),
|
||||
RouteDefinition(
|
||||
"GET",
|
||||
"/api/lm/model-version-download-status",
|
||||
"get_model_version_download_status",
|
||||
),
|
||||
RouteDefinition(
|
||||
"POST",
|
||||
"/api/lm/model-version-download-status",
|
||||
"set_model_version_download_status",
|
||||
),
|
||||
RouteDefinition(
|
||||
"GET",
|
||||
"/api/lm/set-model-version-download-status",
|
||||
"set_model_version_download_status",
|
||||
),
|
||||
RouteDefinition("GET", "/api/lm/civitai/user-models", "get_civitai_user_models"),
|
||||
RouteDefinition(
|
||||
"POST", "/api/lm/download-metadata-archive", "download_metadata_archive"
|
||||
@@ -47,6 +69,10 @@ MISC_ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
|
||||
RouteDefinition(
|
||||
"GET", "/api/lm/metadata-archive-status", "get_metadata_archive_status"
|
||||
),
|
||||
RouteDefinition("GET", "/api/lm/backup/status", "get_backup_status"),
|
||||
RouteDefinition("POST", "/api/lm/backup/export", "export_backup"),
|
||||
RouteDefinition("POST", "/api/lm/backup/import", "import_backup"),
|
||||
RouteDefinition("POST", "/api/lm/backup/open-location", "open_backup_location"),
|
||||
RouteDefinition(
|
||||
"GET", "/api/lm/model-versions-status", "get_model_versions_status"
|
||||
),
|
||||
@@ -56,6 +82,18 @@ MISC_ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
|
||||
RouteDefinition(
|
||||
"GET", "/api/lm/example-workflows/{filename}", "get_example_workflow"
|
||||
),
|
||||
# Base model management routes
|
||||
RouteDefinition("GET", "/api/lm/base-models", "get_base_models"),
|
||||
RouteDefinition("POST", "/api/lm/base-models/refresh", "refresh_base_models"),
|
||||
RouteDefinition(
|
||||
"GET", "/api/lm/base-models/categories", "get_base_model_categories"
|
||||
),
|
||||
RouteDefinition(
|
||||
"GET", "/api/lm/base-models/cache-status", "get_base_model_cache_status"
|
||||
),
|
||||
RouteDefinition(
|
||||
"GET", "/api/lm/delete-model-version", "delete_model_version"
|
||||
),
|
||||
)
|
||||
|
||||
|
||||
|
||||
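All of the entries above are plain (method, path, handler-name) triples; the actual binding is done elsewhere (MiscRouteRegistrar), which is not part of this hunk. The snippet below is only a schematic of how such a table can be attached to aiohttp, and it assumes the handler set exposes its methods under the listed names:

from dataclasses import dataclass

from aiohttp import web


@dataclass(frozen=True)
class RouteDefinition:
    method: str
    path: str
    handler_name: str


def register_routes(app: web.Application, definitions, handler_set) -> None:
    """Bind each named handler method onto the aiohttp router."""
    for definition in definitions:
        handler = getattr(handler_set, definition.handler_name)
        app.router.add_route(definition.method, definition.path, handler)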
@@ -19,10 +19,12 @@ from ..services.downloader import get_downloader
|
||||
from ..utils.usage_stats import UsageStats
|
||||
from .handlers.misc_handlers import (
|
||||
CustomWordsHandler,
|
||||
DoctorHandler,
|
||||
ExampleWorkflowsHandler,
|
||||
FileSystemHandler,
|
||||
HealthCheckHandler,
|
||||
LoraCodeHandler,
|
||||
BackupHandler,
|
||||
MetadataArchiveHandler,
|
||||
MiscHandlerSet,
|
||||
ModelExampleFilesHandler,
|
||||
@@ -33,8 +35,10 @@ from .handlers.misc_handlers import (
|
||||
SupportersHandler,
|
||||
TrainedWordsHandler,
|
||||
UsageStatsHandler,
|
||||
WildcardsHandler,
|
||||
build_service_registry_adapter,
|
||||
)
|
||||
from .handlers.base_model_handlers import BaseModelHandlerSet
|
||||
from .misc_route_registrar import MiscRouteRegistrar
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
@@ -115,6 +119,7 @@ class MiscRoutes:
|
||||
settings_service=self._settings,
|
||||
metadata_provider_updater=self._metadata_provider_updater,
|
||||
)
|
||||
backup = BackupHandler()
|
||||
filesystem = FileSystemHandler(settings_service=self._settings)
|
||||
node_registry_handler = NodeRegistryHandler(
|
||||
node_registry=self._node_registry,
|
||||
@@ -126,8 +131,11 @@ class MiscRoutes:
|
||||
metadata_provider_factory=self._metadata_provider_factory,
|
||||
)
|
||||
custom_words = CustomWordsHandler()
|
||||
wildcards = WildcardsHandler()
|
||||
supporters = SupportersHandler()
|
||||
doctor = DoctorHandler(settings_service=self._settings)
|
||||
example_workflows = ExampleWorkflowsHandler()
|
||||
base_model = BaseModelHandlerSet()
|
||||
|
||||
return self._handler_set_factory(
|
||||
health=health,
|
||||
@@ -139,10 +147,14 @@ class MiscRoutes:
|
||||
node_registry=node_registry_handler,
|
||||
model_library=model_library,
|
||||
metadata_archive=metadata_archive,
|
||||
backup=backup,
|
||||
filesystem=filesystem,
|
||||
custom_words=custom_words,
|
||||
wildcards=wildcards,
|
||||
supporters=supporters,
|
||||
doctor=doctor,
|
||||
example_workflows=example_workflows,
|
||||
base_model=base_model,
|
||||
)
|
||||
|
||||
|
||||
|
||||
@@ -22,8 +22,10 @@ class RouteDefinition:

COMMON_ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
    RouteDefinition("GET", "/api/lm/{prefix}/list", "get_models"),
    RouteDefinition("GET", "/api/lm/{prefix}/excluded", "get_excluded_models"),
    RouteDefinition("POST", "/api/lm/{prefix}/delete", "delete_model"),
    RouteDefinition("POST", "/api/lm/{prefix}/exclude", "exclude_model"),
    RouteDefinition("POST", "/api/lm/{prefix}/unexclude", "unexclude_model"),
    RouteDefinition("POST", "/api/lm/{prefix}/fetch-civitai", "fetch_civitai"),
    RouteDefinition("POST", "/api/lm/{prefix}/fetch-all-civitai", "fetch_all_civitai"),
    RouteDefinition("POST", "/api/lm/{prefix}/relink-civitai", "relink_civitai"),
@@ -51,6 +51,9 @@ ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
        "POST", "/api/lm/recipes/save-from-widget", "save_recipe_from_widget"
    ),
    RouteDefinition("GET", "/api/lm/recipes/for-lora", "get_recipes_for_lora"),
    RouteDefinition(
        "GET", "/api/lm/recipes/for-checkpoint", "get_recipes_for_checkpoint"
    ),
    RouteDefinition("GET", "/api/lm/recipes/scan", "scan_recipes"),
    RouteDefinition("POST", "/api/lm/recipes/repair", "repair_recipes"),
    RouteDefinition("POST", "/api/lm/recipes/cancel-repair", "cancel_repair"),
570  py/services/aria2_downloader.py  Normal file
@@ -0,0 +1,570 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import secrets
|
||||
import shutil
|
||||
import socket
|
||||
from dataclasses import dataclass
|
||||
from datetime import datetime
|
||||
from pathlib import Path
|
||||
from typing import Any, Dict, Optional, Tuple
|
||||
|
||||
import aiohttp
|
||||
|
||||
from .downloader import DownloadProgress, get_downloader
|
||||
from .aria2_transfer_state import Aria2TransferStateStore
|
||||
from .settings_manager import get_settings_manager
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
CIVITAI_DOWNLOAD_URL_PREFIXES = (
|
||||
"https://civitai.com/api/download/",
|
||||
"https://civitai.red/api/download/",
|
||||
)
|
||||
|
||||
|
||||
class Aria2Error(RuntimeError):
|
||||
"""Raised when aria2 integration fails."""
|
||||
|
||||
|
||||
@dataclass
|
||||
class Aria2Transfer:
|
||||
"""Track an aria2 download registered by the Python coordinator."""
|
||||
|
||||
gid: str
|
||||
save_path: str
|
||||
|
||||
|
||||
class Aria2Downloader:
|
||||
"""Manage an aria2 RPC daemon for experimental model downloads."""
|
||||
|
||||
_instance = None
|
||||
_lock = asyncio.Lock()
|
||||
|
||||
@classmethod
|
||||
async def get_instance(cls) -> "Aria2Downloader":
|
||||
async with cls._lock:
|
||||
if cls._instance is None:
|
||||
cls._instance = cls()
|
||||
return cls._instance
|
||||
|
||||
def __init__(self) -> None:
|
||||
if hasattr(self, "_initialized"):
|
||||
return
|
||||
|
||||
self._initialized = True
|
||||
self._process: Optional[asyncio.subprocess.Process] = None
|
||||
self._rpc_port: Optional[int] = None
|
||||
self._rpc_secret = ""
|
||||
self._rpc_url = ""
|
||||
self._rpc_session: Optional[aiohttp.ClientSession] = None
|
||||
self._rpc_session_lock = asyncio.Lock()
|
||||
self._process_lock = asyncio.Lock()
|
||||
self._transfers: Dict[str, Aria2Transfer] = {}
|
||||
self._poll_interval = 0.5
|
||||
self._state_store = Aria2TransferStateStore()
|
||||
|
||||
@property
|
||||
def is_running(self) -> bool:
|
||||
return self._process is not None and self._process.returncode is None
|
||||
|
||||
async def download_file(
|
||||
self,
|
||||
url: str,
|
||||
save_path: str,
|
||||
*,
|
||||
download_id: str,
|
||||
progress_callback=None,
|
||||
headers: Optional[Dict[str, str]] = None,
|
||||
) -> Tuple[bool, str]:
|
||||
"""Download a file using aria2 RPC and wait for completion."""
|
||||
|
||||
await self._ensure_process()
|
||||
save_path = os.path.abspath(save_path)
|
||||
transfer = self._transfers.get(download_id)
|
||||
if transfer is None or os.path.abspath(transfer.save_path) != save_path:
|
||||
gid = await self._schedule_download(
|
||||
url,
|
||||
save_path,
|
||||
download_id=download_id,
|
||||
headers=headers,
|
||||
)
|
||||
transfer = Aria2Transfer(gid=gid, save_path=save_path)
|
||||
self._transfers[download_id] = transfer
|
||||
|
||||
try:
|
||||
while True:
|
||||
status = await self.get_status(download_id)
|
||||
if status is None:
|
||||
return False, "aria2 download not found"
|
||||
|
||||
snapshot = self._build_progress_snapshot(status)
|
||||
if progress_callback is not None:
|
||||
await self._dispatch_progress(progress_callback, snapshot)
|
||||
|
||||
state = status.get("status", "")
|
||||
if state == "complete":
|
||||
completed_path = self._resolve_completed_path(status, save_path)
|
||||
return True, completed_path
|
||||
if state == "error":
|
||||
return False, status.get("errorMessage") or "aria2 download failed"
|
||||
if state == "removed":
|
||||
return False, "Download was cancelled"
|
||||
|
||||
await asyncio.sleep(self._poll_interval)
|
||||
finally:
|
||||
self._transfers.pop(download_id, None)
|
||||
|
||||
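The polling loop above makes download_file look like the plain HTTP downloader to its callers: it blocks until aria2 reports a terminal state and relays progress through the same callback shape. A hedged usage sketch follows (the callback signature is inferred from _dispatch_progress below; the URL and save path are placeholders, and the import path assumes the repository root is importable):

import asyncio


async def fetch_with_aria2() -> None:
    from py.services.aria2_downloader import get_aria2_downloader

    downloader = await get_aria2_downloader()

    async def on_progress(snapshot, _same_snapshot):
        # DownloadProgress carries percent_complete, bytes_downloaded, total_bytes, ...
        print(f"{snapshot.percent_complete:.1f}%")

    ok, result = await downloader.download_file(
        "https://example.com/some-model.safetensors",   # placeholder URL
        "/tmp/some-model.safetensors",                   # placeholder save path
        download_id="example-download-1",
        progress_callback=on_progress,
    )
    print("finished" if ok else f"failed: {result}")


if __name__ == "__main__":
    asyncio.run(fetch_with_aria2())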
async def _schedule_download(
|
||||
self,
|
||||
url: str,
|
||||
save_path: str,
|
||||
*,
|
||||
download_id: str,
|
||||
headers: Optional[Dict[str, str]] = None,
|
||||
) -> str:
|
||||
save_dir = os.path.dirname(save_path)
|
||||
out_name = os.path.basename(save_path)
|
||||
|
||||
Path(save_dir).mkdir(parents=True, exist_ok=True)
|
||||
|
||||
resolved_url = url
|
||||
request_headers = headers
|
||||
if headers and url.startswith(CIVITAI_DOWNLOAD_URL_PREFIXES):
|
||||
resolved_url = await self._resolve_authenticated_redirect_url(url, headers)
|
||||
if resolved_url != url:
|
||||
request_headers = None
|
||||
logger.debug(
|
||||
"Resolved Civitai download %s to signed URL for aria2",
|
||||
download_id,
|
||||
)
|
||||
|
||||
options: Dict[str, str] = {
|
||||
"dir": save_dir,
|
||||
"out": out_name,
|
||||
"continue": "true",
|
||||
"max-connection-per-server": "4",
|
||||
"split": "4",
|
||||
"min-split-size": "1M",
|
||||
"allow-overwrite": "true",
|
||||
"auto-file-renaming": "false",
|
||||
"file-allocation": "none",
|
||||
}
|
||||
if request_headers:
|
||||
options["header"] = [
|
||||
f"{key}: {value}" for key, value in request_headers.items()
|
||||
]
|
||||
|
||||
logger.debug(
|
||||
"Submitting aria2 download %s -> %s (auth=%s, civitai_signed=%s)",
|
||||
download_id,
|
||||
save_path,
|
||||
bool(request_headers),
|
||||
resolved_url != url,
|
||||
)
|
||||
|
||||
try:
|
||||
gid = await self._rpc_call("aria2.addUri", [[resolved_url], options])
|
||||
except Exception as exc:
|
||||
raise Aria2Error(f"Failed to schedule aria2 download: {exc}") from exc
|
||||
|
||||
logger.debug("aria2 accepted download %s with gid %s", download_id, gid)
|
||||
await self._state_store.upsert(
|
||||
download_id,
|
||||
{
|
||||
"gid": gid,
|
||||
"save_path": save_path,
|
||||
"status": "downloading",
|
||||
"url": url,
|
||||
},
|
||||
)
|
||||
return gid
|
||||
|
||||
async def get_status(self, download_id: str) -> Optional[Dict[str, Any]]:
|
||||
"""Return the raw aria2 status payload for a known download."""
|
||||
|
||||
transfer = self._transfers.get(download_id)
|
||||
if transfer is None:
|
||||
return None
|
||||
|
||||
keys = [
|
||||
"gid",
|
||||
"status",
|
||||
"totalLength",
|
||||
"completedLength",
|
||||
"downloadSpeed",
|
||||
"errorMessage",
|
||||
"files",
|
||||
]
|
||||
try:
|
||||
status = await self._rpc_call("aria2.tellStatus", [transfer.gid, keys])
|
||||
except Exception as exc:
|
||||
raise Aria2Error(f"Failed to query aria2 download status: {exc}") from exc
|
||||
|
||||
if isinstance(status, dict):
|
||||
return status
|
||||
return None
|
||||
|
||||
async def get_status_by_gid(self, gid: str) -> Optional[Dict[str, Any]]:
|
||||
keys = [
|
||||
"gid",
|
||||
"status",
|
||||
"totalLength",
|
||||
"completedLength",
|
||||
"downloadSpeed",
|
||||
"errorMessage",
|
||||
"files",
|
||||
]
|
||||
try:
|
||||
status = await self._rpc_call("aria2.tellStatus", [gid, keys])
|
||||
except Exception as exc:
|
||||
message = str(exc)
|
||||
if "cannot be found" in message.lower() or "not found" in message.lower():
|
||||
return None
|
||||
raise Aria2Error(f"Failed to query aria2 download status: {exc}") from exc
|
||||
|
||||
if isinstance(status, dict):
|
||||
return status
|
||||
return None
|
||||
|
||||
async def restore_transfer(self, download_id: str, gid: str, save_path: str) -> None:
|
||||
await self._ensure_process()
|
||||
self._transfers[download_id] = Aria2Transfer(
|
||||
gid=gid,
|
||||
save_path=os.path.abspath(save_path),
|
||||
)
|
||||
|
||||
async def reassign_transfer(
|
||||
self, from_download_id: str, to_download_id: str
|
||||
) -> Optional[Aria2Transfer]:
|
||||
transfer = self._transfers.get(from_download_id)
|
||||
if transfer is None:
|
||||
return None
|
||||
|
||||
self._transfers[to_download_id] = transfer
|
||||
if from_download_id != to_download_id:
|
||||
self._transfers.pop(from_download_id, None)
|
||||
return transfer
|
||||
|
||||
async def has_transfer(self, download_id: str) -> bool:
|
||||
return download_id in self._transfers
|
||||
|
||||
async def pause_download(self, download_id: str) -> Dict[str, Any]:
|
||||
transfer = self._transfers.get(download_id)
|
||||
if transfer is None:
|
||||
return {"success": False, "error": "Download task not found"}
|
||||
|
||||
try:
|
||||
await self._rpc_call("aria2.forcePause", [transfer.gid])
|
||||
except Exception as exc:
|
||||
return {"success": False, "error": str(exc)}
|
||||
|
||||
await self._state_store.upsert(download_id, {"status": "paused"})
|
||||
return {"success": True, "message": "Download paused successfully"}
|
||||
|
||||
async def resume_download(self, download_id: str) -> Dict[str, Any]:
|
||||
transfer = self._transfers.get(download_id)
|
||||
if transfer is None:
|
||||
return {"success": False, "error": "Download task not found"}
|
||||
|
||||
try:
|
||||
await self._rpc_call("aria2.unpause", [transfer.gid])
|
||||
except Exception as exc:
|
||||
return {"success": False, "error": str(exc)}
|
||||
|
||||
await self._state_store.upsert(download_id, {"status": "downloading"})
|
||||
return {"success": True, "message": "Download resumed successfully"}
|
||||
|
||||
async def cancel_download(self, download_id: str) -> Dict[str, Any]:
|
||||
transfer = self._transfers.get(download_id)
|
||||
if transfer is None:
|
||||
return {"success": False, "error": "Download task not found"}
|
||||
|
||||
try:
|
||||
await self._rpc_call("aria2.forceRemove", [transfer.gid])
|
||||
except Exception as exc:
|
||||
return {"success": False, "error": str(exc)}
|
||||
|
||||
await self._state_store.remove(download_id)
|
||||
return {"success": True, "message": "Download cancelled successfully"}
|
||||
|
||||
async def close(self) -> None:
|
||||
"""Shut down the RPC process and session."""
|
||||
|
||||
if self._rpc_session is not None:
|
||||
await self._rpc_session.close()
|
||||
self._rpc_session = None
|
||||
|
||||
process = self._process
|
||||
self._process = None
|
||||
self._transfers.clear()
|
||||
|
||||
if process is None:
|
||||
return
|
||||
|
||||
if process.returncode is None:
|
||||
process.terminate()
|
||||
try:
|
||||
await asyncio.wait_for(process.wait(), timeout=5.0)
|
||||
except asyncio.TimeoutError:
|
||||
process.kill()
|
||||
await process.wait()
|
||||
|
||||
async def _dispatch_progress(self, callback, snapshot: DownloadProgress) -> None:
|
||||
try:
|
||||
result = callback(snapshot, snapshot)
|
||||
except TypeError:
|
||||
result = callback(snapshot.percent_complete)
|
||||
|
||||
if asyncio.iscoroutine(result):
|
||||
await result
|
||||
elif hasattr(result, "__await__"):
|
||||
await result
|
||||
|
||||
def _build_progress_snapshot(self, status: Dict[str, Any]) -> DownloadProgress:
|
||||
completed = self._parse_int(status.get("completedLength"))
|
||||
total = self._parse_int(status.get("totalLength"))
|
||||
speed = float(self._parse_int(status.get("downloadSpeed")))
|
||||
percent = 0.0
|
||||
if total > 0:
|
||||
percent = (completed / total) * 100.0
|
||||
|
||||
return DownloadProgress(
|
||||
percent_complete=max(0.0, min(percent, 100.0)),
|
||||
bytes_downloaded=completed,
|
||||
total_bytes=total or None,
|
||||
bytes_per_second=speed,
|
||||
timestamp=datetime.now().timestamp(),
|
||||
)
|
||||
|
||||
def _resolve_completed_path(self, status: Dict[str, Any], default_path: str) -> str:
|
||||
files = status.get("files")
|
||||
if isinstance(files, list) and files:
|
||||
first = files[0]
|
||||
if isinstance(first, dict):
|
||||
candidate = first.get("path")
|
||||
if isinstance(candidate, str) and candidate:
|
||||
return candidate
|
||||
return default_path
|
||||
|
||||
@staticmethod
|
||||
def _parse_int(value: Any) -> int:
|
||||
try:
|
||||
return int(value)
|
||||
except (TypeError, ValueError):
|
||||
return 0
|
||||
|
||||
async def _resolve_authenticated_redirect_url(
|
||||
self,
|
||||
url: str,
|
||||
headers: Dict[str, str],
|
||||
) -> str:
|
||||
downloader = await get_downloader()
|
||||
session = await downloader.session
|
||||
request_headers = dict(downloader.default_headers)
|
||||
request_headers.update(headers)
|
||||
request_headers["Accept-Encoding"] = "identity"
|
||||
|
||||
try:
|
||||
async with session.get(
|
||||
url,
|
||||
headers=request_headers,
|
||||
allow_redirects=False,
|
||||
proxy=downloader.proxy_url,
|
||||
) as response:
|
||||
if response.status in {301, 302, 303, 307, 308}:
|
||||
location = response.headers.get("Location")
|
||||
if location:
|
||||
return location
|
||||
raise Aria2Error(
|
||||
"Authenticated Civitai redirect did not include a Location header"
|
||||
)
|
||||
|
||||
if response.status == 200:
|
||||
return url
|
||||
|
||||
body = await response.text()
|
||||
raise Aria2Error(
|
||||
f"Failed to resolve authenticated Civitai redirect: status={response.status} body={body[:300]}"
|
||||
)
|
||||
except aiohttp.ClientError as exc:
|
||||
raise Aria2Error(
|
||||
f"Failed to resolve authenticated Civitai redirect: {exc}"
|
||||
) from exc
|
||||
|
||||
async def _ensure_process(self) -> None:
|
||||
async with self._process_lock:
|
||||
if self.is_running and await self._ping():
|
||||
return
|
||||
|
||||
await self.close()
|
||||
|
||||
executable = self._resolve_executable()
|
||||
self._rpc_port = self._find_free_port()
|
||||
self._rpc_secret = secrets.token_hex(16)
|
||||
self._rpc_url = f"http://127.0.0.1:{self._rpc_port}/jsonrpc"
|
||||
|
||||
command = [
|
||||
executable,
|
||||
"--enable-rpc=true",
|
||||
"--rpc-listen-all=false",
|
||||
f"--rpc-listen-port={self._rpc_port}",
|
||||
f"--rpc-secret={self._rpc_secret}",
|
||||
"--check-certificate=true",
|
||||
"--allow-overwrite=true",
|
||||
"--auto-file-renaming=false",
|
||||
"--file-allocation=none",
|
||||
"--max-concurrent-downloads=5",
|
||||
"--continue=true",
|
||||
"--daemon=false",
|
||||
"--quiet=true",
|
||||
f"--stop-with-process={os.getpid()}",
|
||||
]
|
||||
|
||||
logger.info("Starting aria2 RPC daemon from %s", executable)
|
||||
self._process = await asyncio.create_subprocess_exec(
|
||||
*command,
|
||||
stdout=asyncio.subprocess.DEVNULL,
|
||||
stderr=asyncio.subprocess.PIPE,
|
||||
)
|
||||
|
||||
await self._wait_until_ready()
|
||||
|
||||
def _resolve_executable(self) -> str:
|
||||
settings = get_settings_manager()
|
||||
configured_path = (settings.get("aria2c_path") or "").strip()
|
||||
candidate = configured_path or "aria2c"
|
||||
|
||||
resolved = shutil.which(candidate)
|
||||
if resolved:
|
||||
return resolved
|
||||
|
||||
if configured_path and os.path.isfile(configured_path) and os.access(
|
||||
configured_path, os.X_OK
|
||||
):
|
||||
return configured_path
|
||||
|
||||
raise Aria2Error(
|
||||
"aria2c executable was not found. Install aria2 or configure aria2c_path."
|
||||
)
|
||||
|
||||
async def _wait_until_ready(self) -> None:
|
||||
assert self._process is not None
|
||||
|
||||
start_time = asyncio.get_running_loop().time()
|
||||
last_error = ""
|
||||
while asyncio.get_running_loop().time() - start_time < 10.0:
|
||||
if self._process.returncode is not None:
|
||||
stderr_output = ""
|
||||
if self._process.stderr is not None:
|
||||
try:
|
||||
stderr_output = (
|
||||
await asyncio.wait_for(self._process.stderr.read(), timeout=0.2)
|
||||
).decode("utf-8", errors="replace")
|
||||
except Exception:
|
||||
stderr_output = ""
|
||||
raise Aria2Error(
|
||||
f"aria2 RPC process exited early with code {self._process.returncode}: {stderr_output.strip()}"
|
||||
)
|
||||
|
||||
try:
|
||||
if await self._ping():
|
||||
return
|
||||
except Exception as exc: # pragma: no cover - startup race
|
||||
last_error = str(exc)
|
||||
|
||||
await asyncio.sleep(0.2)
|
||||
|
||||
raise Aria2Error(
|
||||
f"Timed out waiting for aria2 RPC to become ready{': ' + last_error if last_error else ''}"
|
||||
)
|
||||
|
||||
async def _ping(self) -> bool:
|
||||
try:
|
||||
result = await self._rpc_call("aria2.getVersion", [])
|
||||
except Exception:
|
||||
return False
|
||||
|
||||
return isinstance(result, dict)
|
||||
|
||||
async def _rpc_call(self, method: str, params: list[Any]) -> Any:
|
||||
if not self._rpc_url:
|
||||
raise Aria2Error("aria2 RPC endpoint is not initialized")
|
||||
|
||||
session = await self._get_rpc_session()
|
||||
payload = {
|
||||
"jsonrpc": "2.0",
|
||||
"id": secrets.token_hex(8),
|
||||
"method": method,
|
||||
"params": [f"token:{self._rpc_secret}", *params],
|
||||
}
|
||||
|
||||
async with session.post(self._rpc_url, json=payload) as response:
|
||||
text = await response.text()
|
||||
|
||||
try:
|
||||
body = json.loads(text)
|
||||
except json.JSONDecodeError:
|
||||
body = None
|
||||
|
||||
if body is None:
|
||||
if response.status != 200:
|
||||
raise Aria2Error(
|
||||
f"aria2 RPC returned status {response.status} with non-JSON body: {text}"
|
||||
)
|
||||
raise Aria2Error(f"Invalid aria2 RPC response: {text}")
|
||||
|
||||
if "error" in body:
|
||||
error = body["error"] or {}
|
||||
code = error.get("code") if isinstance(error, dict) else None
|
||||
message = error.get("message") if isinstance(error, dict) else str(error)
|
||||
logger.error(
|
||||
"aria2 RPC %s failed with HTTP %s, code=%s, message=%s",
|
||||
method,
|
||||
response.status,
|
||||
code,
|
||||
message,
|
||||
)
|
||||
status_message = (
|
||||
f"aria2 RPC {method} failed with status {response.status}: {message}"
|
||||
if response.status != 200
|
||||
else message
|
||||
)
|
||||
raise Aria2Error(status_message or "Unknown aria2 RPC error")
|
||||
|
||||
if response.status != 200:
|
||||
logger.error(
|
||||
"aria2 RPC %s returned unexpected HTTP status %s without error payload: %s",
|
||||
method,
|
||||
response.status,
|
||||
body,
|
||||
)
|
||||
raise Aria2Error(
|
||||
f"aria2 RPC {method} returned unexpected status {response.status}"
|
||||
)
|
||||
|
||||
return body.get("result")
|
||||
|
||||
async def _get_rpc_session(self) -> aiohttp.ClientSession:
|
||||
if self._rpc_session is None or self._rpc_session.closed:
|
||||
async with self._rpc_session_lock:
|
||||
if self._rpc_session is None or self._rpc_session.closed:
|
||||
timeout = aiohttp.ClientTimeout(total=30)
|
||||
self._rpc_session = aiohttp.ClientSession(timeout=timeout)
|
||||
return self._rpc_session
|
||||
|
||||
@staticmethod
|
||||
def _find_free_port() -> int:
|
||||
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
|
||||
sock.bind(("127.0.0.1", 0))
|
||||
sock.listen(1)
|
||||
return int(sock.getsockname()[1])
|
||||
|
||||
|
||||
async def get_aria2_downloader() -> Aria2Downloader:
|
||||
"""Get the singleton aria2 downloader."""
|
||||
|
||||
return await Aria2Downloader.get_instance()
|
||||
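Because _resolve_executable consults the settings manager, the experimental aria2 path is enabled purely by configuration: either have aria2c on PATH or point the aria2c_path setting at a binary, then request the singleton. A small smoke-test sketch, assuming the repository root is importable (it deliberately calls a private method and is illustrative only):

import asyncio

from py.services.aria2_downloader import Aria2Error, get_aria2_downloader


async def aria2_smoke_test() -> bool:
    """Start (or reuse) the aria2 RPC daemon and report whether it is running."""
    downloader = await get_aria2_downloader()
    try:
        # _ensure_process is private; calling it here is purely illustrative.
        await downloader._ensure_process()
    except Aria2Error as exc:
        print(f"aria2 unavailable: {exc}")
        return False
    return downloader.is_running


if __name__ == "__main__":
    print(asyncio.run(aria2_smoke_test()))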
108  py/services/aria2_transfer_state.py  Normal file
@@ -0,0 +1,108 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import json
|
||||
import os
|
||||
from copy import deepcopy
|
||||
from typing import Any, Dict, Optional
|
||||
|
||||
from ..utils.cache_paths import get_cache_base_dir
|
||||
|
||||
|
||||
def get_aria2_state_path() -> str:
|
||||
base_dir = get_cache_base_dir(create=True)
|
||||
state_dir = os.path.join(base_dir, "aria2")
|
||||
os.makedirs(state_dir, exist_ok=True)
|
||||
return os.path.join(state_dir, "downloads.json")
|
||||
|
||||
|
||||
class Aria2TransferStateStore:
|
||||
"""Persist aria2 transfer metadata needed for restart recovery."""
|
||||
|
||||
_locks_by_path: Dict[str, asyncio.Lock] = {}
|
||||
|
||||
def __init__(self, state_path: Optional[str] = None) -> None:
|
||||
self._state_path = os.path.abspath(state_path or get_aria2_state_path())
|
||||
self._lock = self._locks_by_path.setdefault(self._state_path, asyncio.Lock())
|
||||
|
||||
def _read_all_unlocked(self) -> Dict[str, Dict[str, Any]]:
|
||||
try:
|
||||
with open(self._state_path, "r", encoding="utf-8") as handle:
|
||||
data = json.load(handle)
|
||||
except FileNotFoundError:
|
||||
return {}
|
||||
except json.JSONDecodeError:
|
||||
return {}
|
||||
|
||||
if not isinstance(data, dict):
|
||||
return {}
|
||||
|
||||
normalized: Dict[str, Dict[str, Any]] = {}
|
||||
for download_id, entry in data.items():
|
||||
if isinstance(download_id, str) and isinstance(entry, dict):
|
||||
normalized[download_id] = entry
|
||||
return normalized
|
||||
|
||||
def _write_all_unlocked(self, data: Dict[str, Dict[str, Any]]) -> None:
|
||||
directory = os.path.dirname(self._state_path)
|
||||
if directory:
|
||||
os.makedirs(directory, exist_ok=True)
|
||||
|
||||
temp_path = f"{self._state_path}.tmp"
|
||||
with open(temp_path, "w", encoding="utf-8") as handle:
|
||||
json.dump(data, handle, ensure_ascii=True, indent=2, sort_keys=True)
|
||||
os.replace(temp_path, self._state_path)
|
||||
|
||||
async def load_all(self) -> Dict[str, Dict[str, Any]]:
|
||||
async with self._lock:
|
||||
return deepcopy(self._read_all_unlocked())
|
||||
|
||||
async def get(self, download_id: str) -> Optional[Dict[str, Any]]:
|
||||
async with self._lock:
|
||||
return deepcopy(self._read_all_unlocked().get(download_id))
|
||||
|
||||
async def upsert(self, download_id: str, payload: Dict[str, Any]) -> Dict[str, Any]:
|
||||
async with self._lock:
|
||||
data = self._read_all_unlocked()
|
||||
current = data.get(download_id, {})
|
||||
current.update(payload)
|
||||
data[download_id] = current
|
||||
self._write_all_unlocked(data)
|
||||
return deepcopy(current)
|
||||
|
||||
async def remove(self, download_id: str) -> None:
|
||||
async with self._lock:
|
||||
data = self._read_all_unlocked()
|
||||
if download_id in data:
|
||||
del data[download_id]
|
||||
self._write_all_unlocked(data)
|
||||
|
||||
async def find_by_save_path(
|
||||
self, save_path: str, *, exclude_download_id: Optional[str] = None
|
||||
) -> Optional[Dict[str, Any]]:
|
||||
normalized_target = os.path.abspath(save_path)
|
||||
async with self._lock:
|
||||
data = self._read_all_unlocked()
|
||||
for download_id, entry in data.items():
|
||||
if exclude_download_id and download_id == exclude_download_id:
|
||||
continue
|
||||
candidate = entry.get("save_path")
|
||||
if isinstance(candidate, str) and os.path.abspath(candidate) == normalized_target:
|
||||
result = dict(entry)
|
||||
result["download_id"] = download_id
|
||||
return result
|
||||
return None
|
||||
|
||||
async def reassign(self, from_download_id: str, to_download_id: str) -> Optional[Dict[str, Any]]:
|
||||
async with self._lock:
|
||||
data = self._read_all_unlocked()
|
||||
existing = data.get(from_download_id)
|
||||
if existing is None:
|
||||
return None
|
||||
updated = dict(existing)
|
||||
updated["download_id"] = to_download_id
|
||||
data[to_download_id] = updated
|
||||
if from_download_id != to_download_id:
|
||||
data.pop(from_download_id, None)
|
||||
self._write_all_unlocked(data)
|
||||
return deepcopy(updated)
|
||||
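The store above keeps one JSON object per download id under the cache directory's aria2/downloads.json, written atomically via a temp file and os.replace. Restart recovery then amounts to reading that map and re-registering each gid with the downloader; the sketch below shows that flow using only the methods defined above, though whether anything calls it exactly this way is not visible in this diff.

import asyncio

from py.services.aria2_downloader import get_aria2_downloader
from py.services.aria2_transfer_state import Aria2TransferStateStore


async def restore_pending_transfers() -> int:
    """Re-attach previously scheduled aria2 transfers after a restart."""
    store = Aria2TransferStateStore()
    downloader = await get_aria2_downloader()

    restored = 0
    for download_id, entry in (await store.load_all()).items():
        gid, save_path = entry.get("gid"), entry.get("save_path")
        if not gid or not save_path:
            continue
        await downloader.restore_transfer(download_id, gid, save_path)
        restored += 1
    return restored


if __name__ == "__main__":
    print(asyncio.run(restore_pending_transfers()))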
411  py/services/backup_service.py  Normal file
@@ -0,0 +1,411 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import contextlib
|
||||
import hashlib
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import shutil
|
||||
import tempfile
|
||||
import time
|
||||
import zipfile
|
||||
from dataclasses import dataclass
|
||||
from datetime import datetime, timezone
|
||||
from pathlib import Path
|
||||
from typing import Any, Iterable, Optional
|
||||
|
||||
from ..utils.cache_paths import CacheType, get_cache_base_dir, get_cache_file_path
|
||||
from ..utils.settings_paths import get_settings_dir
|
||||
from .settings_manager import get_settings_manager
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
BACKUP_MANIFEST_VERSION = 1
|
||||
DEFAULT_BACKUP_RETENTION_COUNT = 5
|
||||
DEFAULT_BACKUP_INTERVAL_SECONDS = 24 * 60 * 60
|
||||
|
||||
|
||||
@dataclass(frozen=True)
|
||||
class BackupEntry:
|
||||
kind: str
|
||||
archive_path: str
|
||||
target_path: str
|
||||
sha256: str
|
||||
size: int
|
||||
mtime: float
|
||||
|
||||
|
||||
class BackupService:
|
||||
"""Create and restore user-state backup archives."""
|
||||
|
||||
_instance: "BackupService | None" = None
|
||||
_instance_lock = asyncio.Lock()
|
||||
|
||||
def __init__(self, *, settings_manager=None, backup_dir: str | None = None) -> None:
|
||||
self._settings = settings_manager or get_settings_manager()
|
||||
self._backup_dir = Path(backup_dir or self._resolve_backup_dir())
|
||||
self._backup_dir.mkdir(parents=True, exist_ok=True)
|
||||
self._lock = asyncio.Lock()
|
||||
self._auto_task: asyncio.Task[None] | None = None
|
||||
|
||||
@classmethod
|
||||
async def get_instance(cls) -> "BackupService":
|
||||
async with cls._instance_lock:
|
||||
if cls._instance is None:
|
||||
cls._instance = cls()
|
||||
cls._instance._ensure_auto_snapshot_task()
|
||||
return cls._instance
|
||||
|
||||
@staticmethod
|
||||
def _resolve_backup_dir() -> str:
|
||||
return os.path.join(get_settings_dir(create=True), "backups")
|
||||
|
||||
def get_backup_dir(self) -> str:
|
||||
return str(self._backup_dir)
|
||||
|
||||
def _ensure_auto_snapshot_task(self) -> None:
|
||||
if self._auto_task is not None and not self._auto_task.done():
|
||||
return
|
||||
|
||||
try:
|
||||
loop = asyncio.get_running_loop()
|
||||
except RuntimeError:
|
||||
return
|
||||
|
||||
self._auto_task = loop.create_task(self._auto_backup_loop())
|
||||
|
||||
def _get_setting_bool(self, key: str, default: bool) -> bool:
|
||||
try:
|
||||
return bool(self._settings.get(key, default))
|
||||
except Exception:
|
||||
return default
|
||||
|
||||
def _get_setting_int(self, key: str, default: int) -> int:
|
||||
try:
|
||||
value = self._settings.get(key, default)
|
||||
return max(1, int(value))
|
||||
except Exception:
|
||||
return default
|
||||
|
||||
def _settings_file_path(self) -> str:
|
||||
settings_file = getattr(self._settings, "settings_file", None)
|
||||
if settings_file:
|
||||
return str(settings_file)
|
||||
return os.path.join(get_settings_dir(create=True), "settings.json")
|
||||
|
||||
def _download_history_path(self) -> str:
|
||||
base_dir = get_cache_base_dir(create=True)
|
||||
history_dir = os.path.join(base_dir, "download_history")
|
||||
os.makedirs(history_dir, exist_ok=True)
|
||||
return os.path.join(history_dir, "downloaded_versions.sqlite")
|
||||
|
||||
def _model_update_dir(self) -> str:
|
||||
return str(Path(get_cache_file_path(CacheType.MODEL_UPDATE, create_dir=True)).parent)
|
||||
|
||||
def _model_update_targets(self) -> list[tuple[str, str, str]]:
|
||||
"""Return (kind, archive_path, target_path) tuples for backup."""
|
||||
|
||||
targets: list[tuple[str, str, str]] = []
|
||||
|
||||
settings_path = self._settings_file_path()
|
||||
targets.append(("settings", "settings/settings.json", settings_path))
|
||||
|
||||
history_path = self._download_history_path()
|
||||
targets.append(
|
||||
(
|
||||
"download_history",
|
||||
"cache/download_history/downloaded_versions.sqlite",
|
||||
history_path,
|
||||
)
|
||||
)
|
||||
|
||||
symlink_path = get_cache_file_path(CacheType.SYMLINK, create_dir=True)
|
||||
targets.append(
|
||||
(
|
||||
"symlink_map",
|
||||
"cache/symlink/symlink_map.json",
|
||||
symlink_path,
|
||||
)
|
||||
)
|
||||
|
||||
model_update_dir = Path(self._model_update_dir())
|
||||
if model_update_dir.exists():
|
||||
for sqlite_file in sorted(model_update_dir.glob("*.sqlite")):
|
||||
targets.append(
|
||||
(
|
||||
"model_update",
|
||||
f"cache/model_update/{sqlite_file.name}",
|
||||
str(sqlite_file),
|
||||
)
|
||||
)
|
||||
|
||||
return targets
|
||||
|
||||
@staticmethod
|
||||
def _hash_file(path: str) -> tuple[str, int, float]:
|
||||
digest = hashlib.sha256()
|
||||
total = 0
|
||||
with open(path, "rb") as handle:
|
||||
for chunk in iter(lambda: handle.read(1024 * 1024), b""):
|
||||
total += len(chunk)
|
||||
digest.update(chunk)
|
||||
mtime = os.path.getmtime(path)
|
||||
return digest.hexdigest(), total, mtime
|
||||
|
||||
def _build_manifest(self, entries: Iterable[BackupEntry], *, snapshot_type: str) -> dict[str, Any]:
|
||||
created_at = datetime.now(timezone.utc).isoformat()
|
||||
active_library = None
|
||||
try:
|
||||
active_library = self._settings.get_active_library_name()
|
||||
except Exception:
|
||||
active_library = None
|
||||
|
||||
return {
|
||||
"manifest_version": BACKUP_MANIFEST_VERSION,
|
||||
"created_at": created_at,
|
||||
"snapshot_type": snapshot_type,
|
||||
"active_library": active_library,
|
||||
"files": [
|
||||
{
|
||||
"kind": entry.kind,
|
||||
"archive_path": entry.archive_path,
|
||||
"target_path": entry.target_path,
|
||||
"sha256": entry.sha256,
|
||||
"size": entry.size,
|
||||
"mtime": entry.mtime,
|
||||
}
|
||||
for entry in entries
|
||||
],
|
||||
}
|
||||
|
||||
def _write_archive(self, archive_path: str, entries: list[BackupEntry], manifest: dict[str, Any]) -> None:
|
||||
with zipfile.ZipFile(
|
||||
archive_path,
|
||||
mode="w",
|
||||
compression=zipfile.ZIP_DEFLATED,
|
||||
compresslevel=6,
|
||||
) as zf:
|
||||
zf.writestr(
|
||||
"manifest.json",
|
||||
json.dumps(manifest, indent=2, ensure_ascii=False).encode("utf-8"),
|
||||
)
|
||||
for entry in entries:
|
||||
zf.write(entry.target_path, arcname=entry.archive_path)
|
||||
|
||||
async def create_snapshot(self, *, snapshot_type: str = "manual", persist: bool = False) -> dict[str, Any]:
|
||||
"""Create a backup archive.
|
||||
|
||||
If ``persist`` is true, the archive is stored in the backup directory
|
||||
and retained according to the configured retention policy.
|
||||
"""
|
||||
|
||||
async with self._lock:
|
||||
raw_targets = self._model_update_targets()
|
||||
entries: list[BackupEntry] = []
|
||||
for kind, archive_path, target_path in raw_targets:
|
||||
if not os.path.exists(target_path):
|
||||
continue
|
||||
sha256, size, mtime = self._hash_file(target_path)
|
||||
entries.append(
|
||||
BackupEntry(
|
||||
kind=kind,
|
||||
archive_path=archive_path,
|
||||
target_path=target_path,
|
||||
sha256=sha256,
|
||||
size=size,
|
||||
mtime=mtime,
|
||||
)
|
||||
)
|
||||
|
||||
if not entries:
|
||||
raise FileNotFoundError("No backupable files were found")
|
||||
|
||||
manifest = self._build_manifest(entries, snapshot_type=snapshot_type)
|
||||
archive_name = self._build_archive_name(snapshot_type=snapshot_type)
|
||||
fd, temp_path = tempfile.mkstemp(suffix=".zip", dir=str(self._backup_dir))
|
||||
os.close(fd)
|
||||
|
||||
try:
|
||||
self._write_archive(temp_path, entries, manifest)
|
||||
if persist:
|
||||
final_path = self._backup_dir / archive_name
|
||||
os.replace(temp_path, final_path)
|
||||
self._prune_snapshots()
|
||||
return {
|
||||
"archive_path": str(final_path),
|
||||
"archive_name": final_path.name,
|
||||
"manifest": manifest,
|
||||
}
|
||||
|
||||
with open(temp_path, "rb") as handle:
|
||||
data = handle.read()
|
||||
return {
|
||||
"archive_name": archive_name,
|
||||
"archive_bytes": data,
|
||||
"manifest": manifest,
|
||||
}
|
||||
finally:
|
||||
with contextlib.suppress(FileNotFoundError):
|
||||
os.remove(temp_path)
|
||||
|
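A minimal usage sketch of the snapshot API above; `backup_service` stands for an already-constructed instance of the surrounding backup service class (the instance name and export path are assumptions for illustration only):

async def snapshot_roundtrip(backup_service):
    # Persisted snapshot: written under the backup directory and pruned by the retention policy
    persisted = await backup_service.create_snapshot(snapshot_type="manual", persist=True)
    print(persisted["archive_path"], len(persisted["manifest"]["files"]))

    # In-memory snapshot: archive bytes are returned and the temp file is removed
    transient = await backup_service.create_snapshot()
    with open("/tmp/lora-manager-export.zip", "wb") as fh:  # hypothetical export target
        fh.write(transient["archive_bytes"])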
||||
def _build_archive_name(self, *, snapshot_type: str) -> str:
|
||||
timestamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
|
||||
return f"lora-manager-backup-{timestamp}-{snapshot_type}.zip"
|
||||
|
||||
def _prune_snapshots(self) -> None:
|
||||
retention = self._get_setting_int(
|
||||
"backup_retention_count", DEFAULT_BACKUP_RETENTION_COUNT
|
||||
)
|
||||
archives = sorted(
|
||||
self._backup_dir.glob("lora-manager-backup-*-auto.zip"),
|
||||
key=lambda path: path.stat().st_mtime,
|
||||
reverse=True,
|
||||
)
|
||||
for path in archives[retention:]:
|
||||
with contextlib.suppress(OSError):
|
||||
path.unlink()
|
||||
|
||||
async def restore_snapshot(self, archive_path: str) -> dict[str, Any]:
|
||||
"""Restore backup contents from a ZIP archive."""
|
||||
|
||||
async with self._lock:
|
||||
try:
|
||||
zf = zipfile.ZipFile(archive_path, mode="r")
|
||||
except zipfile.BadZipFile as exc:
|
||||
raise ValueError("Backup archive is not a valid ZIP file") from exc
|
||||
|
||||
with zf:
|
||||
try:
|
||||
manifest = json.loads(zf.read("manifest.json").decode("utf-8"))
|
||||
except KeyError as exc:
|
||||
raise ValueError("Backup archive is missing manifest.json") from exc
|
||||
|
||||
if not isinstance(manifest, dict):
|
||||
raise ValueError("Backup manifest is invalid")
|
||||
if manifest.get("manifest_version") != BACKUP_MANIFEST_VERSION:
|
||||
raise ValueError("Backup manifest version is not supported")
|
||||
|
||||
files = manifest.get("files", [])
|
||||
if not isinstance(files, list):
|
||||
raise ValueError("Backup manifest file list is invalid")
|
||||
|
||||
extracted_paths: list[tuple[str, str]] = []
|
||||
temp_dir = Path(tempfile.mkdtemp(prefix="lora-manager-restore-"))
|
||||
try:
|
||||
for item in files:
|
||||
if not isinstance(item, dict):
|
||||
continue
|
||||
archive_member = item.get("archive_path")
|
||||
if not isinstance(archive_member, str) or not archive_member:
|
||||
continue
|
||||
archive_member_path = Path(archive_member)
|
||||
if archive_member_path.is_absolute() or ".." in archive_member_path.parts:
|
||||
raise ValueError(f"Invalid archive member path: {archive_member}")
|
||||
|
||||
kind = item.get("kind")
|
||||
target_path = self._resolve_restore_target(kind, archive_member)
|
||||
if target_path is None:
|
||||
continue
|
||||
|
||||
extracted_path = temp_dir / archive_member_path
|
||||
extracted_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
with zf.open(archive_member) as source, open(
|
||||
extracted_path, "wb"
|
||||
) as destination:
|
||||
shutil.copyfileobj(source, destination)
|
||||
|
||||
expected_hash = item.get("sha256")
|
||||
if isinstance(expected_hash, str) and expected_hash:
|
||||
actual_hash, _, _ = self._hash_file(str(extracted_path))
|
||||
if actual_hash != expected_hash:
|
||||
raise ValueError(
|
||||
f"Checksum mismatch for {archive_member}"
|
||||
)
|
||||
|
||||
extracted_paths.append((str(extracted_path), target_path))
|
||||
|
||||
for extracted_path, target_path in extracted_paths:
|
||||
os.makedirs(os.path.dirname(target_path), exist_ok=True)
|
||||
os.replace(extracted_path, target_path)
|
||||
finally:
|
||||
shutil.rmtree(temp_dir, ignore_errors=True)
|
||||
|
||||
return {
|
||||
"success": True,
|
||||
"restored_files": len(extracted_paths),
|
||||
"snapshot_type": manifest.get("snapshot_type"),
|
||||
}
|
||||
|
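Restoring from an exported archive follows the same manifest: each member is checksum-verified in a temporary directory before being moved into place with os.replace. A hedged sketch, with the instance and path names illustrative:

async def restore_from_export(backup_service, archive_path="/tmp/lora-manager-export.zip"):
    result = await backup_service.restore_snapshot(archive_path)
    print(result["restored_files"], result["snapshot_type"])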
||||
def _resolve_restore_target(self, kind: Any, archive_member: str) -> str | None:
|
||||
if kind == "settings":
|
||||
return self._settings_file_path()
|
||||
if kind == "download_history":
|
||||
return self._download_history_path()
|
||||
if kind == "symlink_map":
|
||||
return get_cache_file_path(CacheType.SYMLINK, create_dir=True)
|
||||
if kind == "model_update":
|
||||
filename = os.path.basename(archive_member)
|
||||
return str(Path(get_cache_file_path(CacheType.MODEL_UPDATE, create_dir=True)).parent / filename)
|
||||
return None
|
||||
|
||||
async def create_auto_snapshot_if_due(self) -> Optional[dict[str, Any]]:
|
||||
if not self._get_setting_bool("backup_auto_enabled", True):
|
||||
return None
|
||||
|
||||
latest = self.get_latest_auto_snapshot()
|
||||
now = time.time()
|
||||
if latest and now - latest["mtime"] < DEFAULT_BACKUP_INTERVAL_SECONDS:
|
||||
return None
|
||||
|
||||
return await self.create_snapshot(snapshot_type="auto", persist=True)
|
||||
|
||||
async def _auto_backup_loop(self) -> None:
|
||||
while True:
|
||||
try:
|
||||
await self.create_auto_snapshot_if_due()
|
||||
await asyncio.sleep(DEFAULT_BACKUP_INTERVAL_SECONDS)
|
||||
except asyncio.CancelledError:
|
||||
raise
|
||||
except Exception as exc: # pragma: no cover - defensive guard
|
||||
logger.warning("Automatic backup snapshot failed: %s", exc, exc_info=True)
|
||||
await asyncio.sleep(60)
|
||||
|
||||
def get_available_snapshots(self) -> list[dict[str, Any]]:
|
||||
snapshots: list[dict[str, Any]] = []
|
||||
for path in sorted(self._backup_dir.glob("lora-manager-backup-*.zip")):
|
||||
try:
|
||||
stat = path.stat()
|
||||
except OSError:
|
||||
continue
|
||||
snapshots.append(
|
||||
{
|
||||
"name": path.name,
|
||||
"path": str(path),
|
||||
"size": stat.st_size,
|
||||
"mtime": stat.st_mtime,
|
||||
"is_auto": path.name.endswith("-auto.zip"),
|
||||
}
|
||||
)
|
||||
snapshots.sort(key=lambda item: item["mtime"], reverse=True)
|
||||
return snapshots
|
||||
|
||||
def get_latest_auto_snapshot(self) -> Optional[dict[str, Any]]:
|
||||
autos = [snapshot for snapshot in self.get_available_snapshots() if snapshot["is_auto"]]
|
||||
if not autos:
|
||||
return None
|
||||
return autos[0]
|
||||
|
||||
def get_status(self) -> dict[str, Any]:
|
||||
snapshots = self.get_available_snapshots()
|
||||
return {
|
||||
"backupDir": self.get_backup_dir(),
|
||||
"enabled": self._get_setting_bool("backup_auto_enabled", True),
|
||||
"retentionCount": self._get_setting_int(
|
||||
"backup_retention_count", DEFAULT_BACKUP_RETENTION_COUNT
|
||||
),
|
||||
"snapshotCount": len(snapshots),
|
||||
"latestSnapshot": snapshots[0] if snapshots else None,
|
||||
"latestAutoSnapshot": self.get_latest_auto_snapshot(),
|
||||
}
|
||||
@@ -20,6 +20,7 @@ from .model_query import (
|
||||
resolve_sub_type,
|
||||
)
|
||||
from .settings_manager import get_settings_manager
|
||||
from ..utils.civitai_utils import build_civitai_model_page_url
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -178,6 +179,57 @@ class BaseModelService(ABC):
|
||||
)
|
||||
return paginated
|
||||
|
||||
async def get_excluded_paginated_data(
|
||||
self,
|
||||
page: int,
|
||||
page_size: int,
|
||||
sort_by: str = "name",
|
||||
search: str = None,
|
||||
fuzzy_search: bool = False,
|
||||
search_options: dict = None,
|
||||
**kwargs,
|
||||
) -> Dict:
|
||||
"""Get paginated excluded model data."""
|
||||
excluded_paths = list(self.scanner.get_excluded_models())
|
||||
excluded_entries: List[Dict[str, Any]] = []
|
||||
stale_paths: List[str] = []
|
||||
|
||||
for file_path in excluded_paths:
|
||||
if not file_path or not os.path.exists(file_path):
|
||||
stale_paths.append(file_path)
|
||||
continue
|
||||
|
||||
entry = await self._build_excluded_entry(file_path)
|
||||
if entry:
|
||||
excluded_entries.append(entry)
|
||||
else:
|
||||
stale_paths.append(file_path)
|
||||
|
||||
if stale_paths:
|
||||
current_excluded = getattr(self.scanner, "_excluded_models", None)
|
||||
if isinstance(current_excluded, list):
|
||||
stale_set = set(stale_paths)
|
||||
self.scanner._excluded_models = [
|
||||
path for path in current_excluded if path not in stale_set
|
||||
]
|
||||
persist_current_cache = getattr(self.scanner, "_persist_current_cache", None)
|
||||
if callable(persist_current_cache):
|
||||
await persist_current_cache()
|
||||
|
||||
excluded_entries = self._sort_entries(excluded_entries, sort_by)
|
||||
|
||||
if search:
|
||||
excluded_entries = await self._apply_search_filters(
|
||||
excluded_entries,
|
||||
search,
|
||||
fuzzy_search,
|
||||
search_options,
|
||||
)
|
||||
|
||||
paginated = self._paginate(excluded_entries, page, page_size)
|
||||
paginated["items"] = await self._annotate_update_flags(paginated["items"])
|
||||
return paginated
|
||||
|
||||
async def _fetch_with_usage_sort(self, sort_params):
|
||||
"""Fetch data sorted by usage count (desc/asc)."""
|
||||
cache = await self.cache_repository.get_cache()
|
||||
@@ -217,6 +269,62 @@ class BaseModelService(ABC):
|
||||
)
|
||||
return annotated
|
||||
|
||||
def _sort_entries(self, data: List[Dict[str, Any]], sort_by: str) -> List[Dict[str, Any]]:
|
||||
sort_params = self.cache_repository.parse_sort(sort_by)
|
||||
key_name = sort_params.key
|
||||
|
||||
if key_name == "date":
|
||||
key_fn = lambda item: (
|
||||
float(item.get("modified", 0.0) or 0.0),
|
||||
(item.get("model_name") or item.get("file_name") or "").lower(),
|
||||
item.get("file_path", "").lower(),
|
||||
)
|
||||
elif key_name == "size":
|
||||
key_fn = lambda item: (
|
||||
int(item.get("size", 0) or 0),
|
||||
(item.get("model_name") or item.get("file_name") or "").lower(),
|
||||
item.get("file_path", "").lower(),
|
||||
)
|
||||
elif key_name == "usage":
|
||||
key_fn = lambda item: (
|
||||
int(item.get("usage_count", 0) or 0),
|
||||
(item.get("model_name") or item.get("file_name") or "").lower(),
|
||||
item.get("file_path", "").lower(),
|
||||
)
|
||||
else:
|
||||
key_fn = lambda item: (
|
||||
(item.get("model_name") or item.get("file_name") or "").lower(),
|
||||
item.get("file_path", "").lower(),
|
||||
)
|
||||
|
||||
return sorted(data, key=key_fn, reverse=sort_params.order == "desc")
|
||||
|
||||
async def _build_excluded_entry(self, file_path: str) -> Optional[Dict[str, Any]]:
|
||||
root_path = self.scanner._find_root_for_file(file_path)
|
||||
if not root_path:
|
||||
return None
|
||||
|
||||
metadata, should_skip = await MetadataManager.load_metadata(
|
||||
file_path,
|
||||
self.metadata_class,
|
||||
)
|
||||
if should_skip:
|
||||
return None
|
||||
|
||||
if metadata is None:
|
||||
metadata = await self.scanner._create_default_metadata(file_path)
|
||||
if metadata is None:
|
||||
return None
|
||||
|
||||
metadata = self.scanner.adjust_metadata(metadata, file_path, root_path)
|
||||
folder = os.path.dirname(os.path.relpath(file_path, root_path)).replace(
|
||||
os.path.sep, "/"
|
||||
)
|
||||
entry = self.scanner._build_cache_entry(metadata, folder=folder)
|
||||
entry = self.scanner.adjust_cached_entry(entry)
|
||||
entry["exclude"] = True
|
||||
return entry
|
||||
|
||||
async def _apply_hash_filters(
|
||||
self, data: List[Dict], hash_filters: Dict
|
||||
) -> List[Dict]:
|
||||
@@ -774,9 +882,12 @@ class BaseModelService(ABC):
|
||||
version_id = civitai_data.get("id")
|
||||
|
||||
if model_id:
|
||||
civitai_url = f"https://civitai.com/models/{model_id}"
|
||||
if version_id:
|
||||
civitai_url += f"?modelVersionId={version_id}"
|
||||
civitai_host = self.settings.get("civitai_host", "civitai.com")
|
||||
civitai_url = build_civitai_model_page_url(
|
||||
model_id,
|
||||
version_id,
|
||||
host=civitai_host,
|
||||
)
|
||||
|
||||
return {
|
||||
"civitai_url": civitai_url,
|
||||
|
||||
@@ -1,3 +1,4 @@
|
||||
import asyncio
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
@@ -36,6 +37,9 @@ class CheckpointScanner(ModelScanner):
|
||||
file_extensions=file_extensions,
|
||||
hash_index=ModelHashIndex(),
|
||||
)
|
||||
if not hasattr(self, "_hash_calculation_lock"):
|
||||
self._hash_calculation_lock = asyncio.Lock()
|
||||
self._hash_calculation_tasks: dict[str, asyncio.Task[Optional[str]]] = {}
|
||||
|
||||
async def _create_default_metadata(
|
||||
self, file_path: str
|
||||
@@ -88,7 +92,7 @@ class CheckpointScanner(ModelScanner):
|
||||
return None
|
||||
|
||||
async def calculate_hash_for_model(self, file_path: str) -> Optional[str]:
|
||||
"""Calculate hash for a checkpoint on-demand.
|
||||
"""Calculate hash for a checkpoint on-demand with per-file singleflight.
|
||||
|
||||
Args:
|
||||
file_path: Path to the model file
|
||||
@@ -96,21 +100,78 @@ class CheckpointScanner(ModelScanner):
|
||||
Returns:
|
||||
SHA256 hash string, or None if calculation failed
|
||||
"""
|
||||
from ..utils.file_utils import calculate_sha256
|
||||
|
||||
try:
|
||||
real_path = os.path.realpath(file_path)
|
||||
if not os.path.exists(real_path):
|
||||
logger.error(f"File not found for hash calculation: {file_path}")
|
||||
return None
|
||||
|
||||
# Load current metadata
|
||||
metadata, _ = await MetadataManager.load_metadata(
|
||||
file_path, self.model_class
|
||||
)
|
||||
if (
|
||||
metadata is not None
|
||||
and metadata.hash_status == "completed"
|
||||
and metadata.sha256
|
||||
):
|
||||
return metadata.sha256
|
||||
|
||||
async with self._hash_calculation_lock:
|
||||
metadata, _ = await MetadataManager.load_metadata(
|
||||
file_path, self.model_class
|
||||
)
|
||||
if (
|
||||
metadata is not None
|
||||
and metadata.hash_status == "completed"
|
||||
and metadata.sha256
|
||||
):
|
||||
return metadata.sha256
|
||||
|
||||
task = self._hash_calculation_tasks.get(real_path)
|
||||
if task is None:
|
||||
task = asyncio.create_task(
|
||||
self._run_hash_calculation_task(file_path, real_path)
|
||||
)
|
||||
self._hash_calculation_tasks[real_path] = task
|
||||
|
||||
return await asyncio.shield(task)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error calculating hash for {file_path}: {e}")
|
||||
return None
|
||||
|
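The per-file singleflight used above (one shared task per real path, awaited through asyncio.shield so a cancelled caller does not cancel the shared hash computation) can be shown in isolation. The sketch below is a generic standalone version of the pattern, not the CheckpointScanner code itself:

import asyncio

class Singleflight:
    """One shared task per key; concurrent callers await the same result."""

    def __init__(self) -> None:
        self._lock = asyncio.Lock()
        self._tasks: dict[str, asyncio.Task] = {}

    async def run(self, key, coro_factory):
        async with self._lock:
            task = self._tasks.get(key)
            if task is None:
                task = asyncio.create_task(self._wrap(key, coro_factory))
                self._tasks[key] = task
        # shield: cancelling one awaiting caller does not cancel the shared task
        return await asyncio.shield(task)

    async def _wrap(self, key, coro_factory):
        try:
            return await coro_factory()
        finally:
            async with self._lock:
                if self._tasks.get(key) is asyncio.current_task():
                    del self._tasks[key]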
||||
async def _run_hash_calculation_task(
|
||||
self, file_path: str, real_path: str
|
||||
) -> Optional[str]:
|
||||
"""Run a hash calculation task and remove it from the in-flight map."""
|
||||
try:
|
||||
return await self._calculate_hash_for_model_uncached(file_path, real_path)
|
||||
finally:
|
||||
task = asyncio.current_task()
|
||||
async with self._hash_calculation_lock:
|
||||
if self._hash_calculation_tasks.get(real_path) is task:
|
||||
del self._hash_calculation_tasks[real_path]
|
||||
|
||||
async def _calculate_hash_for_model_uncached(
|
||||
self, file_path: str, real_path: str
|
||||
) -> Optional[str]:
|
||||
"""Calculate hash for a checkpoint without checking in-flight tasks."""
|
||||
from ..utils.file_utils import calculate_sha256
|
||||
|
||||
try:
|
||||
# Load current metadata
|
||||
metadata, should_skip = await MetadataManager.load_metadata(
|
||||
file_path, self.model_class
|
||||
)
|
||||
if metadata is None:
|
||||
logger.error(f"No metadata found for {file_path}")
|
||||
return None
|
||||
if should_skip:
|
||||
logger.error(f"Invalid metadata found for {file_path}")
|
||||
return None
|
||||
created_metadata = await self._create_default_metadata(file_path)
|
||||
if created_metadata is None:
|
||||
logger.error(f"No metadata found for {file_path}")
|
||||
return None
|
||||
metadata = created_metadata
|
||||
|
||||
# Check if hash is already calculated
|
||||
if metadata.hash_status == "completed" and metadata.sha256:
|
||||
|
||||
@@ -42,6 +42,7 @@ class CheckpointService(BaseModelService):
|
||||
"notes": checkpoint_data.get("notes", ""),
|
||||
"sub_type": sub_type,
|
||||
"favorite": checkpoint_data.get("favorite", False),
|
||||
"exclude": bool(checkpoint_data.get("exclude", False)),
|
||||
"update_available": bool(checkpoint_data.get("update_available", False)),
|
||||
"skip_metadata_refresh": bool(checkpoint_data.get("skip_metadata_refresh", False)),
|
||||
"civitai": self.filter_civitai_data(checkpoint_data.get("civitai", {}), minimal=True)
|
||||
|
||||
py/services/civitai_base_model_service.py (new file, 430 lines)
@@ -0,0 +1,430 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import json
|
||||
import logging
|
||||
import re
|
||||
from datetime import datetime, timezone
|
||||
from typing import Any, Dict, List, Optional, Set, Tuple
|
||||
|
||||
from ..utils.constants import SUPPORTED_DOWNLOAD_SKIP_BASE_MODELS
|
||||
from .downloader import get_downloader
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class CivitaiBaseModelService:
|
||||
"""Service for fetching and managing Civitai base models.
|
||||
|
||||
This service provides:
|
||||
- Fetching base models from Civitai API
|
||||
- Caching with TTL (7 days default)
|
||||
- Merging hardcoded and remote base models
|
||||
- Generating abbreviations for new/unknown models
|
||||
"""
|
||||
|
||||
_instance: Optional[CivitaiBaseModelService] = None
|
||||
_lock = asyncio.Lock()
|
||||
|
||||
# Default TTL for cache in seconds (7 days)
|
||||
DEFAULT_CACHE_TTL = 7 * 24 * 60 * 60
|
||||
|
||||
# Civitai API endpoint for enums
|
||||
CIVITAI_ENUMS_URL = "https://civitai.red/api/v1/enums"
|
||||
|
||||
@classmethod
|
||||
async def get_instance(cls) -> CivitaiBaseModelService:
|
||||
"""Get singleton instance of the service."""
|
||||
async with cls._lock:
|
||||
if cls._instance is None:
|
||||
cls._instance = cls()
|
||||
return cls._instance
|
||||
|
||||
def __init__(self):
|
||||
"""Initialize the service."""
|
||||
if hasattr(self, "_initialized"):
|
||||
return
|
||||
self._initialized = True
|
||||
|
||||
# Cache storage
|
||||
self._cache: Optional[Dict[str, Any]] = None
|
||||
self._cache_timestamp: Optional[datetime] = None
|
||||
self._cache_ttl = self.DEFAULT_CACHE_TTL
|
||||
|
||||
# Hardcoded models for fallback
|
||||
self._hardcoded_models = set(SUPPORTED_DOWNLOAD_SKIP_BASE_MODELS)
|
||||
|
||||
logger.info("CivitaiBaseModelService initialized")
|
||||
|
||||
async def get_base_models(self, force_refresh: bool = False) -> Dict[str, Any]:
|
||||
"""Get merged base models (hardcoded + remote).
|
||||
|
||||
Args:
|
||||
force_refresh: If True, fetch from API regardless of cache state.
|
||||
|
||||
Returns:
|
||||
Dictionary containing:
|
||||
- models: List of merged base model names
|
||||
- source: 'cache', 'api', or 'fallback'
|
||||
- last_updated: ISO timestamp of last successful API fetch
|
||||
- hardcoded_count: Number of hardcoded models
|
||||
- remote_count: Number of remote models
|
||||
- merged_count: Total unique models
|
||||
"""
|
||||
# Check if cache is valid
|
||||
if not force_refresh and self._is_cache_valid():
|
||||
logger.debug("Returning cached base models")
|
||||
return self._build_response("cache")
|
||||
|
||||
# Try to fetch from API
|
||||
try:
|
||||
remote_models = await self._fetch_from_civitai()
|
||||
if remote_models:
|
||||
self._update_cache(remote_models)
|
||||
return self._build_response("api")
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to fetch base models from Civitai: {e}")
|
||||
|
||||
# Fallback to hardcoded models
|
||||
return self._build_response("fallback")
|
||||
|
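A minimal usage sketch of the service above, using the get_civitai_base_model_service() helper defined at the end of this file; the printed fields mirror the response documented in get_base_models():

async def show_base_models():
    service = await get_civitai_base_model_service()
    info = await service.get_base_models()               # served from cache while younger than the 7-day TTL
    print(info["source"], info["merged_count"])           # "cache", "api", or "fallback"
    refreshed = await service.get_base_models(force_refresh=True)
    print(refreshed["last_updated"], len(refreshed["models"]))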
||||
async def refresh_cache(self) -> Dict[str, Any]:
|
||||
"""Force refresh the cache from Civitai API.
|
||||
|
||||
Returns:
|
||||
Response dict same as get_base_models()
|
||||
"""
|
||||
return await self.get_base_models(force_refresh=True)
|
||||
|
||||
def get_cache_status(self) -> Dict[str, Any]:
|
||||
"""Get current cache status.
|
||||
|
||||
Returns:
|
||||
Dictionary containing:
|
||||
- has_cache: Whether cache exists
|
||||
- last_updated: ISO timestamp or None
|
||||
- is_expired: Whether cache is expired
|
||||
- ttl_seconds: TTL in seconds
|
||||
- age_seconds: Age of cache in seconds (if exists)
|
||||
"""
|
||||
if self._cache is None or self._cache_timestamp is None:
|
||||
return {
|
||||
"has_cache": False,
|
||||
"last_updated": None,
|
||||
"is_expired": True,
|
||||
"ttl_seconds": self._cache_ttl,
|
||||
"age_seconds": None,
|
||||
}
|
||||
|
||||
age = (datetime.now(timezone.utc) - self._cache_timestamp).total_seconds()
|
||||
return {
|
||||
"has_cache": True,
|
||||
"last_updated": self._cache_timestamp.isoformat(),
|
||||
"is_expired": age > self._cache_ttl,
|
||||
"ttl_seconds": self._cache_ttl,
|
||||
"age_seconds": int(age),
|
||||
}
|
||||
|
||||
def generate_abbreviation(self, model_name: str) -> str:
|
||||
"""Generate abbreviation for a base model name.
|
||||
|
||||
Algorithm:
|
||||
1. Extract version patterns (e.g., "2.5" from "Wan Video 2.5")
|
||||
2. Extract main acronym (e.g., "SD" from "SD 1.5")
|
||||
3. Handle special cases (Flux, Wan, etc.)
|
||||
4. Fallback to first letters of words (max 4 chars)
|
||||
|
||||
Args:
|
||||
model_name: Full base model name
|
||||
|
||||
Returns:
|
||||
Generated abbreviation (max 4 characters)
|
||||
"""
|
||||
if not model_name or not isinstance(model_name, str):
|
||||
return "OTH"
|
||||
|
||||
name = model_name.strip()
|
||||
if not name:
|
||||
return "OTH"
|
||||
|
||||
# Check if it's already in hardcoded abbreviations
|
||||
# This is a simplified check - in practice you'd have a mapping
|
||||
lower_name = name.lower()
|
||||
|
||||
# Special cases
|
||||
special_cases = {
|
||||
"sd 1.4": "SD1",
|
||||
"sd 1.5": "SD1",
|
||||
"sd 1.5 lcm": "SD1",
|
||||
"sd 1.5 hyper": "SD1",
|
||||
"sd 2.0": "SD2",
|
||||
"sd 2.1": "SD2",
|
||||
"sd 3": "SD3",
|
||||
"sd 3.5": "SD3",
|
||||
"sd 3.5 medium": "SD3",
|
||||
"sd 3.5 large": "SD3",
|
||||
"sd 3.5 large turbo": "SD3",
|
||||
"sdxl 1.0": "XL",
|
||||
"sdxl lightning": "XL",
|
||||
"sdxl hyper": "XL",
|
||||
"flux.1 d": "F1D",
|
||||
"flux.1 s": "F1S",
|
||||
"flux.1 krea": "F1KR",
|
||||
"flux.1 kontext": "F1KX",
|
||||
"flux.2 d": "F2D",
|
||||
"flux.2 klein 9b": "FK9",
|
||||
"flux.2 klein 9b-base": "FK9B",
|
||||
"flux.2 klein 4b": "FK4",
|
||||
"flux.2 klein 4b-base": "FK4B",
|
||||
"auraflow": "AF",
|
||||
"chroma": "CHR",
|
||||
"pixart a": "PXA",
|
||||
"pixart e": "PXE",
|
||||
"hunyuan 1": "HY",
|
||||
"hunyuan video": "HYV",
|
||||
"lumina": "L",
|
||||
"kolors": "KLR",
|
||||
"noobai": "NAI",
|
||||
"illustrious": "IL",
|
||||
"pony": "PONY",
|
||||
"pony v7": "PNY7",
|
||||
"hidream": "HID",
|
||||
"qwen": "QWEN",
|
||||
"zimageturbo": "ZIT",
|
||||
"zimagebase": "ZIB",
|
||||
"anima": "ANI",
|
||||
"svd": "SVD",
|
||||
"ltxv": "LTXV",
|
||||
"ltxv2": "LTV2",
|
||||
"ltxv 2.3": "LTX",
|
||||
"cogvideox": "CVX",
|
||||
"mochi": "MCHI",
|
||||
"wan video": "WAN",
|
||||
"wan video 1.3b t2v": "WAN",
|
||||
"wan video 14b t2v": "WAN",
|
||||
"wan video 14b i2v 480p": "WAN",
|
||||
"wan video 14b i2v 720p": "WAN",
|
||||
"wan video 2.2 ti2v-5b": "WAN",
|
||||
"wan video 2.2 t2v-a14b": "WAN",
|
||||
"wan video 2.2 i2v-a14b": "WAN",
|
||||
"wan video 2.5 t2v": "WAN",
|
||||
"wan video 2.5 i2v": "WAN",
|
||||
}
|
||||
|
||||
if lower_name in special_cases:
|
||||
return special_cases[lower_name]
|
||||
|
||||
# Try to extract acronym from version pattern
|
||||
# e.g., "Model Name 2.5" -> "MN25"
|
||||
version_match = re.search(r"(\d+(?:\.\d+)?)", name)
|
||||
version = version_match.group(1) if version_match else ""
|
||||
|
||||
# Remove version and common words
|
||||
words = re.sub(r"\d+(?:\.\d+)?", "", name)
|
||||
words = re.sub(
|
||||
r"\b(model|video|diffusion|checkpoint|textualinversion)\b",
|
||||
"",
|
||||
words,
|
||||
flags=re.I,
|
||||
)
|
||||
words = words.strip()
|
||||
|
||||
# Get first letters of remaining words
|
||||
tokens = re.findall(r"[A-Za-z]+", words)
|
||||
if tokens:
|
||||
# Build abbreviation from first letters
|
||||
abbrev = "".join(token[0].upper() for token in tokens)
|
||||
# Add version if present
|
||||
if version:
|
||||
# Clean version (remove dots for abbreviation)
|
||||
version_clean = version.replace(".", "")
|
||||
abbrev = abbrev[: 4 - len(version_clean)] + version_clean
|
||||
return abbrev[:4]
|
||||
|
||||
# Final fallback: just take first 4 alphanumeric chars
|
||||
alphanumeric = re.sub(r"[^A-Za-z0-9]", "", name)
|
||||
if alphanumeric:
|
||||
return alphanumeric[:4].upper()
|
||||
|
||||
return "OTH"
|
||||
|
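A few traced examples of the abbreviation rules above, assuming `service` is a CivitaiBaseModelService instance ("Example Model 3.1" is a hypothetical name used only to show the initials-plus-version fallback):

service.generate_abbreviation("SD 1.5")             # "SD1"  (special-case table)
service.generate_abbreviation("AuraFlow")           # "AF"   (special-case table)
service.generate_abbreviation("Example Model 3.1")  # "E31"  (first letters + version digits, capped at 4 chars)
service.generate_abbreviation("")                   # "OTH"  (fallback)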
||||
async def _fetch_from_civitai(self) -> Optional[Set[str]]:
|
||||
"""Fetch base models from Civitai API.
|
||||
|
||||
Returns:
|
||||
Set of base model names, or None if failed
|
||||
"""
|
||||
try:
|
||||
downloader = await get_downloader()
|
||||
success, result = await downloader.make_request(
|
||||
"GET",
|
||||
self.CIVITAI_ENUMS_URL,
|
||||
use_auth=False, # enums endpoint doesn't require auth
|
||||
)
|
||||
|
||||
if not success:
|
||||
logger.warning(f"Failed to fetch enums from Civitai: {result}")
|
||||
return None
|
||||
|
||||
if isinstance(result, str):
|
||||
data = json.loads(result)
|
||||
else:
|
||||
data = result
|
||||
|
||||
# Extract base models from response
|
||||
base_models = set()
|
||||
|
||||
# Use ActiveBaseModel if available (recommended active models)
|
||||
if "ActiveBaseModel" in data:
|
||||
base_models.update(data["ActiveBaseModel"])
|
||||
logger.info(f"Fetched {len(base_models)} models from ActiveBaseModel")
|
||||
# Fallback to full BaseModel list
|
||||
elif "BaseModel" in data:
|
||||
base_models.update(data["BaseModel"])
|
||||
logger.info(f"Fetched {len(base_models)} models from BaseModel")
|
||||
else:
|
||||
logger.warning("No base model data found in Civitai response")
|
||||
return None
|
||||
|
||||
return base_models
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error fetching from Civitai: {e}")
|
||||
return None
|
||||
|
||||
def _update_cache(self, remote_models: Set[str]) -> None:
|
||||
"""Update internal cache with fetched models.
|
||||
|
||||
Args:
|
||||
remote_models: Set of base model names from API
|
||||
"""
|
||||
self._cache = {
|
||||
"remote_models": sorted(remote_models),
|
||||
"hardcoded_models": sorted(self._hardcoded_models),
|
||||
}
|
||||
self._cache_timestamp = datetime.now(timezone.utc)
|
||||
logger.info(f"Cache updated with {len(remote_models)} remote models")
|
||||
|
||||
def _is_cache_valid(self) -> bool:
|
||||
"""Check if current cache is valid (not expired).
|
||||
|
||||
Returns:
|
||||
True if cache exists and is not expired
|
||||
"""
|
||||
if self._cache is None or self._cache_timestamp is None:
|
||||
return False
|
||||
|
||||
age = (datetime.now(timezone.utc) - self._cache_timestamp).total_seconds()
|
||||
return age <= self._cache_ttl
|
||||
|
||||
def _build_response(self, source: str) -> Dict[str, Any]:
|
||||
"""Build response dictionary.
|
||||
|
||||
Args:
|
||||
source: 'cache', 'api', or 'fallback'
|
||||
|
||||
Returns:
|
||||
Response dictionary
|
||||
"""
|
||||
if source == "fallback" or self._cache is None:
|
||||
# Use only hardcoded models
|
||||
merged = sorted(self._hardcoded_models)
|
||||
return {
|
||||
"models": merged,
|
||||
"source": source,
|
||||
"last_updated": None,
|
||||
"hardcoded_count": len(self._hardcoded_models),
|
||||
"remote_count": 0,
|
||||
"merged_count": len(merged),
|
||||
}
|
||||
|
||||
# Merge hardcoded and remote models
|
||||
remote_set = set(self._cache.get("remote_models", []))
|
||||
merged = sorted(self._hardcoded_models | remote_set)
|
||||
|
||||
return {
|
||||
"models": merged,
|
||||
"source": source,
|
||||
"last_updated": self._cache_timestamp.isoformat()
|
||||
if self._cache_timestamp
|
||||
else None,
|
||||
"hardcoded_count": len(self._hardcoded_models),
|
||||
"remote_count": len(remote_set),
|
||||
"merged_count": len(merged),
|
||||
}
|
||||
|
||||
def get_model_categories(self) -> Dict[str, List[str]]:
|
||||
"""Get categorized base models.
|
||||
|
||||
Returns:
|
||||
Dictionary mapping category names to lists of model names
|
||||
"""
|
||||
# Define category patterns
|
||||
categories = {
|
||||
"Stable Diffusion 1.x": ["SD 1.4", "SD 1.5", "SD 1.5 LCM", "SD 1.5 Hyper"],
|
||||
"Stable Diffusion 2.x": ["SD 2.0", "SD 2.1"],
|
||||
"Stable Diffusion 3.x": [
|
||||
"SD 3",
|
||||
"SD 3.5",
|
||||
"SD 3.5 Medium",
|
||||
"SD 3.5 Large",
|
||||
"SD 3.5 Large Turbo",
|
||||
],
|
||||
"SDXL": ["SDXL 1.0", "SDXL Lightning", "SDXL Hyper"],
|
||||
"Flux Models": [
|
||||
"Flux.1 D",
|
||||
"Flux.1 S",
|
||||
"Flux.1 Krea",
|
||||
"Flux.1 Kontext",
|
||||
"Flux.2 D",
|
||||
"Flux.2 Klein 9B",
|
||||
"Flux.2 Klein 9B-base",
|
||||
"Flux.2 Klein 4B",
|
||||
"Flux.2 Klein 4B-base",
|
||||
],
|
||||
"Video Models": [
|
||||
"SVD",
|
||||
"LTXV",
|
||||
"LTXV2",
|
||||
"LTXV 2.3",
|
||||
"CogVideoX",
|
||||
"Mochi",
|
||||
"Hunyuan Video",
|
||||
"Wan Video",
|
||||
"Wan Video 1.3B t2v",
|
||||
"Wan Video 14B t2v",
|
||||
"Wan Video 14B i2v 480p",
|
||||
"Wan Video 14B i2v 720p",
|
||||
"Wan Video 2.2 TI2V-5B",
|
||||
"Wan Video 2.2 T2V-A14B",
|
||||
"Wan Video 2.2 I2V-A14B",
|
||||
"Wan Video 2.5 T2V",
|
||||
"Wan Video 2.5 I2V",
|
||||
],
|
||||
"Other Models": [
|
||||
"Illustrious",
|
||||
"Pony",
|
||||
"Pony V7",
|
||||
"HiDream",
|
||||
"Qwen",
|
||||
"AuraFlow",
|
||||
"Chroma",
|
||||
"ZImageTurbo",
|
||||
"ZImageBase",
|
||||
"PixArt a",
|
||||
"PixArt E",
|
||||
"Hunyuan 1",
|
||||
"Lumina",
|
||||
"Kolors",
|
||||
"NoobAI",
|
||||
"Anima",
|
||||
],
|
||||
}
|
||||
|
||||
return categories
|
||||
|
||||
|
||||
# Convenience function for getting the singleton instance
|
||||
async def get_civitai_base_model_service() -> CivitaiBaseModelService:
|
||||
"""Get the singleton instance of CivitaiBaseModelService."""
|
||||
return await CivitaiBaseModelService.get_instance()
|
||||
@@ -3,6 +3,11 @@ import copy
|
||||
import logging
|
||||
import os
|
||||
from typing import Any, Optional, Dict, Tuple, List, Sequence
|
||||
from .connectivity_guard import (
|
||||
OFFLINE_FRIENDLY_MESSAGE,
|
||||
is_expected_offline_error,
|
||||
is_offline_cooldown_error,
|
||||
)
|
||||
from .model_metadata_provider import (
|
||||
CivitaiModelMetadataProvider,
|
||||
ModelMetadataProviderManager,
|
||||
@@ -39,7 +44,10 @@ class CivitaiClient:
|
||||
return
|
||||
self._initialized = True
|
||||
|
||||
self.base_url = "https://civitai.com/api/v1"
|
||||
self.base_url = "https://civitai.red/api/v1"
|
||||
|
||||
def _build_image_info_url(self, image_id: str) -> str:
|
||||
return f"{self.base_url}/images?imageId={image_id}&nsfw=X"
|
||||
|
||||
async def _make_request(
|
||||
self,
|
||||
@@ -62,6 +70,8 @@ class CivitaiClient:
|
||||
if result.provider is None:
|
||||
result.provider = "civitai_api"
|
||||
raise result
|
||||
if not success and is_offline_cooldown_error(result):
|
||||
return False, OFFLINE_FRIENDLY_MESSAGE
|
||||
return success, result
|
||||
|
||||
@staticmethod
|
||||
@@ -121,6 +131,8 @@ class CivitaiClient:
|
||||
)
|
||||
if not success:
|
||||
message = str(version)
|
||||
if is_expected_offline_error(message):
|
||||
return None, OFFLINE_FRIENDLY_MESSAGE
|
||||
if "not found" in message.lower():
|
||||
return None, "Model not found"
|
||||
|
||||
@@ -161,6 +173,9 @@ class CivitaiClient:
|
||||
return True
|
||||
return False
|
||||
except Exception as e:
|
||||
if is_expected_offline_error(str(e)):
|
||||
logger.debug("Preview download skipped due to offline state.")
|
||||
return False
|
||||
logger.error(f"Download Error: {str(e)}")
|
||||
return False
|
||||
|
||||
@@ -190,7 +205,9 @@ class CivitaiClient:
|
||||
"""Get all versions of a model with local availability info"""
|
||||
try:
|
||||
success, result = await self._make_request(
|
||||
"GET", f"{self.base_url}/models/{model_id}", use_auth=True
|
||||
"GET",
|
||||
f"{self.base_url}/models/{model_id}",
|
||||
use_auth=True,
|
||||
)
|
||||
if success:
|
||||
# Also return model type along with versions
|
||||
@@ -202,6 +219,9 @@ class CivitaiClient:
|
||||
message = self._extract_error_message(result)
|
||||
if message and "not found" in message.lower():
|
||||
raise ResourceNotFoundError(f"Resource not found for model {model_id}")
|
||||
if is_expected_offline_error(message):
|
||||
logger.info("Civitai request skipped: %s", OFFLINE_FRIENDLY_MESSAGE)
|
||||
return None
|
||||
if message:
|
||||
raise RuntimeError(message)
|
||||
return None
|
||||
@@ -346,10 +366,14 @@ class CivitaiClient:
|
||||
|
||||
async def _fetch_model_data(self, model_id: int) -> Optional[Dict]:
|
||||
success, data = await self._make_request(
|
||||
"GET", f"{self.base_url}/models/{model_id}", use_auth=True
|
||||
"GET",
|
||||
f"{self.base_url}/models/{model_id}",
|
||||
use_auth=True,
|
||||
)
|
||||
if success:
|
||||
return data
|
||||
if is_expected_offline_error(data):
|
||||
return None
|
||||
logger.warning(f"Failed to fetch model data for model {model_id}")
|
||||
return None
|
||||
|
||||
@@ -358,10 +382,14 @@ class CivitaiClient:
|
||||
return None
|
||||
|
||||
success, version = await self._make_request(
|
||||
"GET", f"{self.base_url}/model-versions/{version_id}", use_auth=True
|
||||
"GET",
|
||||
f"{self.base_url}/model-versions/{version_id}",
|
||||
use_auth=True,
|
||||
)
|
||||
if success:
|
||||
return version
|
||||
if is_expected_offline_error(version):
|
||||
return None
|
||||
|
||||
logger.warning(f"Failed to fetch version by id {version_id}")
|
||||
return None
|
||||
@@ -371,10 +399,14 @@ class CivitaiClient:
|
||||
return None
|
||||
|
||||
success, version = await self._make_request(
|
||||
"GET", f"{self.base_url}/model-versions/by-hash/{model_hash}", use_auth=True
|
||||
"GET",
|
||||
f"{self.base_url}/model-versions/by-hash/{model_hash}",
|
||||
use_auth=True,
|
||||
)
|
||||
if success:
|
||||
return version
|
||||
if is_expected_offline_error(version):
|
||||
return None
|
||||
|
||||
logger.warning(f"Failed to fetch version by hash {model_hash}")
|
||||
return None
|
||||
@@ -453,17 +485,17 @@ class CivitaiClient:
|
||||
try:
|
||||
url = f"{self.base_url}/model-versions/{version_id}"
|
||||
|
||||
logger.debug(f"Resolving DNS for model version info: {url}")
|
||||
logger.debug("Resolving Civitai model version info: %s", url)
|
||||
success, result = await self._make_request("GET", url, use_auth=True)
|
||||
|
||||
if success:
|
||||
logger.debug(
|
||||
f"Successfully fetched model version info for: {version_id}"
|
||||
)
|
||||
logger.debug("Successfully fetched model version info for: %s", version_id)
|
||||
self._remove_comfy_metadata(result)
|
||||
return result, None
|
||||
|
||||
# Handle specific error cases
|
||||
if is_expected_offline_error(result):
|
||||
return None, OFFLINE_FRIENDLY_MESSAGE
|
||||
if "not found" in str(result):
|
||||
error_msg = f"Model not found"
|
||||
logger.warning(f"Model version not found: {version_id} - {error_msg}")
|
||||
@@ -479,48 +511,60 @@ class CivitaiClient:
|
||||
logger.error(error_msg)
|
||||
return None, error_msg
|
||||
|
||||
async def get_image_info(self, image_id: str) -> Optional[Dict]:
|
||||
async def get_image_info(
|
||||
self, image_id: str, source_url: str | None = None
|
||||
) -> Optional[Dict]:
|
||||
"""Fetch image information from Civitai API
|
||||
|
||||
Args:
|
||||
image_id: The Civitai image ID
|
||||
source_url: Original image page URL. Accepted for caller compatibility;
|
||||
API requests always target ``civitai.red``.
|
||||
|
||||
Returns:
|
||||
Optional[Dict]: The image data or None if not found
|
||||
"""
|
||||
try:
|
||||
url = f"{self.base_url}/images?imageId={image_id}&nsfw=X"
|
||||
requested_id = int(image_id)
|
||||
|
||||
logger.debug(f"Fetching image info for ID: {image_id}")
|
||||
url = self._build_image_info_url(image_id)
|
||||
success, result = await self._make_request("GET", url, use_auth=True)
|
||||
|
||||
if success:
|
||||
if result and "items" in result and isinstance(result["items"], list):
|
||||
items = result["items"]
|
||||
|
||||
# First, try to find the item with matching ID
|
||||
for item in items:
|
||||
if isinstance(item, dict) and item.get("id") == requested_id:
|
||||
logger.debug(f"Successfully fetched image info for ID: {image_id}")
|
||||
return item
|
||||
|
||||
# No matching ID found - log warning with details about returned items
|
||||
returned_ids = [
|
||||
item.get("id") for item in items
|
||||
if isinstance(item, dict) and "id" in item
|
||||
]
|
||||
logger.warning(
|
||||
f"CivitAI API returned no matching image for requested ID {image_id}. "
|
||||
f"Returned {len(items)} item(s) with IDs: {returned_ids}. "
|
||||
f"This may indicate the image was deleted, hidden, or there is a database lag."
|
||||
)
|
||||
if not success:
|
||||
if is_expected_offline_error(result):
|
||||
return None
|
||||
|
||||
logger.warning(f"No image found with ID: {image_id}")
|
||||
logger.error(
|
||||
"Failed to fetch image info for ID %s from civitai.red: %s",
|
||||
image_id,
|
||||
result,
|
||||
)
|
||||
return None
|
||||
|
||||
logger.error(f"Failed to fetch image info for ID: {image_id}: {result}")
|
||||
if result and "items" in result and isinstance(result["items"], list):
|
||||
items = result["items"]
|
||||
|
||||
for item in items:
|
||||
if isinstance(item, dict) and item.get("id") == requested_id:
|
||||
logger.debug(
|
||||
"Successfully fetched image info for ID %s from civitai.red",
|
||||
image_id,
|
||||
)
|
||||
return item
|
||||
|
||||
returned_ids = [
|
||||
item.get("id")
|
||||
for item in items
|
||||
if isinstance(item, dict) and "id" in item
|
||||
]
|
||||
|
||||
logger.warning(
|
||||
"CivitAI API returned no matching image for requested ID %s from civitai.red. Returned %d item(s) with IDs: %s. This may indicate the image was deleted, hidden, or there is a database lag.",
|
||||
image_id,
|
||||
len(items),
|
||||
returned_ids,
|
||||
)
|
||||
return None
|
||||
|
||||
logger.warning("No image found with ID: %s", image_id)
|
||||
return None
|
||||
except RateLimitError:
|
||||
raise
|
||||
@@ -539,10 +583,17 @@ class CivitaiClient:
|
||||
return None
|
||||
|
||||
try:
|
||||
url = f"{self.base_url}/models?username={username}"
|
||||
success, result = await self._make_request("GET", url, use_auth=True)
|
||||
success, result = await self._make_request(
|
||||
"GET",
|
||||
f"{self.base_url}/models",
|
||||
use_auth=True,
|
||||
params={"username": username},
|
||||
)
|
||||
|
||||
if not success:
|
||||
if is_expected_offline_error(result):
|
||||
logger.info("User model fetch skipped: %s", OFFLINE_FRIENDLY_MESSAGE)
|
||||
return None
|
||||
logger.error("Failed to fetch models for %s: %s", username, result)
|
||||
return None
|
||||
|
||||
|
||||
py/services/connectivity_guard.py (new file, 204 lines)
@@ -0,0 +1,204 @@
|
||||
"""In-memory connectivity guard to suppress repeated network retries when offline."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import errno
|
||||
import logging
|
||||
import socket
|
||||
from dataclasses import dataclass
|
||||
from datetime import datetime, timedelta
|
||||
from typing import Any
|
||||
|
||||
import aiohttp
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
OFFLINE_COOLDOWN_ERROR = "offline_cooldown"
|
||||
OFFLINE_FRIENDLY_MESSAGE = "Network offline, will retry automatically later"
|
||||
|
||||
|
||||
def is_offline_cooldown_error(value: Any) -> bool:
|
||||
"""Return True when a response payload represents guard short-circuit."""
|
||||
return isinstance(value, str) and value == OFFLINE_COOLDOWN_ERROR
|
||||
|
||||
|
||||
def is_expected_offline_error(value: Any) -> bool:
|
||||
"""Return True when payload is an expected offline-related result."""
|
||||
if is_offline_cooldown_error(value):
|
||||
return True
|
||||
if not isinstance(value, str):
|
||||
return False
|
||||
normalized = value.lower()
|
||||
return "network offline" in normalized or "offline" in normalized
|
||||
|
||||
|
||||
class ConnectivityGuard:
|
||||
"""Tracks network failures and gates outbound requests during cooldown."""
|
||||
|
||||
_instance: "ConnectivityGuard | None" = None
|
||||
_instance_lock = asyncio.Lock()
|
||||
|
||||
@classmethod
|
||||
async def get_instance(cls) -> "ConnectivityGuard":
|
||||
async with cls._instance_lock:
|
||||
if cls._instance is None:
|
||||
cls._instance = cls()
|
||||
return cls._instance
|
||||
|
||||
def __init__(self) -> None:
|
||||
if hasattr(self, "_initialized"):
|
||||
return
|
||||
self._initialized = True
|
||||
self._default_destination = "__global__"
|
||||
self._destination_states: dict[str, _DestinationState] = {
|
||||
self._default_destination: _DestinationState()
|
||||
}
|
||||
self.base_backoff_seconds = 30
|
||||
self.max_backoff_seconds = 300
|
||||
self.failure_threshold = 3
|
||||
|
||||
@property
|
||||
def online(self) -> bool:
|
||||
return self._state_for_destination(None).online
|
||||
|
||||
@online.setter
|
||||
def online(self, value: bool) -> None:
|
||||
self._state_for_destination(None).online = value
|
||||
|
||||
@property
|
||||
def failure_count(self) -> int:
|
||||
return self._state_for_destination(None).failure_count
|
||||
|
||||
@failure_count.setter
|
||||
def failure_count(self, value: int) -> None:
|
||||
self._state_for_destination(None).failure_count = value
|
||||
|
||||
@property
|
||||
def cooldown_until(self) -> datetime | None:
|
||||
return self._state_for_destination(None).cooldown_until
|
||||
|
||||
@cooldown_until.setter
|
||||
def cooldown_until(self, value: datetime | None) -> None:
|
||||
self._state_for_destination(None).cooldown_until = value
|
||||
|
||||
def _now(self) -> datetime:
|
||||
return datetime.now()
|
||||
|
||||
def _normalize_destination(self, destination: str | None) -> str:
|
||||
if destination is None or not destination.strip():
|
||||
return self._default_destination
|
||||
return destination.lower().strip()
|
||||
|
||||
def _state_for_destination(self, destination: str | None) -> "_DestinationState":
|
||||
destination_key = self._normalize_destination(destination)
|
||||
if destination_key not in self._destination_states:
|
||||
self._destination_states[destination_key] = _DestinationState()
|
||||
return self._destination_states[destination_key]
|
||||
|
||||
def in_cooldown(self, destination: str | None = None) -> bool:
|
||||
state = self._state_for_destination(destination)
|
||||
if state.cooldown_until is None:
|
||||
return False
|
||||
return self._now() < state.cooldown_until
|
||||
|
||||
def cooldown_remaining_seconds(self, destination: str | None = None) -> float:
|
||||
state = self._state_for_destination(destination)
|
||||
if state.cooldown_until is None:
|
||||
return 0.0
|
||||
return max(0.0, (state.cooldown_until - self._now()).total_seconds())
|
||||
|
||||
def should_block_request(self, destination: str | None = None) -> bool:
|
||||
return self.in_cooldown(destination)
|
||||
|
||||
def register_success(self, destination: str | None = None) -> None:
|
||||
destination_key = self._normalize_destination(destination)
|
||||
state = self._state_for_destination(destination_key)
|
||||
was_offline = (not state.online) or state.cooldown_until is not None
|
||||
state.online = True
|
||||
state.failure_count = 0
|
||||
state.cooldown_until = None
|
||||
if was_offline:
|
||||
logger.info(
|
||||
"Connectivity restored for destination '%s'; requests resumed.",
|
||||
destination_key,
|
||||
)
|
||||
|
||||
def register_network_failure(
|
||||
self, exc: Exception, destination: str | None = None
|
||||
) -> None:
|
||||
destination_key = self._normalize_destination(destination)
|
||||
state = self._state_for_destination(destination_key)
|
||||
state.online = False
|
||||
state.failure_count += 1
|
||||
|
||||
if state.failure_count < self.failure_threshold:
|
||||
logger.debug(
|
||||
"Network failure tracked for destination '%s' (%d/%d): %s",
|
||||
destination_key,
|
||||
state.failure_count,
|
||||
self.failure_threshold,
|
||||
exc,
|
||||
)
|
||||
return
|
||||
|
||||
retry_step = state.failure_count - self.failure_threshold
|
||||
backoff = min(
|
||||
self.max_backoff_seconds,
|
||||
self.base_backoff_seconds * (2**retry_step),
|
||||
)
|
||||
should_log_warning = not self.in_cooldown(destination_key)
|
||||
state.cooldown_until = self._now() + timedelta(seconds=backoff)
|
||||
|
||||
if should_log_warning:
|
||||
logger.warning(
|
||||
"Connectivity offline for destination '%s'; enter cooldown for %ss after %d network failures.",
|
||||
destination_key,
|
||||
int(backoff),
|
||||
state.failure_count,
|
||||
)
|
||||
else:
|
||||
logger.debug(
|
||||
"Cooldown still active for destination '%s'; failure_count=%d, backoff=%ss.",
|
||||
destination_key,
|
||||
state.failure_count,
|
||||
int(backoff),
|
||||
)
|
||||
|
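The cooldown schedule produced by register_network_failure above (30s base, doubling for each failure past the threshold of 3, capped at 300s) works out as follows; a small sketch reproducing the same arithmetic:

base, cap, threshold = 30, 300, 3
for failures in range(3, 8):
    backoff = min(cap, base * 2 ** (failures - threshold))
    print(failures, backoff)  # 3 -> 30, 4 -> 60, 5 -> 120, 6 -> 240, 7 -> 300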
||||
@staticmethod
|
||||
def is_network_unreachable_error(exc: Exception) -> bool:
|
||||
"""Return whether the exception should count as connectivity failure."""
|
||||
if isinstance(exc, asyncio.CancelledError):
|
||||
return False
|
||||
|
||||
if isinstance(
|
||||
exc,
|
||||
(
|
||||
asyncio.TimeoutError,
|
||||
TimeoutError,
|
||||
ConnectionRefusedError,
|
||||
socket.gaierror,
|
||||
aiohttp.ServerTimeoutError,
|
||||
aiohttp.ConnectionTimeoutError,
|
||||
aiohttp.ClientConnectorError,
|
||||
aiohttp.ClientConnectionError,
|
||||
),
|
||||
):
|
||||
return True
|
||||
|
||||
if isinstance(exc, OSError) and exc.errno in {
|
||||
errno.ENETUNREACH,
|
||||
errno.EHOSTUNREACH,
|
||||
errno.ETIMEDOUT,
|
||||
errno.ECONNREFUSED,
|
||||
}:
|
||||
return True
|
||||
|
||||
return False
|
||||
|
||||
|
||||
@dataclass
|
||||
class _DestinationState:
|
||||
online: bool = True
|
||||
failure_count: int = 0
|
||||
cooldown_until: datetime | None = None
|
||||
@@ -7,11 +7,13 @@ with category filtering and enriched results including post counts.
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
import re
|
||||
from typing import List, Dict, Any, Optional
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
_EMBEDDED_COMMAND_PATTERN = re.compile(r"\s/\w")
|
||||
class CustomWordsService:
|
||||
"""Service for autocomplete via TagFTSIndex.
|
||||
|
||||
@@ -77,12 +79,28 @@ class CustomWordsService:
|
||||
Returns:
|
||||
List of dicts with tag_name, category, and post_count.
|
||||
"""
|
||||
normalized_search = search_term.strip()
|
||||
if not normalized_search:
|
||||
return []
|
||||
|
||||
# Prompt widgets should only send the active token, but guard against
|
||||
# accidental full-prompt queries reaching the FTS path.
|
||||
if (
|
||||
"__" in normalized_search
|
||||
or "," in normalized_search
|
||||
or ">" in normalized_search
|
||||
or "\n" in normalized_search
|
||||
or "\r" in normalized_search
|
||||
or _EMBEDDED_COMMAND_PATTERN.search(normalized_search)
|
||||
):
|
||||
logger.debug("Skipping prompt-like custom words query: %s", normalized_search)
|
||||
return []
|
||||
|
||||
tag_index = self._get_tag_index()
|
||||
if tag_index is not None:
|
||||
results = tag_index.search(
|
||||
search_term, categories=categories, limit=limit, offset=offset
|
||||
return tag_index.search(
|
||||
normalized_search, categories=categories, limit=limit, offset=offset
|
||||
)
|
||||
return results
|
||||
|
||||
logger.debug("TagFTSIndex not available, returning empty results")
|
||||
return []
|
||||
|
||||
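The guard above can be restated as a standalone predicate to show which search terms are short-circuited; this is a sketch mirroring the conditions, not the service's actual method:

import re

_EMBEDDED_COMMAND_PATTERN = re.compile(r"\s/\w")

def looks_like_full_prompt(term: str) -> bool:
    # Mirrors the guard: wildcard/comma/weight/newline syntax or an embedded " /command"
    return (
        "__" in term
        or "," in term
        or ">" in term
        or "\n" in term
        or "\r" in term
        or bool(_EMBEDDED_COMMAND_PATTERN.search(term))
    )

print(looks_like_full_prompt("blue_ey"))                # False -> token goes to the FTS index
print(looks_like_full_prompt("1girl, solo, smile"))     # True  -> skipped
print(looks_like_full_prompt("portrait <lora:x:0.8>"))  # True  -> skipped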
File diff suppressed because it is too large.

py/services/downloaded_version_history_service.py (new file, 320 lines)
@@ -0,0 +1,320 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import logging
|
||||
import os
|
||||
import sqlite3
|
||||
import time
|
||||
from typing import Iterable, Mapping, Optional, Sequence
|
||||
|
||||
from ..utils.cache_paths import get_cache_base_dir
|
||||
from .settings_manager import get_settings_manager
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def _normalize_model_type(model_type: str | None) -> Optional[str]:
|
||||
if not isinstance(model_type, str):
|
||||
return None
|
||||
normalized = model_type.strip().lower()
|
||||
if normalized in {"lora", "locon", "dora"}:
|
||||
return "lora"
|
||||
if normalized == "checkpoint":
|
||||
return "checkpoint"
|
||||
if normalized in {"embedding", "textualinversion"}:
|
||||
return "embedding"
|
||||
return None
|
||||
|
||||
|
||||
def _normalize_int(value) -> Optional[int]:
|
||||
try:
|
||||
if value is None:
|
||||
return None
|
||||
return int(value)
|
||||
except (TypeError, ValueError):
|
||||
return None
|
||||
|
||||
|
||||
def _resolve_database_path() -> str:
|
||||
base_dir = get_cache_base_dir(create=True)
|
||||
history_dir = os.path.join(base_dir, "download_history")
|
||||
os.makedirs(history_dir, exist_ok=True)
|
||||
return os.path.join(history_dir, "downloaded_versions.sqlite")
|
||||
|
||||
|
||||
class DownloadedVersionHistoryService:
|
||||
_SCHEMA = """
|
||||
CREATE TABLE IF NOT EXISTS downloaded_model_versions (
|
||||
model_type TEXT NOT NULL,
|
||||
version_id INTEGER NOT NULL,
|
||||
model_id INTEGER,
|
||||
first_seen_at REAL NOT NULL,
|
||||
last_seen_at REAL NOT NULL,
|
||||
source TEXT NOT NULL,
|
||||
last_file_path TEXT,
|
||||
last_library_name TEXT,
|
||||
is_deleted_override INTEGER NOT NULL DEFAULT 0,
|
||||
PRIMARY KEY (model_type, version_id)
|
||||
);
|
||||
CREATE INDEX IF NOT EXISTS idx_downloaded_model_versions_model
|
||||
ON downloaded_model_versions(model_type, model_id);
|
||||
"""
|
||||
|
||||
def __init__(self, db_path: str | None = None, *, settings_manager=None) -> None:
|
||||
self._db_path = db_path or _resolve_database_path()
|
||||
self._settings = settings_manager or get_settings_manager()
|
||||
self._lock = asyncio.Lock()
|
||||
self._conn: sqlite3.Connection | None = None
|
||||
self._schema_initialized = False
|
||||
self._ensure_directory()
|
||||
self._initialize_schema()
|
||||
|
||||
def _ensure_directory(self) -> None:
|
||||
directory = os.path.dirname(self._db_path)
|
||||
if directory:
|
||||
os.makedirs(directory, exist_ok=True)
|
||||
|
||||
def _connect(self) -> sqlite3.Connection:
|
||||
conn = sqlite3.connect(self._db_path, check_same_thread=False)
|
||||
conn.row_factory = sqlite3.Row
|
||||
return conn
|
||||
|
||||
def _get_conn(self) -> sqlite3.Connection:
|
||||
if self._conn is None:
|
||||
self._conn = sqlite3.connect(self._db_path, check_same_thread=False)
|
||||
self._conn.row_factory = sqlite3.Row
|
||||
return self._conn
|
||||
|
||||
def _initialize_schema(self) -> None:
|
||||
if self._schema_initialized:
|
||||
return
|
||||
with self._connect() as conn:
|
||||
conn.executescript(self._SCHEMA)
|
||||
conn.commit()
|
||||
self._schema_initialized = True
|
||||
|
||||
def get_database_path(self) -> str:
|
||||
return self._db_path
|
||||
|
||||
def _get_active_library_name(self) -> str | None:
|
||||
try:
|
||||
value = self._settings.get_active_library_name()
|
||||
except Exception:
|
||||
return None
|
||||
return value or None
|
||||
|
||||
async def mark_downloaded(
|
||||
self,
|
||||
model_type: str,
|
||||
version_id: int,
|
||||
*,
|
||||
model_id: int | None = None,
|
||||
source: str = "manual",
|
||||
file_path: str | None = None,
|
||||
library_name: str | None = None,
|
||||
) -> None:
|
||||
normalized_type = _normalize_model_type(model_type)
|
||||
normalized_version_id = _normalize_int(version_id)
|
||||
normalized_model_id = _normalize_int(model_id)
|
||||
if normalized_type is None or normalized_version_id is None:
|
||||
return
|
||||
|
||||
active_library_name = library_name or self._get_active_library_name()
|
||||
timestamp = time.time()
|
||||
|
||||
async with self._lock:
|
||||
conn = self._get_conn()
|
||||
conn.execute(
|
||||
"""
|
||||
INSERT INTO downloaded_model_versions (
|
||||
model_type, version_id, model_id, first_seen_at, last_seen_at,
|
||||
source, last_file_path, last_library_name, is_deleted_override
|
||||
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, 0)
|
||||
ON CONFLICT(model_type, version_id) DO UPDATE SET
|
||||
model_id = COALESCE(excluded.model_id, downloaded_model_versions.model_id),
|
||||
last_seen_at = excluded.last_seen_at,
|
||||
source = excluded.source,
|
||||
last_file_path = COALESCE(excluded.last_file_path, downloaded_model_versions.last_file_path),
|
||||
last_library_name = COALESCE(excluded.last_library_name, downloaded_model_versions.last_library_name),
|
||||
is_deleted_override = 0
|
||||
""",
|
||||
(
|
||||
normalized_type,
|
||||
normalized_version_id,
|
||||
normalized_model_id,
|
||||
timestamp,
|
||||
timestamp,
|
||||
source,
|
||||
file_path,
|
||||
active_library_name,
|
||||
),
|
||||
)
|
||||
conn.commit()
|
||||
|
||||
async def mark_downloaded_bulk(
|
||||
self,
|
||||
model_type: str,
|
||||
records: Sequence[Mapping[str, object]],
|
||||
*,
|
||||
source: str = "scan",
|
||||
library_name: str | None = None,
|
||||
) -> None:
|
||||
normalized_type = _normalize_model_type(model_type)
|
||||
if normalized_type is None or not records:
|
||||
return
|
||||
|
||||
timestamp = time.time()
|
||||
active_library_name = library_name or self._get_active_library_name()
|
||||
payload: list[tuple[object, ...]] = []
|
||||
for record in records:
|
||||
version_id = _normalize_int(record.get("version_id"))
|
||||
if version_id is None:
|
||||
continue
|
||||
payload.append(
|
||||
(
|
||||
normalized_type,
|
||||
version_id,
|
||||
_normalize_int(record.get("model_id")),
|
||||
timestamp,
|
||||
timestamp,
|
||||
source,
|
||||
record.get("file_path"),
|
||||
active_library_name,
|
||||
)
|
||||
)
|
||||
|
||||
if not payload:
|
||||
return
|
||||
|
||||
async with self._lock:
|
||||
conn = self._get_conn()
|
||||
conn.executemany(
|
||||
"""
|
||||
INSERT INTO downloaded_model_versions (
|
||||
model_type, version_id, model_id, first_seen_at, last_seen_at,
|
||||
source, last_file_path, last_library_name, is_deleted_override
|
||||
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, 0)
|
||||
ON CONFLICT(model_type, version_id) DO UPDATE SET
|
||||
model_id = COALESCE(excluded.model_id, downloaded_model_versions.model_id),
|
||||
last_seen_at = excluded.last_seen_at,
|
||||
source = excluded.source,
|
||||
last_file_path = COALESCE(excluded.last_file_path, downloaded_model_versions.last_file_path),
|
||||
last_library_name = COALESCE(excluded.last_library_name, downloaded_model_versions.last_library_name),
|
||||
is_deleted_override = 0
|
||||
""",
|
||||
payload,
|
||||
)
|
||||
conn.commit()
|
||||
|
||||
async def mark_not_downloaded(self, model_type: str, version_id: int) -> None:
|
||||
normalized_type = _normalize_model_type(model_type)
|
||||
normalized_version_id = _normalize_int(version_id)
|
||||
if normalized_type is None or normalized_version_id is None:
|
||||
return
|
||||
|
||||
timestamp = time.time()
|
||||
|
||||
async with self._lock:
|
||||
conn = self._get_conn()
|
||||
conn.execute(
|
||||
"""
|
||||
INSERT INTO downloaded_model_versions (
|
||||
model_type, version_id, model_id, first_seen_at, last_seen_at,
|
||||
source, last_file_path, last_library_name, is_deleted_override
|
||||
) VALUES (?, ?, NULL, ?, ?, 'manual', NULL, ?, 1)
|
||||
ON CONFLICT(model_type, version_id) DO UPDATE SET
|
||||
last_seen_at = excluded.last_seen_at,
|
||||
source = excluded.source,
|
||||
last_library_name = COALESCE(excluded.last_library_name, downloaded_model_versions.last_library_name),
|
||||
is_deleted_override = 1
|
||||
""",
|
||||
(
|
||||
normalized_type,
|
||||
normalized_version_id,
|
||||
timestamp,
|
||||
timestamp,
|
||||
self._get_active_library_name(),
|
||||
),
|
||||
)
|
||||
conn.commit()
|
||||
|
||||
async def has_been_downloaded(self, model_type: str, version_id: int) -> bool:
|
||||
normalized_type = _normalize_model_type(model_type)
|
||||
normalized_version_id = _normalize_int(version_id)
|
||||
if normalized_type is None or normalized_version_id is None:
|
||||
return False
|
||||
|
||||
async with self._lock:
|
||||
conn = self._get_conn()
|
||||
row = conn.execute(
|
||||
"""
|
||||
SELECT is_deleted_override
|
||||
FROM downloaded_model_versions
|
||||
WHERE model_type = ? AND version_id = ?
|
||||
""",
|
||||
(normalized_type, normalized_version_id),
|
||||
).fetchone()
|
||||
return bool(row) and not bool(row["is_deleted_override"])
|
||||
|
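The soft-delete semantics of is_deleted_override can be seen in a short lifecycle sketch; the database path and the stub settings object below are illustrative stand-ins for the real settings manager:

class _StubSettings:
    def get_active_library_name(self):
        return "default"

async def history_demo():
    svc = DownloadedVersionHistoryService("/tmp/versions.sqlite", settings_manager=_StubSettings())
    await svc.mark_downloaded("lora", 12345, model_id=678, source="manual")
    assert await svc.has_been_downloaded("lora", 12345)
    await svc.mark_not_downloaded("lora", 12345)   # sets is_deleted_override = 1
    assert not await svc.has_been_downloaded("lora", 12345)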
||||
async def get_downloaded_version_ids(
|
||||
self, model_type: str, model_id: int
|
||||
) -> list[int]:
|
||||
normalized_type = _normalize_model_type(model_type)
|
||||
normalized_model_id = _normalize_int(model_id)
|
||||
if normalized_type is None or normalized_model_id is None:
|
||||
return []
|
||||
|
||||
async with self._lock:
|
||||
conn = self._get_conn()
|
||||
rows = conn.execute(
|
||||
"""
|
||||
SELECT version_id
|
||||
FROM downloaded_model_versions
|
||||
WHERE model_type = ? AND model_id = ? AND is_deleted_override = 0
|
||||
ORDER BY version_id ASC
|
||||
""",
|
||||
(normalized_type, normalized_model_id),
|
||||
).fetchall()
|
||||
return [int(row["version_id"]) for row in rows]
|
||||
|
||||
async def get_downloaded_version_ids_bulk(
|
||||
self, model_type: str, model_ids: Iterable[int]
|
||||
) -> dict[int, set[int]]:
|
||||
normalized_type = _normalize_model_type(model_type)
|
||||
if normalized_type is None:
|
||||
return {}
|
||||
|
||||
normalized_model_ids = sorted(
|
||||
{
|
||||
value
|
||||
for value in (_normalize_int(model_id) for model_id in model_ids)
|
||||
if value is not None
|
||||
}
|
||||
)
|
||||
if not normalized_model_ids:
|
||||
return {}
|
||||
|
||||
placeholders = ", ".join(["?"] * len(normalized_model_ids))
|
||||
params: list[object] = [normalized_type, *normalized_model_ids]
|
||||
|
||||
async with self._lock:
|
||||
conn = self._get_conn()
|
||||
rows = conn.execute(
|
||||
f"""
|
||||
SELECT model_id, version_id
|
||||
FROM downloaded_model_versions
|
||||
WHERE model_type = ?
|
||||
AND model_id IN ({placeholders})
|
||||
AND is_deleted_override = 0
|
||||
""",
|
||||
params,
|
||||
).fetchall()
|
||||
|
||||
result: dict[int, set[int]] = {}
|
||||
for row in rows:
|
||||
model_id = _normalize_int(row["model_id"])
|
||||
version_id = _normalize_int(row["version_id"])
|
||||
if model_id is None or version_id is None:
|
||||
continue
|
||||
result.setdefault(model_id, set()).add(version_id)
|
||||
return result
|
||||
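Editor's note: the history table keeps one row per (model_type, version_id) and clears the manual-delete override whenever a file is seen again on disk. A minimal standalone sketch of the same ON CONFLICT upsert pattern; the in-memory database, sample path, and values are illustrative only, not the project's data.

```python
import sqlite3
import time

# Illustrative only: same table shape and upsert pattern as the snippet above.
conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE downloaded_model_versions (
        model_type TEXT NOT NULL,
        version_id INTEGER NOT NULL,
        model_id INTEGER,
        first_seen_at REAL,
        last_seen_at REAL,
        source TEXT,
        last_file_path TEXT,
        last_library_name TEXT,
        is_deleted_override INTEGER NOT NULL DEFAULT 0,
        PRIMARY KEY (model_type, version_id)
    )
    """
)

now = time.time()
row = ("lora", 12345, 678, now, now, "scan", "/models/loras/example.safetensors", "default")

# First execution inserts; re-running only refreshes last_seen_at/source and
# resets is_deleted_override, so a re-scanned file is "downloaded" again.
conn.execute(
    """
    INSERT INTO downloaded_model_versions (
        model_type, version_id, model_id, first_seen_at, last_seen_at,
        source, last_file_path, last_library_name, is_deleted_override
    ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, 0)
    ON CONFLICT(model_type, version_id) DO UPDATE SET
        last_seen_at = excluded.last_seen_at,
        source = excluded.source,
        last_file_path = COALESCE(excluded.last_file_path, downloaded_model_versions.last_file_path),
        is_deleted_override = 0
    """,
    row,
)
conn.commit()
print(conn.execute("SELECT version_id, source FROM downloaded_model_versions").fetchall())
```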
@@ -18,8 +18,14 @@ from collections import deque
from dataclasses import dataclass
from datetime import datetime, timedelta
from email.utils import parsedate_to_datetime
from urllib.parse import urlparse
from typing import Optional, Dict, Tuple, Callable, Union, Awaitable
from ..services.settings_manager import get_settings_manager
from .connectivity_guard import (
    OFFLINE_COOLDOWN_ERROR,
    OFFLINE_FRIENDLY_MESSAGE,
    ConnectivityGuard,
)
from .errors import RateLimitError

logger = logging.getLogger(__name__)
@@ -138,7 +144,7 @@ class Downloader:
        self.chunk_size = (
            16 * 1024 * 1024
        )  # 16MB chunks to balance I/O reduction and memory usage
        self.max_retries = 5
        self.max_retries = self._resolve_max_retries()
        self.base_delay = 2.0  # Base delay for exponential backoff
        self.session_timeout = 300  # 5 minutes
        self.stall_timeout = self._resolve_stall_timeout()
@@ -192,6 +198,18 @@ class Downloader:

        return max(30.0, timeout_value)

    def _resolve_max_retries(self) -> int:
        """Determine max retry count from environment while preserving defaults."""
        default_retries = 5
        raw_value = os.environ.get("COMFYUI_DOWNLOAD_MAX_RETRIES")

        try:
            retries = int(raw_value)
        except (TypeError, ValueError):
            retries = default_retries

        return max(0, retries)

    def _should_refresh_session(self) -> bool:
        """Check if session should be refreshed"""
        if self._session is None:
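Editor's note: `_resolve_max_retries` reads `COMFYUI_DOWNLOAD_MAX_RETRIES` and silently falls back to the default when the variable is unset or malformed, clamping negatives to zero. A hedged standalone sketch of the same parsing rule; only the default of 5 and the variable name come from the hunk above.

```python
import os

def resolve_max_retries(default_retries: int = 5) -> int:
    """Mirror of the rule above: unset or non-integer values fall back to the default."""
    raw_value = os.environ.get("COMFYUI_DOWNLOAD_MAX_RETRIES")
    try:
        retries = int(raw_value)
    except (TypeError, ValueError):  # None (unset) raises TypeError, junk raises ValueError
        retries = default_retries
    return max(0, retries)

os.environ.pop("COMFYUI_DOWNLOAD_MAX_RETRIES", None)
print(resolve_max_retries())                          # 5 (unset)
os.environ["COMFYUI_DOWNLOAD_MAX_RETRIES"] = "8"
print(resolve_max_retries())                          # 8
os.environ["COMFYUI_DOWNLOAD_MAX_RETRIES"] = "-3"
print(resolve_max_retries())                          # 0 (clamped)
```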
@@ -334,6 +352,7 @@ class Downloader:
            logger.info(f"Resuming download from offset {resume_offset} bytes")

        total_size = 0
        range_redirect_retry_urls: set[str] = set()

        while retry_count <= self.max_retries:
            try:
@@ -372,6 +391,23 @@ class Downloader:
                    if response.status == 200:
                        # Full content response
                        if resume_offset > 0:
                            redirected_url = str(response.url)
                            if (
                                allow_resume
                                and response.history
                                and redirected_url
                                and redirected_url != url
                                and redirected_url not in range_redirect_retry_urls
                            ):
                                range_redirect_retry_urls.add(redirected_url)
                                logger.info(
                                    "Range request was not honored after redirect; retrying final URL directly: %s",
                                    redirected_url,
                                )
                                url = redirected_url
                                response.release()
                                continue

                            # Server doesn't support ranges, restart from beginning
                            logger.warning(
                                "Server doesn't support range requests, restarting download"
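Editor's note: this hunk handles CDNs that drop the `Range` header while following a redirect — a full 200 response with a non-zero resume offset is retried once directly against the final URL before the resume is abandoned. A rough sketch of that decision as a pure predicate; the function name and sample URLs are illustrative.

```python
def should_retry_range_on_final_url(
    status: int,
    resume_offset: int,
    allow_resume: bool,
    had_redirects: bool,
    original_url: str,
    final_url: str,
    already_retried: set[str],
) -> bool:
    """True when a ranged request came back as a plain 200 after a redirect and
    the final URL has not been retried directly yet (mirrors the hunk above)."""
    return (
        status == 200
        and resume_offset > 0
        and allow_resume
        and had_redirects
        and bool(final_url)
        and final_url != original_url
        and final_url not in already_retried
    )

retried: set[str] = set()
# Hypothetical URLs, purely for illustration.
print(should_retry_range_on_final_url(
    200, 1024, True, True,
    "https://example.com/file", "https://cdn.example.com/file", retried,
))  # True -> retry the final URL once, then mark it as attempted
```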
@@ -571,37 +607,53 @@ class Downloader:
            expected_size = total_size if total_size > 0 else None

            integrity_error: Optional[str] = None
            resumable_incomplete = False
            if final_size <= 0:
                integrity_error = "Downloaded file is empty"
            elif expected_size is not None and final_size != expected_size:
                integrity_error = f"File size mismatch. Expected: {expected_size}, Got: {final_size}"
                resumable_incomplete = (
                    allow_resume
                    and part_path != save_path
                    and final_size > 0
                    and final_size < expected_size
                )

            if integrity_error is not None:
                logger.error(
                log_fn = logger.warning if resumable_incomplete else logger.error
                log_fn(
                    "Download integrity check failed for %s: %s",
                    save_path,
                    integrity_error,
                )

                # Remove the corrupted payload so future attempts start fresh
                if os.path.exists(part_path):
                    try:
                        os.remove(part_path)
                    except OSError as remove_error:
                        logger.warning(
                            "Failed to delete corrupted download %s: %s",
                            part_path,
                            remove_error,
                        )
                if part_path != save_path and os.path.exists(save_path):
                    try:
                        os.remove(save_path)
                    except OSError as remove_error:
                        logger.warning(
                            "Failed to delete target file %s after integrity error: %s",
                            save_path,
                            remove_error,
                        )
                if resumable_incomplete:
                    logger.info(
                        "Preserving incomplete download for resume: %s (%s/%s bytes)",
                        part_path,
                        final_size,
                        expected_size,
                    )
                else:
                    # Remove corrupted payloads that cannot be safely resumed.
                    if os.path.exists(part_path):
                        try:
                            os.remove(part_path)
                        except OSError as remove_error:
                            logger.warning(
                                "Failed to delete corrupted download %s: %s",
                                part_path,
                                remove_error,
                            )
                    if part_path != save_path and os.path.exists(save_path):
                        try:
                            os.remove(save_path)
                        except OSError as remove_error:
                            logger.warning(
                                "Failed to delete target file %s after integrity error: %s",
                                save_path,
                                remove_error,
                            )

                retry_count += 1
                if retry_count <= self.max_retries:
@@ -611,8 +663,16 @@ class Downloader:
                        delay,
                    )
                    await asyncio.sleep(delay)
                    resume_offset = 0
                    total_size = 0
                    if resumable_incomplete and os.path.exists(part_path):
                        resume_offset = os.path.getsize(part_path)
                        total_size = expected_size or 0
                        logger.info(
                            "Will resume incomplete download from byte %s",
                            resume_offset,
                        )
                    else:
                        resume_offset = 0
                        total_size = 0
                    await self._create_session()
                    continue

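Editor's note: the integrity block above distinguishes three outcomes — an empty or overshooting file that must be discarded, a short `.part` file that can be kept and resumed, and a clean download. A small pure-function sketch of that classification; the function name and return labels are illustrative.

```python
from typing import Optional

def classify_download(
    final_size: int,
    expected_size: Optional[int],
    allow_resume: bool,
    uses_part_file: bool,
) -> str:
    """Mirror the decision above: 'ok', 'resume' (keep the partial file), or 'restart'."""
    if final_size <= 0:
        return "restart"  # an empty payload is never kept
    if expected_size is None or final_size == expected_size:
        return "ok"
    if allow_resume and uses_part_file and final_size < expected_size:
        return "resume"  # short but resumable: keep the .part file and continue later
    return "restart"  # a size mismatch that cannot be resumed safely

print(classify_download(0, 100, True, True))    # restart
print(classify_download(40, 100, True, True))   # resume
print(classify_download(100, 100, True, True))  # ok
print(classify_download(140, 100, True, True))  # restart (overshoot is not resumable)
```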
@@ -743,6 +803,11 @@ class Downloader:
        Returns:
            Tuple[bool, Union[bytes, str], Optional[Dict]]: (success, content or error message, response headers if requested)
        """
        guard = await ConnectivityGuard.get_instance()
        destination = self._guard_destination(url)
        if guard.should_block_request(destination):
            return False, OFFLINE_FRIENDLY_MESSAGE, None

        try:
            session = await self.session
            # Debug log for proxy mode at request time
@@ -765,6 +830,7 @@ class Downloader:
            ) as response:
                if response.status == 200:
                    content = await response.read()
                    guard.register_success(destination)
                    if return_headers:
                        return True, content, dict(response.headers)
                    else:
@@ -783,6 +849,12 @@ class Downloader:
                    return False, error_msg, None

        except Exception as e:
            if guard.is_network_unreachable_error(e):
                guard.register_network_failure(e, destination)
                if guard.should_block_request(destination):
                    return False, OFFLINE_FRIENDLY_MESSAGE, None
                logger.debug("Network unavailable during memory download: %s", e)
                return False, str(e), None
            logger.error(f"Error downloading to memory from {url}: {e}")
            return False, str(e), None

@@ -803,6 +875,11 @@ class Downloader:
        Returns:
            Tuple[bool, Union[Dict, str]]: (success, headers dict or error message)
        """
        guard = await ConnectivityGuard.get_instance()
        destination = self._guard_destination(url)
        if guard.should_block_request(destination):
            return False, OFFLINE_COOLDOWN_ERROR

        try:
            session = await self.session
            # Debug log for proxy mode at request time
@@ -824,11 +901,18 @@ class Downloader:
                url, headers=headers, proxy=self.proxy_url
            ) as response:
                if response.status == 200:
                    guard.register_success(destination)
                    return True, dict(response.headers)
                else:
                    return False, f"Head request failed with status {response.status}"

        except Exception as e:
            if guard.is_network_unreachable_error(e):
                guard.register_network_failure(e, destination)
                if guard.should_block_request(destination):
                    return False, OFFLINE_COOLDOWN_ERROR
                logger.debug("Network unavailable during header probe: %s", e)
                return False, str(e)
            logger.error(f"Error getting headers from {url}: {e}")
            return False, str(e)

@@ -853,6 +937,11 @@ class Downloader:
        Returns:
            Tuple[bool, Union[Dict, str]]: (success, response data or error message)
        """
        guard = await ConnectivityGuard.get_instance()
        destination = self._guard_destination(url)
        if guard.should_block_request(destination):
            return False, OFFLINE_COOLDOWN_ERROR

        try:
            session = await self.session
            # Debug log for proxy mode at request time
@@ -876,6 +965,7 @@ class Downloader:
                method, url, headers=headers, **kwargs
            ) as response:
                if response.status == 200:
                    guard.register_success(destination)
                    # Try to parse as JSON, fall back to text
                    try:
                        data = await response.json()
@@ -906,6 +996,12 @@ class Downloader:
                    return False, f"Request failed with status {response.status}"

        except Exception as e:
            if guard.is_network_unreachable_error(e):
                guard.register_network_failure(e, destination)
                if guard.should_block_request(destination):
                    return False, OFFLINE_COOLDOWN_ERROR
                logger.debug("Network unavailable for %s %s: %s", method, url, e)
                return False, str(e)
            logger.error(f"Error making {method} request to {url}: {e}")
            return False, str(e)

@@ -956,6 +1052,14 @@ class Downloader:
        delta = retry_datetime - datetime.now(tz=retry_datetime.tzinfo)
        return max(0.0, delta.total_seconds())

    @staticmethod
    def _guard_destination(url: str) -> str:
        """Build per-destination connectivity guard scope from request URL."""
        parsed_url = urlparse(url)
        if parsed_url.hostname:
            return parsed_url.hostname.lower()
        return "unknown"


# Global instance accessor
async def get_downloader() -> Downloader:
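Editor's note: `_guard_destination` scopes the connectivity guard per hostname, so an outage at one host does not put every other API into cooldown. A minimal sketch of a cooldown keyed the same way; the class, failure threshold, and window below are made up for illustration and are not the project's ConnectivityGuard.

```python
import time
from urllib.parse import urlparse

def guard_destination(url: str) -> str:
    """Same scoping rule as above: lower-cased hostname, or 'unknown'."""
    hostname = urlparse(url).hostname
    return hostname.lower() if hostname else "unknown"

class TinyCooldownGuard:
    """Illustrative per-destination cooldown; threshold and window are assumptions."""

    def __init__(self, failures_before_block: int = 3, cooldown_seconds: float = 60.0):
        self._failures: dict[str, int] = {}
        self._blocked_until: dict[str, float] = {}
        self._threshold = failures_before_block
        self._cooldown = cooldown_seconds

    def register_failure(self, destination: str) -> None:
        count = self._failures.get(destination, 0) + 1
        self._failures[destination] = count
        if count >= self._threshold:
            self._blocked_until[destination] = time.time() + self._cooldown

    def register_success(self, destination: str) -> None:
        self._failures.pop(destination, None)
        self._blocked_until.pop(destination, None)

    def should_block_request(self, destination: str) -> bool:
        return time.time() < self._blocked_until.get(destination, 0.0)

guard = TinyCooldownGuard()
dest = guard_destination("https://civitai.com/api/v1/models/1")
print(dest, guard.should_block_request(dest))  # civitai.com False
```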
@@ -42,6 +42,7 @@ class EmbeddingService(BaseModelService):
            "notes": embedding_data.get("notes", ""),
            "sub_type": sub_type,
            "favorite": embedding_data.get("favorite", False),
            "exclude": bool(embedding_data.get("exclude", False)),
            "update_available": bool(embedding_data.get("update_available", False)),
            "skip_metadata_refresh": bool(embedding_data.get("skip_metadata_refresh", False)),
            "civitai": self.filter_civitai_data(embedding_data.get("civitai", {}), minimal=True)

@@ -1,5 +1,6 @@
import os
import logging
import json
import os
from typing import Dict, List, Optional

from .base_model_service import BaseModelService
@@ -47,6 +48,7 @@ class LoraService(BaseModelService):
            "usage_tips": lora_data.get("usage_tips", ""),
            "notes": lora_data.get("notes", ""),
            "favorite": lora_data.get("favorite", False),
            "exclude": bool(lora_data.get("exclude", False)),
            "update_available": bool(lora_data.get("update_available", False)),
            "skip_metadata_refresh": bool(
                lora_data.get("skip_metadata_refresh", False)
@@ -278,6 +280,42 @@ class LoraService(BaseModelService):

        return None

    @staticmethod
    def get_recommended_strength_from_lora_data(lora_data: Dict) -> Optional[float]:
        """Parse usage_tips JSON and extract recommended model strength."""
        try:
            usage_tips = lora_data.get("usage_tips", "")
            if not usage_tips:
                return None
            tips_data = json.loads(usage_tips)
            return tips_data.get("strength")
        except (json.JSONDecodeError, TypeError, AttributeError):
            return None

    @staticmethod
    def get_recommended_clip_strength_from_lora_data(
        lora_data: Dict,
    ) -> Optional[float]:
        """Parse usage_tips JSON and extract recommended clip strength."""
        try:
            usage_tips = lora_data.get("usage_tips", "")
            if not usage_tips:
                return None
            tips_data = json.loads(usage_tips)
            return tips_data.get("clipStrength")
        except (json.JSONDecodeError, TypeError, AttributeError):
            return None

    async def get_lora_metadata_by_filename(self, filename: str) -> Optional[Dict]:
        """Return cached raw metadata for a LoRA matching the given filename."""
        cache = await self.scanner.get_cached_data(force_refresh=False)

        for lora in cache.raw_data if cache else []:
            if lora.get("file_name") == filename:
                return lora

        return None

    def find_duplicate_hashes(self) -> Dict:
        """Find LoRAs with duplicate SHA256 hashes"""
        return self.scanner._hash_index.get_duplicate_hashes()
@@ -328,34 +366,10 @@ class LoraService(BaseModelService):
            List of LoRA dicts with randomized strengths
        """
        import random
        import json

        # Use a local Random instance to avoid affecting global random state
        # This ensures each execution with a different seed produces different results
        rng = random.Random(seed)

        def get_recommended_strength(lora_data: Dict) -> Optional[float]:
            """Parse usage_tips JSON and extract recommended strength"""
            try:
                usage_tips = lora_data.get("usage_tips", "")
                if not usage_tips:
                    return None
                tips_data = json.loads(usage_tips)
                return tips_data.get("strength")
            except (json.JSONDecodeError, TypeError, AttributeError):
                return None

        def get_recommended_clip_strength(lora_data: Dict) -> Optional[float]:
            """Parse usage_tips JSON and extract recommended clip strength"""
            try:
                usage_tips = lora_data.get("usage_tips", "")
                if not usage_tips:
                    return None
                tips_data = json.loads(usage_tips)
                return tips_data.get("clipStrength")
            except (json.JSONDecodeError, TypeError, AttributeError):
                return None

        if locked_loras is None:
            locked_loras = []

@@ -403,7 +417,9 @@ class LoraService(BaseModelService):
        result_loras = []
        for lora in selected:
            if use_recommended_strength:
                recommended_strength = get_recommended_strength(lora)
                recommended_strength = self.get_recommended_strength_from_lora_data(
                    lora
                )
                if recommended_strength is not None:
                    scale = rng.uniform(
                        recommended_strength_scale_min, recommended_strength_scale_max
@@ -421,7 +437,9 @@ class LoraService(BaseModelService):
            if use_same_clip_strength:
                clip_str = model_str
            elif use_recommended_strength:
                recommended_clip_strength = get_recommended_clip_strength(lora)
                recommended_clip_strength = (
                    self.get_recommended_clip_strength_from_lora_data(lora)
                )
                if recommended_clip_strength is not None:
                    scale = rng.uniform(
                        recommended_strength_scale_min, recommended_strength_scale_max

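Editor's note: both strength helpers read the `usage_tips` field, which is stored as a JSON string rather than a dict. A short sketch of the expected payload and the tolerant parse; the sample values are illustrative.

```python
import json
from typing import Optional

def recommended_strength(usage_tips: str, key: str) -> Optional[float]:
    """Tolerant lookup mirroring the helpers above: bad JSON or a missing key yields None."""
    try:
        if not usage_tips:
            return None
        return json.loads(usage_tips).get(key)
    except (json.JSONDecodeError, TypeError, AttributeError):
        return None

sample = '{"strength": 0.8, "clipStrength": 0.6}'  # illustrative usage_tips payload
print(recommended_strength(sample, "strength"))      # 0.8
print(recommended_strength(sample, "clipStrength"))  # 0.6
print(recommended_strength("not json", "strength"))  # None
```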
@@ -11,6 +11,7 @@ from typing import Any, Awaitable, Callable, Dict, Iterable, Optional
from ..services.settings_manager import SettingsManager
from ..utils.civitai_utils import resolve_license_payload
from ..utils.model_utils import determine_base_model
from .connectivity_guard import OFFLINE_FRIENDLY_MESSAGE, is_expected_offline_error
from .errors import RateLimitError

logger = logging.getLogger(__name__)
@@ -274,11 +275,18 @@ class MetadataSyncService:
                else "No provider returned metadata"
            )

            resolved_error = last_error or default_error
            if is_expected_offline_error(resolved_error):
                resolved_error = OFFLINE_FRIENDLY_MESSAGE

            error_msg = (
                f"Error fetching metadata: {last_error or default_error} "
                f"Error fetching metadata: {resolved_error} "
                f"(model_name={model_data.get('model_name', '')})"
            )
            logger.error(error_msg)
            if is_expected_offline_error(resolved_error):
                logger.info(error_msg)
            else:
                logger.error(error_msg)
            return False, error_msg

        model_data["from_civitai"] = True
@@ -347,6 +355,9 @@ class MetadataSyncService:
            return False, error_msg
        except Exception as exc:  # pragma: no cover - error path
            error_msg = f"Error fetching metadata: {exc}"
            if is_expected_offline_error(str(exc)):
                logger.info(OFFLINE_FRIENDLY_MESSAGE)
                return False, OFFLINE_FRIENDLY_MESSAGE
            logger.error(error_msg, exc_info=True)
            return False, error_msg

@@ -79,6 +79,12 @@ class ModelHashIndex:
                hash_val = h
                break

        if hash_val is None:
            for h, paths in self._duplicate_hashes.items():
                if file_path in paths:
                    hash_val = h
                    break

        # If we didn't find a hash, nothing to do
        if not hash_val:
            return

@@ -8,6 +8,7 @@ from typing import Any, Awaitable, Callable, Dict, Iterable, List, Mapping, Opti

from ..services.service_registry import ServiceRegistry
from ..utils.constants import PREVIEW_EXTENSIONS
from ..utils.metadata_manager import MetadataManager

logger = logging.getLogger(__name__)

@@ -207,11 +208,56 @@ class ModelLifecycleService:

        excluded = getattr(self._scanner, "_excluded_models", None)
        if isinstance(excluded, list):
            excluded.append(file_path)
            if file_path not in excluded:
                excluded.append(file_path)

        persist_current_cache = getattr(self._scanner, "_persist_current_cache", None)
        if callable(persist_current_cache):
            await persist_current_cache()

        message = f"Model {os.path.basename(file_path)} excluded"
        return {"success": True, "message": message}

    async def unexclude_model(self, file_path: str) -> Dict[str, object]:
        """Restore a previously excluded model to the active cache."""

        if not file_path:
            raise ValueError("Model path is required")

        if not os.path.exists(file_path):
            raise ValueError("Model file does not exist")

        metadata_path = os.path.splitext(file_path)[0] + ".metadata.json"
        metadata_payload = await self._metadata_loader(metadata_path)
        metadata_payload["exclude"] = False

        await self._metadata_manager.save_metadata(file_path, metadata_payload)

        metadata, should_skip = await MetadataManager.load_metadata(
            file_path,
            self._scanner.model_class,
        )
        if should_skip:
            metadata = None
        if metadata is None:
            metadata = metadata_payload

        excluded = getattr(self._scanner, "_excluded_models", None)
        if isinstance(excluded, list):
            self._scanner._excluded_models = [
                path for path in excluded if path != file_path
            ]

        await self._scanner.update_single_model_cache(
            file_path,
            file_path,
            metadata,
            recalculate_type=True,
        )

        message = f"Model {os.path.basename(file_path)} restored"
        return {"success": True, "message": message}

    async def bulk_delete_models(self, file_paths: Iterable[str]) -> Dict[str, object]:
        """Delete a collection of models via the scanner bulk operation."""

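Editor's note: exclude/unexclude persist the flag in the model's `.metadata.json` sidecar before reconciling the scanner's in-memory list, so the choice survives a rescan. A minimal sketch of flipping that flag on disk; the sidecar naming mirrors the code above, while the example path is hypothetical.

```python
import json
import os

def set_exclude_flag(model_path: str, exclude: bool) -> dict:
    """Write the exclude flag into the sidecar next to the model file and return the payload."""
    metadata_path = os.path.splitext(model_path)[0] + ".metadata.json"
    payload = {}
    if os.path.exists(metadata_path):
        with open(metadata_path, "r", encoding="utf-8") as f:
            payload = json.load(f)
    payload["exclude"] = exclude
    with open(metadata_path, "w", encoding="utf-8") as f:
        json.dump(payload, f, indent=4, ensure_ascii=False)
    return payload

# Hypothetical path: excluding hides the model from the cache, un-excluding restores it.
# set_exclude_flag("/models/loras/example.safetensors", True)
```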
@@ -411,6 +411,7 @@ class ModelScanner:
            if scan_result:
                await self._apply_scan_result(scan_result)
                await self._save_persistent_cache(scan_result)
                await self._sync_download_history(scan_result.raw_data, source='scan')

                # Send final progress update
                await ws_manager.broadcast_init_progress({
@@ -516,6 +517,7 @@ class ModelScanner:
                )

                await self._apply_scan_result(scan_result)
                await self._sync_download_history(adjusted_raw_data, source='scan')

                await ws_manager.broadcast_init_progress({
                    'stage': 'loading_cache',
@@ -576,6 +578,7 @@ class ModelScanner:
            excluded_models=list(self._excluded_models)
        )
        await self._save_persistent_cache(snapshot)
        await self._sync_download_history(snapshot.raw_data, source='scan')
    def _count_model_files(self) -> int:
        """Count all model files with supported extensions in all roots

@@ -704,6 +707,7 @@ class ModelScanner:
            scan_result = await self._gather_model_data()
            await self._apply_scan_result(scan_result)
            await self._save_persistent_cache(scan_result)
            await self._sync_download_history(scan_result.raw_data, source='scan')

            logger.info(
                f"{self.model_type.capitalize()} Scanner: Cache initialization completed in {time.time() - start_time:.2f} seconds, "
@@ -732,18 +736,23 @@ class ModelScanner:
        # Get current cached file paths
        cached_paths = {item['file_path'] for item in self._cache.raw_data}
        path_to_item = {item['file_path']: item for item in self._cache.raw_data}
        cached_real_paths = {}
        for cached_path in cached_paths:
            try:
                cached_real_paths.setdefault(os.path.realpath(cached_path), cached_path)
            except Exception:
                continue

        # Track found files and new files
        found_paths = set()
        new_files = []
        visited_real_paths = set()
        discovered_real_files = set()

        # Scan all model roots
        for root_path in self.get_model_roots():
            if not os.path.exists(root_path):
                continue

            # Track visited real paths to avoid symlink loops
            visited_real_paths = set()

            # Recursively scan directory
            for root, _, files in os.walk(root_path, followlinks=True):
@@ -757,12 +766,18 @@ class ModelScanner:
                    if ext in self.file_extensions:
                        # Construct paths exactly as they would be in cache
                        file_path = os.path.join(root, file).replace(os.sep, '/')
                        real_file_path = os.path.realpath(os.path.join(root, file))

                        # Check if this file is already in cache
                        if file_path in cached_paths:
                            found_paths.add(file_path)
                            continue

                        cached_real_match = cached_real_paths.get(real_file_path)
                        if cached_real_match:
                            found_paths.add(cached_real_match)
                            continue

                        if file_path in self._excluded_models:
                            continue

@@ -778,6 +793,10 @@ class ModelScanner:
                        if matched:
                            continue

                        if real_file_path in discovered_real_files:
                            continue

                        discovered_real_files.add(real_file_path)
                        # This is a new file to process
                        new_files.append(file_path)

@@ -1053,14 +1072,6 @@ class ModelScanner:
                excluded_models.append(model_data['file_path'])
            return None

        # Check for duplicate filename before adding to hash index
        # filename = os.path.splitext(os.path.basename(file_path))[0]
        # existing_hash = hash_index.get_hash_by_filename(filename)
        # if existing_hash and existing_hash != model_data.get('sha256', '').lower():
        #     existing_path = hash_index.get_path(existing_hash)
        #     if existing_path and existing_path != file_path:
        #         logger.warning(f"Duplicate filename detected: '{filename}' - files: '{existing_path}' and '{file_path}'")

        return model_data

    async def _apply_scan_result(self, scan_result: CacheBuildResult) -> None:
@@ -1086,6 +1097,74 @@ class ModelScanner:

        await self._cache.resort()

        self._log_duplicate_filename_summary()

    def _log_duplicate_filename_summary(self) -> None:
        """Log a batched summary of duplicate filename conflicts once per scan."""
        if self._hash_index is None:
            return

        duplicates = self._hash_index.get_duplicate_filenames()
        if not duplicates:
            return

        total_files = sum(len(paths) for paths in duplicates.values())
        conflict_count = len(duplicates)
        model_type_label = self.model_type or "model"

        logger.warning(
            "Duplicate filename conflict detected: %d %s filename(s) "
            "are shared by %d files total, causing ambiguity in %s resolution. "
            "Open the Doctor panel to resolve one-click.",
            conflict_count,
            model_type_label,
            total_files,
            model_type_label.capitalize(),
        )

    async def _sync_download_history(
        self,
        raw_data: List[Mapping[str, Any]],
        *,
        source: str,
    ) -> None:
        records: List[Dict[str, Any]] = []
        for item in raw_data or []:
            if not isinstance(item, Mapping):
                continue
            civitai = item.get('civitai')
            if not isinstance(civitai, Mapping):
                continue

            version_id = civitai.get('id')
            if version_id in (None, ''):
                continue

            records.append(
                {
                    'version_id': version_id,
                    'model_id': civitai.get('modelId'),
                    'file_path': item.get('file_path'),
                }
            )

        if not records:
            return

        try:
            history_service = await ServiceRegistry.get_downloaded_version_history_service()
            await history_service.mark_downloaded_bulk(
                self.model_type,
                records,
                source=source,
            )
        except Exception as exc:
            logger.debug(
                "%s Scanner: Failed to sync download history: %s",
                self.model_type.capitalize(),
                exc,
            )

    async def _gather_model_data(
        self,
        *,
@@ -1099,6 +1178,8 @@ class ModelScanner:
        tags_count: Dict[str, int] = {}
        excluded_models: List[str] = []
        processed_files = 0
        processed_real_files: Set[str] = set()
        visited_real_dirs: Set[str] = set()

        async def handle_progress() -> None:
            if progress_callback is None:
@@ -1115,9 +1196,10 @@ class ModelScanner:

            try:
                real_path = os.path.realpath(current_path)
                if real_path in visited_paths:
                if real_path in visited_paths or real_path in visited_real_dirs:
                    return
                visited_paths.add(real_path)
                visited_real_dirs.add(real_path)

                with os.scandir(current_path) as iterator:
                    entries = list(iterator)
@@ -1130,6 +1212,11 @@ class ModelScanner:
                        continue

                    file_path = entry.path.replace(os.sep, "/")
                    real_file_path = os.path.realpath(entry.path)
                    if real_file_path in processed_real_files:
                        continue

                    processed_real_files.add(real_file_path)
                    result = await self._process_model_file(
                        file_path,
                        root_path,
@@ -1465,7 +1552,7 @@ class ModelScanner:
        return sorted_tags[:limit]

    async def get_base_models(self, limit: int = 20) -> List[Dict[str, any]]:
        """Get base models sorted by frequency"""
        """Get base models sorted by count. If limit is 0, return all."""
        cache = await self.get_cached_data()

        base_model_counts = {}
@@ -1476,7 +1563,9 @@ class ModelScanner:

        sorted_models = [{'name': model, 'count': count} for model, count in base_model_counts.items()]
        sorted_models.sort(key=lambda x: x['count'], reverse=True)

        if limit == 0:
            return sorted_models
        return sorted_models[:limit]

    async def get_model_info_by_name(self, name):

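Editor's note: several hunks above add `os.path.realpath` bookkeeping so that a model reachable both through a symlink and through its target is only counted once, and symlinked directory loops are not walked twice. A compact sketch of the same dedupe; the directory layout in the commented call is hypothetical.

```python
import os

def find_model_files(roots: list[str], extensions: set[str]) -> list[str]:
    """Walk roots following symlinks, but keep only one path per real file on disk."""
    seen_real_files: set[str] = set()
    results: list[str] = []
    for root_path in roots:
        if not os.path.exists(root_path):
            continue
        for root, _, files in os.walk(root_path, followlinks=True):
            for name in files:
                if os.path.splitext(name)[1].lower() not in extensions:
                    continue
                file_path = os.path.join(root, name).replace(os.sep, "/")
                real_path = os.path.realpath(file_path)
                if real_path in seen_real_files:
                    continue  # already reached via another link or root
                seen_real_files.add(real_path)
                results.append(file_path)
    return results

# Illustrative call; '/models/loras' is a hypothetical root directory.
# print(find_model_files(["/models/loras"], {".safetensors"}))
```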
@@ -12,8 +12,9 @@ from typing import Any, Dict, Iterable, List, Mapping, Optional, Sequence

from .errors import RateLimitError, ResourceNotFoundError
from .settings_manager import get_settings_manager
from ..utils.cache_paths import CacheType, resolve_cache_path_with_migration
from ..utils.civitai_utils import rewrite_preview_url
from ..utils.preview_selection import select_preview_media
from ..utils.preview_selection import resolve_mature_threshold, select_preview_media

logger = logging.getLogger(__name__)

@@ -68,6 +69,7 @@ class ModelVersionRecord:
    early_access_ends_at: Optional[str] = None
    sort_index: int = 0
    is_early_access: bool = False
    usage_control: Optional[str] = None  # "Download", "Generation", "InternalGeneration"


@dataclass
@@ -100,11 +102,14 @@ class ModelUpdateRecord:

        return [version.version_id for version in self.versions if version.is_in_library]

    def has_update(self, hide_early_access: bool = False) -> bool:
    def has_update(
        self, hide_early_access: bool = False, hide_non_downloadable: bool = True
    ) -> bool:
        """Return True when a non-ignored remote version newer than the newest local copy is available.

        Args:
            hide_early_access: If True, exclude early access versions from update check.
            hide_non_downloadable: If True, exclude versions that don't allow downloads.
        """

        if self.should_ignore_model:
@@ -120,6 +125,7 @@ class ModelUpdateRecord:
                not version.is_in_library
                and not version.should_ignore
                and not (hide_early_access and ModelUpdateRecord._is_early_access_active(version))
                and not (hide_non_downloadable and not ModelUpdateRecord._is_downloadable(version))
                for version in self.versions
            )

@@ -128,6 +134,8 @@ class ModelUpdateRecord:
                continue
            if hide_early_access and ModelUpdateRecord._is_early_access_active(version):
                continue
            if hide_non_downloadable and not ModelUpdateRecord._is_downloadable(version):
                continue
            if version.version_id > max_in_library:
                return True
        return False
@@ -154,11 +162,18 @@ class ModelUpdateRecord:
            # Phase 1: Basic EA flag from bulk API
            return version.is_early_access

    @staticmethod
    def _is_downloadable(version: ModelVersionRecord) -> bool:
        if version.usage_control is None:
            return True
        return version.usage_control == "Download"

    def has_update_for_base(
        self,
        local_version_id: Optional[int],
        local_base_model: Optional[str],
        hide_early_access: bool = False,
        hide_non_downloadable: bool = True,
    ) -> bool:
        """Return True when a newer remote version with the same base model exists.

@@ -166,6 +181,7 @@ class ModelUpdateRecord:
            local_version_id: The current local version id.
            local_base_model: The base model to filter by.
            hide_early_access: If True, exclude early access versions from update check.
            hide_non_downloadable: If True, exclude versions that don't allow downloads.
        """

        if self.should_ignore_model:
@@ -196,6 +212,8 @@ class ModelUpdateRecord:
                continue
            if hide_early_access and ModelUpdateRecord._is_early_access_active(version):
                continue
            if hide_non_downloadable and not ModelUpdateRecord._is_downloadable(version):
                continue
            version_base = _normalize_base_model(version.base_model)
            if version_base != normalized_base:
                continue
@@ -208,6 +226,8 @@
class ModelUpdateService:
    """Persist and query remote model version metadata."""

    _SQLITE_MAX_VARIABLES = 500

    _SCHEMA = """
    PRAGMA foreign_keys = ON;
    CREATE TABLE IF NOT EXISTS model_update_status (
@@ -227,6 +247,7 @@ class ModelUpdateService:
        preview_url TEXT,
        is_in_library INTEGER NOT NULL DEFAULT 0,
        should_ignore INTEGER NOT NULL DEFAULT 0,
        usage_control TEXT,
        PRIMARY KEY (model_id, version_id),
        FOREIGN KEY(model_id) REFERENCES model_update_status(model_id) ON DELETE CASCADE
    );
@@ -234,12 +255,52 @@ class ModelUpdateService:
    ON model_update_versions(model_id);
    """

    def __init__(self, db_path: str, *, ttl_seconds: int = 24 * 60 * 60, settings_manager=None) -> None:
        self._db_path = db_path
    def __init__(
        self,
        db_path: str | None = None,
        *,
        ttl_seconds: int = 24 * 60 * 60,
        settings_manager=None,
    ) -> None:
        self._settings = settings_manager or get_settings_manager()
        self._library_name = self._get_active_library_name()
        self._db_path = db_path or self._resolve_default_path(self._library_name)
        self._ttl_seconds = ttl_seconds
        self._lock = asyncio.Lock()
        self._schema_initialized = False
        self._settings = settings_manager or get_settings_manager()
        self._custom_db_path = db_path is not None
        self._ensure_directory()
        self._initialize_schema()

    def _get_active_library_name(self) -> str:
        try:
            value = self._settings.get_active_library_name()
        except Exception:
            value = None
        return value or "default"

    def _resolve_default_path(self, library_name: str) -> str:
        env_override = os.environ.get("LORA_MANAGER_MODEL_UPDATE_DB")
        return resolve_cache_path_with_migration(
            CacheType.MODEL_UPDATE,
            library_name=library_name,
            env_override=env_override,
        )

    def on_library_changed(self) -> None:
        """Switch to the database for the active library."""

        if self._custom_db_path:
            return

        library_name = self._get_active_library_name()
        new_path = self._resolve_default_path(library_name)
        if new_path == self._db_path:
            return

        self._library_name = library_name
        self._db_path = new_path
        self._schema_initialized = False
        self._ensure_directory()
        self._initialize_schema()

@@ -262,11 +323,114 @@ class ModelUpdateService:
                conn.execute("PRAGMA foreign_keys = ON")
                conn.executescript(self._SCHEMA)
                self._apply_migrations(conn)
                self._migrate_from_legacy_snapshot(conn)
                self._schema_initialized = True
        except Exception as exc:  # pragma: no cover - defensive guard
            logger.error("Failed to initialize update schema: %s", exc, exc_info=True)
            raise

    def _migrate_from_legacy_snapshot(self, conn: sqlite3.Connection) -> None:
        """Copy update tracking data out of the legacy model snapshot database."""

        if self._custom_db_path:
            return

        try:
            from .persistent_model_cache import get_persistent_cache

            legacy_path = get_persistent_cache(self._library_name).get_database_path()
        except Exception:
            return

        if not legacy_path or os.path.abspath(legacy_path) == os.path.abspath(self._db_path):
            return
        if not os.path.exists(legacy_path):
            return

        try:
            existing_row = conn.execute(
                "SELECT 1 FROM model_update_status LIMIT 1"
            ).fetchone()
            if existing_row:
                return
        except Exception:
            return

        try:
            with sqlite3.connect(legacy_path, check_same_thread=False) as legacy_conn:
                legacy_conn.row_factory = sqlite3.Row
                status_rows = legacy_conn.execute(
                    """
                    SELECT model_id, model_type, last_checked_at, should_ignore_model
                    FROM model_update_status
                    """
                ).fetchall()
                if not status_rows:
                    return

                version_rows = legacy_conn.execute(
                    """
                    SELECT model_id, version_id, sort_index, name, base_model, released_at,
                           size_bytes, preview_url, is_in_library, should_ignore,
                           early_access_ends_at, is_early_access
                    FROM model_update_versions
                    ORDER BY model_id ASC, sort_index ASC, version_id ASC
                    """
                ).fetchall()

            conn.execute("BEGIN")
            conn.executemany(
                """
                INSERT OR REPLACE INTO model_update_status (
                    model_id, model_type, last_checked_at, should_ignore_model
                ) VALUES (?, ?, ?, ?)
                """,
                [
                    (
                        int(row["model_id"]),
                        row["model_type"],
                        row["last_checked_at"],
                        int(row["should_ignore_model"] or 0),
                    )
                    for row in status_rows
                ],
            )
            conn.executemany(
                """
                INSERT OR REPLACE INTO model_update_versions (
                    model_id, version_id, sort_index, name, base_model, released_at,
                    size_bytes, preview_url, is_in_library, should_ignore,
                    early_access_ends_at, is_early_access
                ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
                """,
                [
                    (
                        int(row["model_id"]),
                        int(row["version_id"]),
                        int(row["sort_index"] or 0),
                        row["name"],
                        row["base_model"],
                        row["released_at"],
                        row["size_bytes"],
                        row["preview_url"],
                        int(row["is_in_library"] or 0),
                        int(row["should_ignore"] or 0),
                        row["early_access_ends_at"],
                        int(row["is_early_access"] or 0),
                    )
                    for row in version_rows
                ],
            )
            conn.commit()
            logger.info(
                "Migrated model update tracking data from legacy snapshot DB for %s",
                self._library_name,
            )
        except sqlite3.OperationalError as exc:
            logger.debug("Legacy model update migration skipped: %s", exc)
        except Exception as exc:  # pragma: no cover - defensive guard
            logger.warning("Failed to migrate model update data: %s", exc, exc_info=True)

    def _apply_migrations(self, conn: sqlite3.Connection) -> None:
        """Ensure legacy databases match the current schema without dropping data."""

@@ -319,6 +483,10 @@ class ModelUpdateService:
                "ALTER TABLE model_update_versions "
                "ADD COLUMN is_early_access INTEGER NOT NULL DEFAULT 0"
            ),
            "usage_control": (
                "ALTER TABLE model_update_versions "
                "ADD COLUMN usage_control TEXT"
            ),
        }

        for column, statement in migrations.items():
@@ -1191,6 +1359,7 @@ class ModelUpdateService:
        # Check availability field from bulk API for basic EA detection
        availability = _normalize_string(entry.get("availability"))
        is_early_access = availability == "EarlyAccess"
        usage_control = _normalize_string(entry.get("usageControl"))

        return ModelVersionRecord(
            version_id=version_id,
@@ -1204,6 +1373,7 @@ class ModelUpdateService:
            early_access_ends_at=early_access_ends_at,
            sort_index=index,
            is_early_access=is_early_access,
            usage_control=usage_control,
        )

    def _extract_size_bytes(self, files) -> Optional[int]:
@@ -1252,14 +1422,23 @@ class ModelUpdateService:
            return None

        blur_mature_content = True
        mature_threshold = resolve_mature_threshold({"mature_blur_level": "R"})
        settings = getattr(self, "_settings", None)
        if settings is not None and hasattr(settings, "get"):
            try:
                blur_mature_content = bool(settings.get("blur_mature_content", True))
                mature_threshold = resolve_mature_threshold(
                    {"mature_blur_level": settings.get("mature_blur_level", "R")}
                )
            except Exception:  # pragma: no cover - defensive guard
                blur_mature_content = True
                mature_threshold = resolve_mature_threshold({"mature_blur_level": "R"})

        selected, _ = select_preview_media(candidates, blur_mature_content=blur_mature_content)
        selected, _ = select_preview_media(
            candidates,
            blur_mature_content=blur_mature_content,
            mature_threshold=mature_threshold,
        )
        if not selected:
            return None

@@ -1286,33 +1465,41 @@ class ModelUpdateService:
        if not model_ids:
            return {}

        params = tuple(model_ids)
        placeholders = ",".join("?" for _ in params)
        ids = list(model_ids)
        status_rows: list = []
        version_rows: list = []

        with self._connect() as conn:
            status_rows = conn.execute(
                f"""
                SELECT model_id, model_type, last_checked_at, should_ignore_model
                FROM model_update_status
                WHERE model_id IN ({placeholders})
                """,
                params,
            ).fetchall()
            for start in range(0, len(ids), self._SQLITE_MAX_VARIABLES):
                chunk = tuple(ids[start : start + self._SQLITE_MAX_VARIABLES])
                placeholders = ",".join("?" for _ in chunk)

                chunk_status = conn.execute(
                    f"""
                    SELECT model_id, model_type, last_checked_at, should_ignore_model
                    FROM model_update_status
                    WHERE model_id IN ({placeholders})
                    """,
                    chunk,
                ).fetchall()
                status_rows.extend(chunk_status)

                chunk_versions = conn.execute(
                    f"""
                    SELECT model_id, version_id, sort_index, name, base_model, released_at,
                           size_bytes, preview_url, is_in_library, should_ignore, early_access_ends_at,
                           is_early_access, usage_control
                    FROM model_update_versions
                    WHERE model_id IN ({placeholders})
                    ORDER BY model_id ASC, sort_index ASC, version_id ASC
                    """,
                    chunk,
                ).fetchall()
                version_rows.extend(chunk_versions)

        if not status_rows:
            return {}

            version_rows = conn.execute(
                f"""
                SELECT model_id, version_id, sort_index, name, base_model, released_at,
                       size_bytes, preview_url, is_in_library, should_ignore, early_access_ends_at,
                       is_early_access
                FROM model_update_versions
                WHERE model_id IN ({placeholders})
                ORDER BY model_id ASC, sort_index ASC, version_id ASC
                """,
                params,
            ).fetchall()

        versions_by_model: Dict[int, List[ModelVersionRecord]] = {}
        for row in version_rows:
            model_id = int(row["model_id"])
@@ -1329,6 +1516,7 @@ class ModelUpdateService:
                    early_access_ends_at=row["early_access_ends_at"],
                    sort_index=_normalize_int(row["sort_index"]) or 0,
                    is_early_access=bool(row["is_early_access"]),
                    usage_control=row["usage_control"],
                )
            )

@@ -1385,8 +1573,8 @@ class ModelUpdateService:
                INSERT INTO model_update_versions (
                    version_id, model_id, sort_index, name, base_model, released_at,
                    size_bytes, preview_url, is_in_library, should_ignore, early_access_ends_at,
                    is_early_access
                ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
                    is_early_access, usage_control
                ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
                """,
                (
                    version.version_id,
@@ -1401,6 +1589,7 @@ class ModelUpdateService:
                    1 if version.should_ignore else 0,
                    version.early_access_ends_at,
                    1 if version.is_early_access else 0,
                    version.usage_control,
                ),
            )
            conn.commit()

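Editor's note: the rewritten lookup batches model ids into groups of `_SQLITE_MAX_VARIABLES` (500) so a large library never exceeds SQLite's bound-parameter limit in a single `IN (...)` clause. A standalone sketch of the chunking pattern; the table and values here are illustrative.

```python
import sqlite3

SQLITE_MAX_VARIABLES = 500  # same batch size as the snippet above

def fetch_rows_by_ids(conn: sqlite3.Connection, ids: list[int]) -> list[tuple]:
    """Query an id list in chunks so no single statement binds too many parameters."""
    rows: list[tuple] = []
    for start in range(0, len(ids), SQLITE_MAX_VARIABLES):
        chunk = ids[start : start + SQLITE_MAX_VARIABLES]
        placeholders = ",".join("?" for _ in chunk)
        rows.extend(
            conn.execute(
                f"SELECT id, value FROM items WHERE id IN ({placeholders})", chunk
            ).fetchall()
        )
    return rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, value TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)", [(i, f"item-{i}") for i in range(1200)])
print(len(fetch_rows_by_ids(conn, list(range(1200)))))  # 1200 rows, fetched in three chunks
```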
@@ -9,7 +9,7 @@ from urllib.parse import urlparse

from ..utils.constants import CARD_PREVIEW_WIDTH, PREVIEW_EXTENSIONS
from ..utils.civitai_utils import rewrite_preview_url
from ..utils.preview_selection import select_preview_media
from ..utils.preview_selection import resolve_mature_threshold, select_preview_media
from .settings_manager import get_settings_manager

logger = logging.getLogger(__name__)
@@ -49,9 +49,13 @@ class PreviewAssetService:
        blur_mature_content = bool(
            settings_manager.get("blur_mature_content", True)
        )
        mature_threshold = resolve_mature_threshold(
            {"mature_blur_level": settings_manager.get("mature_blur_level", "R")}
        )
        first_preview, nsfw_level = select_preview_media(
            images,
            blur_mature_content=blur_mature_content,
            mature_threshold=mature_threshold,
        )

        if not first_preview:
@@ -216,4 +220,3 @@ class PreviewAssetService:
        if "webm" in content_type:
            return ".webm"
        return ".mp4"

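Editor's note: preview selection now receives a threshold derived from the `mature_blur_level` setting instead of a bare on/off blur flag. A rough sketch of threshold-based candidate filtering; the level-to-threshold mapping and the nsfwLevel scale below are assumptions for illustration, not the project's actual values or the real `select_preview_media` API.

```python
from typing import Optional

# Assumed mapping from blur level to a maximum acceptable nsfwLevel; illustrative only.
LEVEL_THRESHOLDS = {"PG": 1, "PG13": 2, "R": 4, "X": 8}

def pick_preview(candidates: list[dict], blur_mature_content: bool, level: str = "R") -> Optional[dict]:
    """Prefer the first candidate at or below the threshold; otherwise fall back to the first one."""
    if not candidates:
        return None
    if not blur_mature_content:
        return candidates[0]
    threshold = LEVEL_THRESHOLDS.get(level, LEVEL_THRESHOLDS["R"])
    for item in candidates:
        if item.get("nsfwLevel", 0) <= threshold:
            return item
    return candidates[0]

images = [{"url": "a.jpg", "nsfwLevel": 8}, {"url": "b.jpg", "nsfwLevel": 1}]
print(pick_preview(images, blur_mature_content=True, level="PG"))  # b.jpg entry
print(pick_preview(images, blur_mature_content=False))             # a.jpg entry
```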
@@ -18,6 +18,7 @@ from .service_registry import ServiceRegistry
from .lora_scanner import LoraScanner
from .metadata_service import get_default_metadata_provider
from .checkpoint_scanner import CheckpointScanner
from .settings_manager import get_settings_manager
from .recipes.errors import RecipeNotFoundError
from ..utils.utils import calculate_recipe_fingerprint, fuzzy_match
from natsort import natsorted
@@ -951,6 +952,30 @@ class RecipeScanner:
        except Exception as exc:
            logger.debug("Failed to update FTS index for recipe: %s", exc)

    @staticmethod
    def _normalize_recipe_gen_params(recipe_data: Dict[str, Any]) -> Dict[str, Any]:
        """Return a recipe copy with normalized generation parameter aliases added."""

        normalized_recipe = dict(recipe_data)
        gen_params = recipe_data.get("gen_params")
        if not isinstance(gen_params, dict):
            return normalized_recipe

        normalized_gen_params = dict(gen_params)
        for key, value in gen_params.items():
            if value in (None, ""):
                continue

            normalized_key = GenParamsMerger.NORMALIZATION_MAPPING.get(key, key)
            if normalized_key not in GenParamsMerger.ALLOWED_KEYS:
                continue

            if normalized_gen_params.get(normalized_key) in (None, ""):
                normalized_gen_params[normalized_key] = value

        normalized_recipe["gen_params"] = normalized_gen_params
        return normalized_recipe

    async def _enrich_cache_metadata(self) -> None:
        """Perform remote metadata enrichment after the initial scan."""

@@ -1090,6 +1115,14 @@ class RecipeScanner:
    @property
    def recipes_dir(self) -> str:
        """Get path to recipes directory"""
        custom_recipes_dir = get_settings_manager().get("recipes_path", "")
        if isinstance(custom_recipes_dir, str) and custom_recipes_dir.strip():
            recipes_dir = os.path.abspath(
                os.path.normpath(os.path.expanduser(custom_recipes_dir.strip()))
            )
            os.makedirs(recipes_dir, exist_ok=True)
            return recipes_dir

        if not config.loras_roots:
            return ""

@@ -1336,6 +1369,7 @@ class RecipeScanner:
        # Ensure gen_params exists
        if "gen_params" not in recipe_data:
            recipe_data["gen_params"] = {}
        recipe_data = self._normalize_recipe_gen_params(recipe_data)

        # Update lora information with local paths and availability
        lora_metadata_updated = await self._update_lora_information(recipe_data)
@@ -1615,6 +1649,9 @@ class RecipeScanner:
    ) -> Optional[Dict[str, Any]]:
        """Coerce legacy or malformed checkpoint entries into a dict."""

        if checkpoint_raw is None:
            return None

        if isinstance(checkpoint_raw, dict):
            return dict(checkpoint_raw)

@@ -1632,9 +1669,6 @@ class RecipeScanner:
                "file_name": file_name,
            }

        logger.warning(
            "Unexpected checkpoint payload type %s", type(checkpoint_raw).__name__
        )
        return None

    def _enrich_checkpoint_entry(self, checkpoint: Dict[str, Any]) -> Dict[str, Any]:
@@ -1781,6 +1815,15 @@ class RecipeScanner:

        return await self._lora_scanner.get_model_info_by_name(name)

    async def get_local_checkpoint(self, name: str) -> Optional[Dict[str, Any]]:
        """Lookup a local checkpoint model by name."""

        checkpoint_scanner = getattr(self, "_checkpoint_scanner", None)
        if not checkpoint_scanner or not name:
            return None

        return await checkpoint_scanner.get_model_info_by_name(name)

    async def get_paginated_data(
        self,
        page: int,
@@ -1790,6 +1833,7 @@ class RecipeScanner:
        filters: dict = None,
        search_options: dict = None,
        lora_hash: str = None,
        checkpoint_hash: str = None,
        bypass_filters: bool = True,
        folder: str | None = None,
        recursive: bool = True,
@@ -1804,7 +1848,8 @@ class RecipeScanner:
            filters: Dictionary of filters to apply
            search_options: Dictionary of search options to apply
            lora_hash: Optional SHA256 hash of a LoRA to filter recipes by
            bypass_filters: If True, ignore other filters when a lora_hash is provided
            checkpoint_hash: Optional SHA256 hash of a checkpoint to filter recipes by
            bypass_filters: If True, ignore other filters when a hash filter is provided
            folder: Optional folder filter relative to recipes directory
            recursive: Whether to include recipes in subfolders of the selected folder
        """
@@ -1852,9 +1897,23 @@ class RecipeScanner:
                # Skip other filters if bypass_filters is True
                pass
            # Otherwise continue with normal filtering after applying LoRA hash filter
        elif checkpoint_hash:
            normalized_checkpoint_hash = checkpoint_hash.lower()
            filtered_data = [
                item
                for item in filtered_data
                if isinstance(item.get("checkpoint"), dict)
                and (item["checkpoint"].get("hash", "") or "").lower()
                == normalized_checkpoint_hash
            ]

        # Skip further filtering if we're only filtering by LoRA hash with bypass enabled
        if not (lora_hash and bypass_filters):
            if bypass_filters:
                pass

        has_hash_filter = bool(lora_hash or checkpoint_hash)

        # Skip further filtering if we're only filtering by model hash with bypass enabled
        if not (has_hash_filter and bypass_filters):
            # Apply folder filter before other criteria
            if folder is not None:
                normalized_folder = folder.strip("/")
@@ -2030,7 +2089,10 @@ class RecipeScanner:
            end_idx = min(start_idx + page_size, total_items)

            # Get paginated items
            paginated_items = filtered_data[start_idx:end_idx]
            paginated_items = [
                self._normalize_recipe_gen_params(item)
                for item in filtered_data[start_idx:end_idx]
            ]

            # Add inLibrary information and URLs for each recipe
            for item in paginated_items:
@@ -2089,8 +2151,18 @@ class RecipeScanner:
        if not recipe:
            return None

        # Prefer the on-disk recipe JSON for fields that are not persisted in the
        # SQLite cache yet, such as source_path.
        merged_recipe = self._normalize_recipe_gen_params({**recipe})
        recipe_json = await self._load_recipe_json(recipe_id)
        if recipe_json:
            for field in ("source_path", "checkpoint", "loras", "gen_params"):
                if field not in recipe_json:
                    merged_recipe.pop(field, None)
            merged_recipe.update(recipe_json)

        # Format the recipe with all needed information
        formatted_recipe = {**recipe}  # Copy all fields
        formatted_recipe = {**merged_recipe}

        # Format file path to URL
        if "file_path" in formatted_recipe:
@@ -2124,6 +2196,30 @@ class RecipeScanner:

        return formatted_recipe

    async def _load_recipe_json(self, recipe_id: str) -> Optional[Dict[str, Any]]:
        """Load the raw recipe JSON payload for a recipe ID if it exists."""

        recipe_json_path = await self.get_recipe_json_path(recipe_id)
        if not recipe_json_path or not os.path.exists(recipe_json_path):
            return None

        try:
            with open(recipe_json_path, "r", encoding="utf-8") as f:
                recipe_data = json.load(f)
        except Exception as exc:
            logger.debug(
                "Failed to load recipe JSON for %s from %s: %s",
                recipe_id,
                recipe_json_path,
                exc,
            )
            return None

        if not isinstance(recipe_data, dict):
            return None

        return self._normalize_recipe_gen_params(recipe_data)

    def _format_file_url(self, file_path: str) -> str:
        """Format file path as URL for serving in web UI"""
        if not file_path:
@@ -2334,6 +2430,38 @@ class RecipeScanner:

        return matching_recipes

    async def get_recipes_for_checkpoint(
        self, checkpoint_hash: str
    ) -> List[Dict[str, Any]]:
        """Return recipes that reference a given checkpoint hash."""

        if not checkpoint_hash:
            return []

        normalized_hash = checkpoint_hash.lower()
        cache = await self.get_cached_data()
        matching_recipes: List[Dict[str, Any]] = []

        for recipe in cache.raw_data:
            checkpoint = self._normalize_checkpoint_entry(recipe.get("checkpoint"))
            if not checkpoint:
                continue

            enriched_checkpoint = self._enrich_checkpoint_entry(dict(checkpoint))
            if (enriched_checkpoint.get("hash") or "").lower() != normalized_hash:
                continue

            recipe_copy = {**recipe}
            recipe_copy["checkpoint"] = enriched_checkpoint
            recipe_copy["loras"] = [
                self._enrich_lora_entry(dict(entry))
                for entry in recipe.get("loras", [])
            ]
            recipe_copy["file_url"] = self._format_file_url(recipe.get("file_path"))
            matching_recipes.append(recipe_copy)

        return matching_recipes

    async def get_recipe_syntax_tokens(self, recipe_id: str) -> List[str]:
        """Build LoRA syntax tokens for a recipe."""

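Editor's note: `_normalize_recipe_gen_params` folds alias keys onto their canonical names without overwriting values that are already present. A small sketch of that merge rule; the alias table below is a hypothetical stand-in for `GenParamsMerger.NORMALIZATION_MAPPING`.

```python
from typing import Any

# Hypothetical alias table; the real mapping lives in GenParamsMerger.NORMALIZATION_MAPPING.
NORMALIZATION_MAPPING = {"cfg": "cfg_scale", "negative": "negative_prompt"}
ALLOWED_KEYS = {"prompt", "negative_prompt", "steps", "sampler", "cfg_scale", "seed", "size", "clip_skip"}

def normalize_gen_params(gen_params: dict[str, Any]) -> dict[str, Any]:
    """Copy aliases onto canonical keys, keeping any canonical value that already exists."""
    normalized = dict(gen_params)
    for key, value in gen_params.items():
        if value in (None, ""):
            continue
        canonical = NORMALIZATION_MAPPING.get(key, key)
        if canonical not in ALLOWED_KEYS:
            continue
        if normalized.get(canonical) in (None, ""):
            normalized[canonical] = value
    return normalized

print(normalize_gen_params({"cfg": 7, "prompt": "a castle", "negative": ""}))
# {'cfg': 7, 'prompt': 'a castle', 'negative': '', 'cfg_scale': 7}
```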
@@ -5,7 +5,6 @@ from __future__ import annotations
|
||||
import base64
|
||||
import io
|
||||
import os
|
||||
import re
|
||||
import tempfile
|
||||
from dataclasses import dataclass
|
||||
from typing import Any, Callable, Optional
|
||||
@@ -14,7 +13,7 @@ import numpy as np
|
||||
from PIL import Image
|
||||
|
||||
from ...utils.utils import calculate_recipe_fingerprint
|
||||
from ...utils.civitai_utils import rewrite_preview_url
|
||||
from ...utils.civitai_utils import extract_civitai_image_id, rewrite_preview_url
|
||||
from .errors import (
|
||||
RecipeDownloadError,
|
||||
RecipeNotFoundError,
|
||||
@@ -104,9 +103,11 @@ class RecipeAnalysisService:
|
||||
extension = ".jpg" # Default
|
||||
|
||||
try:
|
||||
civitai_match = re.match(r"https://civitai\.com/images/(\d+)", url)
|
||||
if civitai_match:
|
||||
image_info = await civitai_client.get_image_info(civitai_match.group(1))
|
||||
civitai_image_id = extract_civitai_image_id(url)
|
||||
if civitai_image_id:
|
||||
image_info = await civitai_client.get_image_info(
|
||||
civitai_image_id, source_url=url
|
||||
)
|
||||
if not image_info:
|
||||
raise RecipeDownloadError(
|
||||
"Failed to fetch image information from Civitai"
|
||||
@@ -143,6 +144,12 @@ class RecipeAnalysisService:
|
||||
):
|
||||
metadata = metadata["meta"]
|
||||
|
||||
# Include modelVersionIds from root level if available
|
||||
# Civitai API returns modelVersionIds at root level, not in meta
|
||||
model_version_ids = image_info.get("modelVersionIds")
|
||||
if model_version_ids and isinstance(metadata, dict):
|
||||
metadata["modelVersionIds"] = model_version_ids
|
||||
|
||||
# Validate that metadata contains meaningful recipe fields
|
||||
# If not, treat as None to trigger EXIF extraction from downloaded image
|
||||
if isinstance(metadata, dict) and not self._has_recipe_fields(metadata):
|
||||
|
||||
@@ -12,6 +12,7 @@ from dataclasses import dataclass
|
||||
from typing import Any, Dict, Iterable, Optional
|
||||
|
||||
from ...config import config
|
||||
from ...recipes.constants import GEN_PARAM_KEYS
|
||||
from ...utils.utils import calculate_recipe_fingerprint
|
||||
from .errors import RecipeNotFoundError, RecipeValidationError
|
||||
|
||||
@@ -90,23 +91,7 @@ class RecipePersistenceService:
|
||||
current_time = time.time()
|
||||
loras_data = [self._normalise_lora_entry(lora) for lora in (metadata.get("loras") or [])]
|
||||
checkpoint_entry = self._sanitize_checkpoint_entry(self._extract_checkpoint_entry(metadata))
|
||||
|
||||
gen_params = metadata.get("gen_params") or {}
|
||||
if not gen_params and "raw_metadata" in metadata:
|
||||
raw_metadata = metadata.get("raw_metadata", {})
|
||||
gen_params = {
|
||||
"prompt": raw_metadata.get("prompt", ""),
|
||||
"negative_prompt": raw_metadata.get("negative_prompt", ""),
|
||||
"steps": raw_metadata.get("steps", ""),
|
||||
"sampler": raw_metadata.get("sampler", ""),
|
||||
"cfg_scale": raw_metadata.get("cfg_scale", ""),
|
||||
"seed": raw_metadata.get("seed", ""),
|
||||
"size": raw_metadata.get("size", ""),
|
||||
"clip_skip": raw_metadata.get("clip_skip", ""),
|
||||
}
|
||||
|
||||
# Drop checkpoint duplication from generation parameters to store it only at top level
|
||||
gen_params.pop("checkpoint", None)
|
||||
gen_params = self._sanitize_gen_params_for_storage(metadata)
|
||||
|
||||
fingerprint = calculate_recipe_fingerprint(loras_data)
|
||||
recipe_data: Dict[str, Any] = {
|
||||
@@ -133,6 +118,7 @@ class RecipePersistenceService:
|
||||
json_filename = f"{recipe_id}.recipe.json"
|
||||
json_path = os.path.join(recipes_dir, json_filename)
|
||||
json_path = os.path.normpath(json_path)
|
||||
|
||||
with open(json_path, "w", encoding="utf-8") as file_obj:
|
||||
json.dump(recipe_data, file_obj, indent=4, ensure_ascii=False)
|
||||
|
||||
@@ -152,6 +138,30 @@ class RecipePersistenceService:
|
||||
}
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def _sanitize_gen_params_for_storage(metadata: dict[str, Any]) -> dict[str, Any]:
|
||||
gen_params = metadata.get("gen_params")
|
||||
if isinstance(gen_params, dict) and gen_params:
|
||||
source = gen_params
|
||||
else:
|
||||
source = metadata.get("raw_metadata")
|
||||
|
||||
if not isinstance(source, dict):
|
||||
return {}
|
||||
|
||||
allowed_keys = set(GEN_PARAM_KEYS)
|
||||
sanitized: dict[str, Any] = {}
|
||||
for key in allowed_keys:
|
||||
if key not in source:
|
||||
continue
|
||||
value = source.get(key)
|
||||
if value in (None, ""):
|
||||
continue
|
||||
sanitized[key] = value
|
||||
|
||||
sanitized.pop("checkpoint", None)
|
||||
return sanitized
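# Illustrative sketch (not in the diff): how the sanitizer filters values, assuming
# GEN_PARAM_KEYS contains at least "prompt" and "steps" (the exact list lives in
# py/recipes/constants.py and is not reproduced here).
sanitized = RecipePersistenceService._sanitize_gen_params_for_storage({
    "gen_params": {
        "prompt": "a portrait",            # kept: allowed key with a value
        "steps": "",                       # dropped: empty value
        "checkpoint": "base.safetensors",  # dropped: checkpoint is stored at the top level
        "unknown_key": 1,                  # dropped: not in GEN_PARAM_KEYS
    }
})
# -> {"prompt": "a portrait"}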
|
||||
|
||||
async def delete_recipe(self, *, recipe_scanner, recipe_id: str) -> PersistenceResult:
|
||||
"""Delete an existing recipe."""
|
||||
|
||||
@@ -173,11 +183,23 @@ class RecipePersistenceService:
|
||||
async def update_recipe(self, *, recipe_scanner, recipe_id: str, updates: dict[str, Any]) -> PersistenceResult:
|
||||
"""Update persisted metadata for a recipe."""
|
||||
|
||||
if not any(key in updates for key in ("title", "tags", "source_path", "preview_nsfw_level", "favorite")):
|
||||
allowed_fields = (
|
||||
"title",
|
||||
"tags",
|
||||
"source_path",
|
||||
"preview_nsfw_level",
|
||||
"favorite",
|
||||
"gen_params",
|
||||
)
|
||||
|
||||
if not any(key in updates for key in allowed_fields):
|
||||
raise RecipeValidationError(
|
||||
"At least one field to update must be provided (title or tags or source_path or preview_nsfw_level or favorite)"
|
||||
"At least one field to update must be provided (title or tags or source_path or preview_nsfw_level or favorite or gen_params)"
|
||||
)
|
||||
|
||||
if "gen_params" in updates and not isinstance(updates["gen_params"], dict):
|
||||
raise RecipeValidationError("gen_params must be an object")
|
||||
|
||||
success = await recipe_scanner.update_recipe_metadata(recipe_id, updates)
|
||||
if not success:
|
||||
raise RecipeNotFoundError("Recipe not found or update failed")
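# Usage sketch (not part of the diff): updating a recipe's generation params.
# The recipe id, scanner, and service instances below are hypothetical.
async def _demo_update(persistence_service, scanner):
    await persistence_service.update_recipe(
        recipe_scanner=scanner,
        recipe_id="example-recipe-id",
        updates={
            "title": "Renamed recipe",
            "gen_params": {"prompt": "a portrait", "steps": 30},  # must be a dict
        },
    )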
|
||||
@@ -486,6 +508,10 @@ class RecipePersistenceService:
|
||||
most_common_base_model = (
|
||||
max(base_model_counts.items(), key=lambda item: item[1])[0] if base_model_counts else ""
|
||||
)
|
||||
checkpoint_entry = await self._build_widget_checkpoint_entry(
|
||||
recipe_scanner,
|
||||
metadata.get("checkpoint"),
|
||||
)
|
||||
|
||||
recipe_data = {
|
||||
"id": recipe_id,
|
||||
@@ -493,9 +519,8 @@ class RecipePersistenceService:
|
||||
"title": recipe_name,
|
||||
"modified": time.time(),
|
||||
"created_date": time.time(),
|
||||
"base_model": most_common_base_model,
|
||||
"base_model": most_common_base_model or (checkpoint_entry or {}).get("baseModel", ""),
|
||||
"loras": loras_data,
|
||||
"checkpoint": self._sanitize_checkpoint_entry(metadata.get("checkpoint", "")),
|
||||
"gen_params": {
|
||||
key: value
|
||||
for key, value in metadata.items()
|
||||
@@ -503,6 +528,8 @@ class RecipePersistenceService:
|
||||
},
|
||||
"loras_stack": lora_stack,
|
||||
}
|
||||
if checkpoint_entry:
|
||||
recipe_data["checkpoint"] = checkpoint_entry
|
||||
|
||||
json_filename = f"{recipe_id}.recipe.json"
|
||||
json_path = os.path.join(recipes_dir, json_filename)
|
||||
@@ -524,6 +551,91 @@ class RecipePersistenceService:
|
||||
|
||||
# Helper methods ---------------------------------------------------
|
||||
|
||||
async def _build_widget_checkpoint_entry(
|
||||
self,
|
||||
recipe_scanner,
|
||||
checkpoint_raw: Any,
|
||||
) -> Optional[dict[str, Any]]:
|
||||
"""Build recipe checkpoint metadata from widget generation metadata."""
|
||||
|
||||
if isinstance(checkpoint_raw, dict):
|
||||
return self._sanitize_checkpoint_entry(checkpoint_raw)
|
||||
|
||||
if not isinstance(checkpoint_raw, str):
|
||||
return None
|
||||
|
||||
checkpoint_name = checkpoint_raw.strip()
|
||||
if not checkpoint_name:
|
||||
return None
|
||||
|
||||
file_name = os.path.splitext(os.path.basename(checkpoint_name))[0]
|
||||
checkpoint_info = await self._lookup_widget_checkpoint(
|
||||
recipe_scanner,
|
||||
checkpoint_name,
|
||||
)
|
||||
if not checkpoint_info:
|
||||
return {
|
||||
"type": "checkpoint",
|
||||
"name": checkpoint_name,
|
||||
"file_name": file_name,
|
||||
"hash": "",
|
||||
}
|
||||
|
||||
civitai = checkpoint_info.get("civitai") or {}
|
||||
civitai_model = civitai.get("model") or {}
|
||||
file_path = checkpoint_info.get("file_path") or checkpoint_info.get("path") or ""
|
||||
cached_file_name = (
|
||||
checkpoint_info.get("file_name")
|
||||
or (os.path.splitext(os.path.basename(file_path))[0] if file_path else "")
|
||||
or file_name
|
||||
)
|
||||
|
||||
return {
|
||||
"type": "checkpoint",
|
||||
"modelId": civitai_model.get("id", 0),
|
||||
"modelVersionId": civitai.get("id", 0),
|
||||
"name": civitai_model.get("name") or checkpoint_info.get("model_name") or checkpoint_name,
|
||||
"version": civitai.get("name", ""),
|
||||
"hash": (checkpoint_info.get("sha256") or checkpoint_info.get("hash") or "").lower(),
|
||||
"file_name": cached_file_name,
|
||||
"modelName": civitai_model.get("name", ""),
|
||||
"modelVersionName": civitai.get("name", ""),
|
||||
"baseModel": checkpoint_info.get("base_model") or civitai.get("baseModel", ""),
|
||||
}
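# Example shape of the entry returned above when the checkpoint is found locally
# (all values are hypothetical; field names mirror the dict just built):
# {
#     "type": "checkpoint",
#     "modelId": 4201,
#     "modelVersionId": 130072,
#     "name": "Example Checkpoint",
#     "version": "v1.0",
#     "hash": "<sha256, lowercased>",
#     "file_name": "example_checkpoint_v10",
#     "modelName": "Example Checkpoint",
#     "modelVersionName": "v1.0",
#     "baseModel": "SD 1.5",
# }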
|
||||
|
||||
async def _lookup_widget_checkpoint(
|
||||
self,
|
||||
recipe_scanner,
|
||||
checkpoint_name: str,
|
||||
) -> Optional[dict[str, Any]]:
|
||||
lookup = getattr(recipe_scanner, "get_local_checkpoint", None)
|
||||
if not callable(lookup):
|
||||
return None
|
||||
|
||||
candidates = []
|
||||
for candidate in (
|
||||
checkpoint_name,
|
||||
os.path.basename(checkpoint_name),
|
||||
os.path.splitext(os.path.basename(checkpoint_name))[0],
|
||||
):
|
||||
if candidate and candidate not in candidates:
|
||||
candidates.append(candidate)
|
||||
|
||||
for candidate in candidates:
|
||||
try:
|
||||
checkpoint_info = await lookup(candidate)
|
||||
except Exception as exc:
|
||||
self._logger.debug(
|
||||
"Failed to lookup checkpoint %s while saving widget recipe: %s",
|
||||
candidate,
|
||||
exc,
|
||||
)
|
||||
continue
|
||||
if checkpoint_info:
|
||||
return checkpoint_info
|
||||
|
||||
return None
|
||||
|
||||
def _extract_checkpoint_entry(self, metadata: dict[str, Any]) -> Optional[dict[str, Any]]:
|
||||
"""Pull a checkpoint entry from various metadata locations."""
|
||||
|
||||
|
||||
@@ -159,10 +159,51 @@ class ServiceRegistry:
|
||||
return cls._services[service_name]
|
||||
|
||||
from .model_update_service import ModelUpdateService
|
||||
from .persistent_model_cache import get_persistent_cache
|
||||
from .settings_manager import get_settings_manager
|
||||
|
||||
cache = get_persistent_cache()
|
||||
service = ModelUpdateService(cache.get_database_path())
|
||||
service = ModelUpdateService(settings_manager=get_settings_manager())
|
||||
cls._services[service_name] = service
|
||||
logger.debug(f"Created and registered {service_name}")
|
||||
return service
|
||||
|
||||
@classmethod
|
||||
async def get_downloaded_version_history_service(cls):
|
||||
"""Get or create the downloaded-version history service."""
|
||||
|
||||
service_name = "downloaded_version_history_service"
|
||||
|
||||
if service_name in cls._services:
|
||||
return cls._services[service_name]
|
||||
|
||||
async with cls._get_lock(service_name):
|
||||
if service_name in cls._services:
|
||||
return cls._services[service_name]
|
||||
|
||||
from .downloaded_version_history_service import (
|
||||
DownloadedVersionHistoryService,
|
||||
)
|
||||
|
||||
service = DownloadedVersionHistoryService()
|
||||
cls._services[service_name] = service
|
||||
logger.debug(f"Created and registered {service_name}")
|
||||
return service
|
||||
|
||||
@classmethod
|
||||
async def get_backup_service(cls):
|
||||
"""Get or create the backup service."""
|
||||
|
||||
service_name = "backup_service"
|
||||
|
||||
if service_name in cls._services:
|
||||
return cls._services[service_name]
|
||||
|
||||
async with cls._get_lock(service_name):
|
||||
if service_name in cls._services:
|
||||
return cls._services[service_name]
|
||||
|
||||
from .backup_service import BackupService
|
||||
|
||||
service = await BackupService.get_instance()
|
||||
cls._services[service_name] = service
|
||||
logger.debug(f"Created and registered {service_name}")
|
||||
return service

|
||||
@@ -255,4 +296,4 @@ class ServiceRegistry:
|
||||
"""Clear all registered services - mainly for testing"""
|
||||
cls._services.clear()
|
||||
cls._locks.clear()
|
||||
logger.info("Cleared all registered services")
|
||||
logger.info("Cleared all registered services")
|
||||
|
||||
File diff suppressed because it is too large
@@ -450,9 +450,9 @@ class TagFTSIndex:
|
||||
the tag_name, the result will include a "matched_alias" field.
|
||||
|
||||
Ranking is based on a combination of:
|
||||
1. FTS5 bm25 relevance score (how well the text matches)
|
||||
2. Post count (popularity)
|
||||
3. Exact prefix match boost (tag_name starts with query)
|
||||
1. Exact prefix match boost (tag_name starts with query)
|
||||
2. Post count to preserve expected autocomplete ordering
|
||||
3. FTS5 bm25 relevance score as a deterministic tie-breaker
|
||||
|
||||
Args:
|
||||
query: The search query string.
|
||||
@@ -484,65 +484,17 @@ class TagFTSIndex:
|
||||
with self._lock:
|
||||
conn = self._connect(readonly=True)
|
||||
try:
|
||||
# Build the SQL query with bm25 ranking
|
||||
# FTS5 bm25() returns negative scores, lower is better
|
||||
# We use -bm25() to get higher=better scores
|
||||
# Weights: -100.0 for exact matches, 1.0 for others
|
||||
# Add LOG10(post_count) weighting to boost popular tags
|
||||
# Use CASE to boost tag_name prefix matches above alias matches
|
||||
if categories:
|
||||
placeholders = ",".join("?" * len(categories))
|
||||
sql = f"""
|
||||
SELECT t.tag_name, t.category, t.post_count, t.aliases,
|
||||
CASE
|
||||
WHEN t.tag_name LIKE ? ESCAPE '\\' THEN 1
|
||||
ELSE 0
|
||||
END AS is_tag_name_match,
|
||||
bm25(tag_fts, -100.0, 1.0, 1.0) + LOG10(t.post_count + 1) * 10.0 AS rank_score
|
||||
FROM tag_fts
|
||||
JOIN tags t ON tag_fts.rowid = t.rowid
|
||||
WHERE tag_fts.searchable_text MATCH ?
|
||||
AND t.category IN ({placeholders})
|
||||
ORDER BY is_tag_name_match DESC, rank_score DESC
|
||||
LIMIT ? OFFSET ?
|
||||
"""
|
||||
# Escape special LIKE characters and add wildcard
|
||||
query_escaped = (
|
||||
query_lower.lstrip("/")
|
||||
.replace("\\", "\\\\")
|
||||
.replace("%", "\\%")
|
||||
.replace("_", "\\_")
|
||||
)
|
||||
params = (
|
||||
[query_escaped + "%", fts_query]
|
||||
+ categories
|
||||
+ [limit, offset]
|
||||
)
|
||||
else:
|
||||
sql = """
|
||||
SELECT t.tag_name, t.category, t.post_count, t.aliases,
|
||||
CASE
|
||||
WHEN t.tag_name LIKE ? ESCAPE '\\' THEN 1
|
||||
ELSE 0
|
||||
END AS is_tag_name_match,
|
||||
bm25(tag_fts, -100.0, 1.0, 1.0) + LOG10(t.post_count + 1) * 10.0 AS rank_score
|
||||
FROM tag_fts
|
||||
JOIN tags t ON tag_fts.rowid = t.rowid
|
||||
WHERE tag_fts.searchable_text MATCH ?
|
||||
ORDER BY is_tag_name_match DESC, rank_score DESC
|
||||
LIMIT ? OFFSET ?
|
||||
"""
|
||||
query_escaped = (
|
||||
query_lower.lstrip("/")
|
||||
.replace("\\", "\\\\")
|
||||
.replace("%", "\\%")
|
||||
.replace("_", "\\_")
|
||||
)
|
||||
params = [query_escaped + "%", fts_query, limit, offset]
|
||||
|
||||
sql, params = self._build_search_statement(
|
||||
query_lower=query_lower,
|
||||
fts_query=fts_query,
|
||||
categories=categories,
|
||||
limit=limit,
|
||||
offset=offset,
|
||||
)
|
||||
cursor = conn.execute(sql, params)
|
||||
rows = cursor.fetchall()
|
||||
results = []
|
||||
for row in cursor.fetchall():
|
||||
for row in rows:
|
||||
result = {
|
||||
"tag_name": row[0],
|
||||
"category": row[1],
|
||||
@@ -571,6 +523,62 @@ class TagFTSIndex:
|
||||
logger.debug("Tag FTS search error for query '%s': %s", query, exc)
|
||||
return []
|
||||
|
||||
def _build_search_statement(
|
||||
self,
|
||||
query_lower: str,
|
||||
fts_query: str,
|
||||
categories: Optional[List[int]],
|
||||
limit: int,
|
||||
offset: int,
|
||||
) -> tuple[str, list[object]]:
|
||||
"""Build the SQL statement and params for a tag search."""
|
||||
# Escape special LIKE characters and add wildcard
|
||||
query_escaped = (
|
||||
query_lower.lstrip("/")
|
||||
.replace("\\", "\\\\")
|
||||
.replace("%", "\\%")
|
||||
.replace("_", "\\_")
|
||||
)
|
||||
|
||||
# FTS5 bm25() returns negative scores, lower is better.
|
||||
# We use -bm25() to get higher=better scores, but keep post_count as the
|
||||
# primary sort within tag-name prefix matches so autocomplete ordering
|
||||
# remains aligned with the existing popularity-first behavior.
|
||||
if categories:
|
||||
placeholders = ",".join("?" * len(categories))
|
||||
sql = f"""
|
||||
SELECT t.tag_name, t.category, t.post_count, t.aliases,
|
||||
CASE
|
||||
WHEN t.tag_name LIKE ? ESCAPE '\\' THEN 1
|
||||
ELSE 0
|
||||
END AS is_tag_name_match,
|
||||
bm25(tag_fts, -100.0, 1.0, 1.0) AS rank_score
|
||||
FROM tag_fts
|
||||
CROSS JOIN tags t ON t.rowid = tag_fts.rowid
|
||||
WHERE tag_fts.searchable_text MATCH ?
|
||||
AND t.category IN ({placeholders})
|
||||
ORDER BY is_tag_name_match DESC, t.post_count DESC, rank_score DESC
|
||||
LIMIT ? OFFSET ?
|
||||
"""
|
||||
params = [query_escaped + "%", fts_query] + categories + [limit, offset]
|
||||
else:
|
||||
sql = """
|
||||
SELECT t.tag_name, t.category, t.post_count, t.aliases,
|
||||
CASE
|
||||
WHEN t.tag_name LIKE ? ESCAPE '\\' THEN 1
|
||||
ELSE 0
|
||||
END AS is_tag_name_match,
|
||||
bm25(tag_fts, -100.0, 1.0, 1.0) AS rank_score
|
||||
FROM tag_fts
|
||||
JOIN tags t ON tag_fts.rowid = t.rowid
|
||||
WHERE tag_fts.searchable_text MATCH ?
|
||||
ORDER BY is_tag_name_match DESC, t.post_count DESC, rank_score DESC
|
||||
LIMIT ? OFFSET ?
|
||||
"""
|
||||
params = [query_escaped + "%", fts_query, limit, offset]
|
||||
|
||||
return sql, params
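# Minimal sketch (not part of the diff): building a statement without a category
# filter. `index` is a TagFTSIndex instance; the FTS query string is illustrative.
sql, params = index._build_search_statement(
    query_lower="blue",
    fts_query='"blue"*',
    categories=None,
    limit=10,
    offset=0,
)
# params -> ["blue%", '"blue"*', 10, 0]; rows whose tag_name starts with "blue"
# sort first, then by post_count, with bm25 only breaking ties.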
|
||||
|
||||
def _find_matched_alias(
|
||||
self, query: str, tag_name: str, aliases_str: str
|
||||
) -> Optional[str]:
|
||||
|
||||
428
py/services/wildcard_service.py
Normal file
@@ -0,0 +1,428 @@
|
||||
"""Managed wildcard loading, search, and text expansion."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import random
|
||||
import re
|
||||
from dataclasses import dataclass
|
||||
from typing import Any, Optional
|
||||
|
||||
import yaml
|
||||
|
||||
from ..utils.settings_paths import get_settings_dir
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
_WILDCARD_PATTERN = re.compile(r"__([\w\s.\-+/*\\]+?)__")
|
||||
_OPTION_PATTERN = re.compile(r"{([^{}]*?)}")
|
||||
_TRIGGER_WORD_PATTERN = re.compile(r"^trigger_words\d+$")
|
||||
_WEIGHTED_OPTION_PATTERN = re.compile(r"^\s*([0-9.]+)::")
|
||||
_NUMERIC_PATTERN = re.compile(r"^-?\d+(\.\d+)?$")
|
||||
|
||||
|
||||
def _normalize_wildcard_key(value: str) -> str:
|
||||
return value.replace("\\", "/").strip("/").lower()
|
||||
|
||||
|
||||
def _is_numeric_string(value: str) -> bool:
|
||||
return bool(_NUMERIC_PATTERN.match(value))
|
||||
|
||||
|
||||
def contains_dynamic_syntax(text: str) -> bool:
|
||||
"""Return True when text contains supported wildcard or option syntax."""
|
||||
|
||||
return isinstance(text, str) and bool(
|
||||
_WILDCARD_PATTERN.search(text) or _OPTION_PATTERN.search(text)
|
||||
)
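# Usage sketch: both wildcard (__key__) and option ({a|b}) syntax are detected.
contains_dynamic_syntax("a __color__ dress")      # -> True
contains_dynamic_syntax("{red|blue} background")  # -> True
contains_dynamic_syntax("plain prompt text")      # -> False
contains_dynamic_syntax(None)                     # -> False (non-strings are rejected)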
|
||||
|
||||
|
||||
def get_wildcards_dir(create: bool = False) -> str:
|
||||
"""Return the managed wildcard directory inside the settings folder."""
|
||||
|
||||
settings_dir = get_settings_dir(create=create)
|
||||
wildcards_dir = os.path.join(settings_dir, "wildcards")
|
||||
if create:
|
||||
os.makedirs(wildcards_dir, exist_ok=True)
|
||||
return wildcards_dir
|
||||
|
||||
|
||||
@dataclass(frozen=True)
|
||||
class WildcardEntry:
|
||||
key: str
|
||||
values_count: int
|
||||
|
||||
|
||||
@dataclass(frozen=True)
|
||||
class WildcardMetadata:
|
||||
has_wildcards: bool
|
||||
wildcards_dir: str
|
||||
supported_formats: tuple[str, ...]
|
||||
|
||||
|
||||
class WildcardService:
|
||||
"""Discover wildcard keys and expand wildcard syntax."""
|
||||
|
||||
_instance: Optional["WildcardService"] = None
|
||||
|
||||
def __new__(cls) -> "WildcardService":
|
||||
if cls._instance is None:
|
||||
cls._instance = super().__new__(cls)
|
||||
return cls._instance
|
||||
|
||||
def __init__(self) -> None:
|
||||
if getattr(self, "_initialized", False):
|
||||
return
|
||||
self._initialized = True
|
||||
self._cached_signature: tuple[tuple[str, int, int], ...] | None = None
|
||||
self._wildcard_dict: dict[str, list[str]] = {}
|
||||
|
||||
@classmethod
|
||||
def get_instance(cls) -> "WildcardService":
|
||||
return cls()
|
||||
|
||||
def search_keys(
|
||||
self, search_term: str, limit: int = 20, offset: int = 0
|
||||
) -> list[str]:
|
||||
"""Search wildcard keys for autocomplete."""
|
||||
|
||||
normalized_term = _normalize_wildcard_key(search_term).strip()
|
||||
if not normalized_term:
|
||||
return []
|
||||
|
||||
ranked: list[tuple[int, str]] = []
|
||||
compact_term = normalized_term.replace("/", "")
|
||||
for key in self.get_wildcard_dict().keys():
|
||||
score = self._score_entry(key, normalized_term, compact_term)
|
||||
if score is not None:
|
||||
ranked.append((score, key))
|
||||
|
||||
ranked.sort(key=lambda item: (-item[0], item[1]))
|
||||
keys = [key for _, key in ranked]
|
||||
return keys[offset : offset + limit]
|
||||
|
||||
def expand_text(self, text: str, seed: int | None = None) -> str:
|
||||
"""Expand wildcard and dynamic prompt syntax for a text value."""
|
||||
|
||||
if not isinstance(text, str) or not text:
|
||||
return text
|
||||
|
||||
rng = random.Random(seed) if seed is not None else random.Random()
|
||||
wildcard_dict = self.get_wildcard_dict()
|
||||
if not wildcard_dict:
|
||||
return self._expand_options_only(text, rng)
|
||||
|
||||
current = text
|
||||
remaining_depth = 100
|
||||
|
||||
while remaining_depth > 0:
|
||||
remaining_depth -= 1
|
||||
after_options, options_replaced = self._replace_options(current, rng)
|
||||
current, wildcards_replaced = self._replace_wildcards(
|
||||
after_options, rng, wildcard_dict
|
||||
)
|
||||
if not options_replaced and not wildcards_replaced:
|
||||
break
|
||||
|
||||
return current
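# Usage sketch (hypothetical wildcard files): with wildcards/color.txt containing
# "red", "green" and "blue", the same seed always yields the same expansion.
service = WildcardService.get_instance()
prompt = service.expand_text("a __color__ car, {matte|glossy} finish", seed=42)
# e.g. -> "a green car, matte finish" (the actual choice depends on the seeded RNG)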
|
||||
|
||||
def get_wildcard_dict(self) -> dict[str, list[str]]:
|
||||
signature = self._build_signature()
|
||||
if signature != self._cached_signature:
|
||||
self._wildcard_dict = self._scan_wildcard_dict()
|
||||
self._cached_signature = signature
|
||||
return self._wildcard_dict
|
||||
|
||||
def get_entries(self) -> list[WildcardEntry]:
|
||||
return [
|
||||
WildcardEntry(key=key, values_count=len(values))
|
||||
for key, values in sorted(self.get_wildcard_dict().items())
|
||||
]
|
||||
|
||||
def get_metadata(self, *, create_dir: bool = False) -> WildcardMetadata:
|
||||
wildcards_dir = get_wildcards_dir(create=create_dir)
|
||||
return WildcardMetadata(
|
||||
has_wildcards=bool(self.get_wildcard_dict()),
|
||||
wildcards_dir=wildcards_dir,
|
||||
supported_formats=(".txt", ".yaml", ".yml", ".json"),
|
||||
)
|
||||
|
||||
def _build_signature(self) -> tuple[tuple[str, int, int], ...]:
|
||||
root = get_wildcards_dir(create=False)
|
||||
if not os.path.isdir(root):
|
||||
return ()
|
||||
|
||||
signature: list[tuple[str, int, int]] = []
|
||||
for current_root, _dirs, files in os.walk(root, followlinks=True):
|
||||
for file_name in sorted(files):
|
||||
if not file_name.lower().endswith((".txt", ".yaml", ".yml", ".json")):
|
||||
continue
|
||||
file_path = os.path.join(current_root, file_name)
|
||||
try:
|
||||
stat = os.stat(file_path)
|
||||
except OSError:
|
||||
continue
|
||||
rel_path = os.path.relpath(file_path, root).replace("\\", "/")
|
||||
signature.append((rel_path, int(stat.st_mtime_ns), int(stat.st_size)))
|
||||
signature.sort()
|
||||
return tuple(signature)
|
||||
|
||||
def _scan_wildcard_dict(self) -> dict[str, list[str]]:
|
||||
root = get_wildcards_dir(create=False)
|
||||
if not os.path.isdir(root):
|
||||
return {}
|
||||
|
||||
collected: dict[str, list[str]] = {}
|
||||
for current_root, _dirs, files in os.walk(root, followlinks=True):
|
||||
for file_name in sorted(files):
|
||||
file_path = os.path.join(current_root, file_name)
|
||||
lower_name = file_name.lower()
|
||||
try:
|
||||
if lower_name.endswith(".txt"):
|
||||
rel_path = os.path.relpath(file_path, root)
|
||||
key = _normalize_wildcard_key(os.path.splitext(rel_path)[0])
|
||||
values = self._read_txt(file_path)
|
||||
if values:
|
||||
collected[key] = values
|
||||
elif lower_name.endswith((".yaml", ".yml")):
|
||||
payload = self._read_yaml(file_path)
|
||||
self._merge_nested_entries(collected, payload)
|
||||
elif lower_name.endswith(".json"):
|
||||
payload = self._read_json(file_path)
|
||||
self._merge_nested_entries(collected, payload)
|
||||
except Exception as exc: # pragma: no cover - defensive logging
|
||||
logger.warning("Failed to load wildcard file %s: %s", file_path, exc)
|
||||
|
||||
return collected
|
||||
|
||||
def _read_txt(self, file_path: str) -> list[str]:
|
||||
try:
|
||||
with open(file_path, "r", encoding="utf-8", errors="ignore") as handle:
|
||||
return [line.strip() for line in handle.read().splitlines() if line.strip()]
|
||||
except OSError as exc:
|
||||
logger.warning("Failed to read wildcard txt file %s: %s", file_path, exc)
|
||||
return []
|
||||
|
||||
def _read_yaml(self, file_path: str) -> Any:
|
||||
with open(file_path, "r", encoding="utf-8") as handle:
|
||||
return yaml.safe_load(handle) or {}
|
||||
|
||||
def _read_json(self, file_path: str) -> Any:
|
||||
with open(file_path, "r", encoding="utf-8") as handle:
|
||||
return json.load(handle)
|
||||
|
||||
def _merge_nested_entries(
|
||||
self, collected: dict[str, list[str]], payload: Any
|
||||
) -> None:
|
||||
for key, values in self._flatten_payload(payload):
|
||||
collected[key] = values
|
||||
|
||||
def _flatten_payload(
|
||||
self, payload: Any, prefix: str = ""
|
||||
) -> list[tuple[str, list[str]]]:
|
||||
entries: list[tuple[str, list[str]]] = []
|
||||
|
||||
if isinstance(payload, dict):
|
||||
for key, value in payload.items():
|
||||
next_prefix = f"{prefix}/{key}" if prefix else str(key)
|
||||
entries.extend(self._flatten_payload(value, next_prefix))
|
||||
return entries
|
||||
|
||||
if isinstance(payload, list):
|
||||
normalized_prefix = _normalize_wildcard_key(prefix)
|
||||
values = [value.strip() for value in payload if isinstance(value, str) and value.strip()]
|
||||
if normalized_prefix and values:
|
||||
entries.append((normalized_prefix, values))
|
||||
return entries
|
||||
|
||||
return entries
|
||||
|
||||
def _score_entry(
|
||||
self, key: str, normalized_term: str, compact_term: str
|
||||
) -> int | None:
|
||||
key_compact = key.replace("/", "")
|
||||
if key == normalized_term:
|
||||
return 5000
|
||||
if key.startswith(normalized_term):
|
||||
return 4000
|
||||
if f"/{normalized_term}" in key:
|
||||
return 3500
|
||||
if normalized_term in key:
|
||||
return 3000
|
||||
if compact_term and key_compact.startswith(compact_term):
|
||||
return 2500
|
||||
if compact_term and compact_term in key_compact:
|
||||
return 2000
|
||||
return None
|
||||
|
||||
def _expand_options_only(self, text: str, rng: random.Random) -> str:
|
||||
current = text
|
||||
remaining_depth = 100
|
||||
while remaining_depth > 0:
|
||||
remaining_depth -= 1
|
||||
current, replaced = self._replace_options(current, rng)
|
||||
if not replaced:
|
||||
break
|
||||
return current
|
||||
|
||||
def _replace_options(
|
||||
self, text: str, rng: random.Random
|
||||
) -> tuple[str, bool]:
|
||||
replaced_any = False
|
||||
|
||||
def replace_option(match: re.Match[str]) -> str:
|
||||
nonlocal replaced_any
|
||||
replacement = self._resolve_option_group(match.group(1), rng)
|
||||
replaced_any = True
|
||||
return replacement
|
||||
|
||||
return _OPTION_PATTERN.sub(replace_option, text), replaced_any
|
||||
|
||||
def _resolve_option_group(self, group_text: str, rng: random.Random) -> str:
|
||||
options = group_text.split("|")
|
||||
multi_select_pattern = options[0].split("$$")
|
||||
select_range: tuple[int, int] | None = None
|
||||
select_separator = " "
|
||||
|
||||
if len(multi_select_pattern) > 1:
|
||||
count_spec = multi_select_pattern[0]
|
||||
range_match = re.match(r"(\d+)(-(\d+))?$", count_spec)
|
||||
shorthand_match = re.match(r"-(\d+)$", count_spec)
|
||||
if range_match:
|
||||
start_text = range_match.group(1)
|
||||
end_text = range_match.group(3)
|
||||
if end_text is not None and _is_numeric_string(start_text) and _is_numeric_string(end_text):
|
||||
select_range = (int(start_text), int(end_text))
|
||||
elif _is_numeric_string(start_text):
|
||||
value = int(start_text)
|
||||
select_range = (value, value)
|
||||
elif shorthand_match:
|
||||
end_text = shorthand_match.group(1)
|
||||
if _is_numeric_string(end_text):
|
||||
select_range = (1, int(end_text))
|
||||
|
||||
if select_range is not None and len(multi_select_pattern) == 2:
|
||||
options[0] = multi_select_pattern[1]
|
||||
elif select_range is not None and len(multi_select_pattern) >= 3:
|
||||
select_separator = multi_select_pattern[1]
|
||||
options[0] = multi_select_pattern[2]
|
||||
|
||||
weighted_options: list[tuple[float, str]] = []
|
||||
for option in options:
|
||||
weight = 1.0
|
||||
parts = option.split("::", 1)
|
||||
if len(parts) == 2 and _is_numeric_string(parts[0].strip()):
|
||||
weight = float(parts[0].strip())
|
||||
weighted_options.append((weight, option))
|
||||
|
||||
if select_range is None:
|
||||
selection_count = 1
|
||||
else:
|
||||
selection_count = rng.randint(select_range[0], select_range[1])
|
||||
|
||||
if selection_count <= 1:
|
||||
return self._strip_weight_prefix(self._weighted_choice(weighted_options, rng))
|
||||
|
||||
selection_count = min(selection_count, len(weighted_options))
|
||||
selected: list[str] = []
|
||||
used_indexes: set[int] = set()
|
||||
while len(selected) < selection_count:
|
||||
picked_index = self._weighted_choice_index(weighted_options, rng)
|
||||
if picked_index in used_indexes:
|
||||
if len(used_indexes) == len(weighted_options):
|
||||
break
|
||||
continue
|
||||
used_indexes.add(picked_index)
|
||||
selected.append(
|
||||
self._strip_weight_prefix(weighted_options[picked_index][1])
|
||||
)
|
||||
|
||||
return select_separator.join(selected)
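# Worked examples of the option syntax handled above (outputs depend on the RNG):
#   "{red|blue|green}"           -> one of the three options
#   "{2::red|blue}"              -> "red" is picked twice as often as "blue"
#   "{2$$red|blue|green}"        -> two distinct options joined with a space
#   "{1-2$$, $$red|blue|green}"  -> one or two distinct options joined with ", "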
|
||||
|
||||
def _weighted_choice(
|
||||
self, weighted_options: list[tuple[float, str]], rng: random.Random
|
||||
) -> str:
|
||||
return weighted_options[self._weighted_choice_index(weighted_options, rng)][1]
|
||||
|
||||
def _weighted_choice_index(
|
||||
self, weighted_options: list[tuple[float, str]], rng: random.Random
|
||||
) -> int:
|
||||
total_weight = sum(max(weight, 0.0) for weight, _value in weighted_options)
|
||||
if total_weight <= 0:
|
||||
return rng.randrange(len(weighted_options))
|
||||
|
||||
threshold = rng.uniform(0, total_weight)
|
||||
cumulative = 0.0
|
||||
for index, (weight, _value) in enumerate(weighted_options):
|
||||
cumulative += max(weight, 0.0)
|
||||
if threshold <= cumulative:
|
||||
return index
|
||||
return len(weighted_options) - 1
|
||||
|
||||
def _strip_weight_prefix(self, value: str) -> str:
|
||||
return _WEIGHTED_OPTION_PATTERN.sub("", value, count=1)
|
||||
|
||||
def _replace_wildcards(
|
||||
self,
|
||||
text: str,
|
||||
rng: random.Random,
|
||||
wildcard_dict: dict[str, list[str]],
|
||||
) -> tuple[str, bool]:
|
||||
replaced_any = False
|
||||
|
||||
def replace_match(match: re.Match[str]) -> str:
|
||||
nonlocal replaced_any
|
||||
replacement = self._resolve_wildcard_match(match.group(1), rng, wildcard_dict)
|
||||
if replacement is None:
|
||||
return match.group(0)
|
||||
replaced_any = True
|
||||
return replacement
|
||||
|
||||
return _WILDCARD_PATTERN.sub(replace_match, text), replaced_any
|
||||
|
||||
def _resolve_wildcard_match(
|
||||
self,
|
||||
raw_key: str,
|
||||
rng: random.Random,
|
||||
wildcard_dict: dict[str, list[str]],
|
||||
) -> str | None:
|
||||
keyword = _normalize_wildcard_key(raw_key)
|
||||
if keyword in wildcard_dict:
|
||||
return rng.choice(wildcard_dict[keyword])
|
||||
|
||||
if "*" in keyword:
|
||||
regex_pattern = keyword.replace("*", ".*").replace("+", r"\+")
|
||||
compiled = re.compile(f"^{regex_pattern}$")
|
||||
aggregated: list[str] = []
|
||||
for key, values in wildcard_dict.items():
|
||||
if compiled.match(key):
|
||||
aggregated.extend(values)
|
||||
if aggregated:
|
||||
return rng.choice(aggregated)
|
||||
|
||||
if "/" not in keyword:
|
||||
fallback_keyword = _normalize_wildcard_key(f"*/{keyword}")
|
||||
if fallback_keyword != keyword:
|
||||
return self._resolve_wildcard_match(fallback_keyword, rng, wildcard_dict)
|
||||
|
||||
return None
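# Resolution order illustrated (hypothetical keys "clothes/dress" and "clothes/shirt"):
#   __clothes/dress__  -> exact key lookup
#   __clothes/*__      -> glob: pools values from every key matching the pattern
#   __dress__          -> no exact key and no "/", so it is retried as __*/dress__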
|
||||
|
||||
|
||||
def is_trigger_words_input(name: str) -> bool:
|
||||
return bool(_TRIGGER_WORD_PATTERN.match(name))
|
||||
|
||||
|
||||
def get_wildcard_service() -> WildcardService:
|
||||
return WildcardService.get_instance()
|
||||
|
||||
|
||||
__all__ = [
|
||||
"WildcardService",
|
||||
"WildcardMetadata",
|
||||
"contains_dynamic_syntax",
|
||||
"get_wildcard_service",
|
||||
"get_wildcards_dir",
|
||||
"is_trigger_words_input",
|
||||
]
|
||||
@@ -11,6 +11,8 @@ Target structure:
|
||||
│ └── symlink_map.json
|
||||
├── model/
|
||||
│ └── {library_name}.sqlite
|
||||
├── model_update/
|
||||
│ └── {library_name}.sqlite
|
||||
├── recipe/
|
||||
│ └── {library_name}.sqlite
|
||||
└── fts/
|
||||
@@ -36,6 +38,7 @@ class CacheType(Enum):
|
||||
"""Types of cache files managed by the cache path resolver."""
|
||||
|
||||
MODEL = "model"
|
||||
MODEL_UPDATE = "model_update"
|
||||
RECIPE = "recipe"
|
||||
RECIPE_FTS = "recipe_fts"
|
||||
TAG_FTS = "tag_fts"
|
||||
@@ -45,6 +48,7 @@ class CacheType(Enum):
|
||||
# Subdirectory structure for each cache type
|
||||
_CACHE_SUBDIRS = {
|
||||
CacheType.MODEL: "model",
|
||||
CacheType.MODEL_UPDATE: "model_update",
|
||||
CacheType.RECIPE: "recipe",
|
||||
CacheType.RECIPE_FTS: "fts",
|
||||
CacheType.TAG_FTS: "fts",
|
||||
@@ -54,6 +58,7 @@ _CACHE_SUBDIRS = {
|
||||
# Filename patterns for each cache type
|
||||
_CACHE_FILENAMES = {
|
||||
CacheType.MODEL: "{library_name}.sqlite",
|
||||
CacheType.MODEL_UPDATE: "{library_name}.sqlite",
|
||||
CacheType.RECIPE: "{library_name}.sqlite",
|
||||
CacheType.RECIPE_FTS: "recipe_fts.sqlite",
|
||||
CacheType.TAG_FTS: "tag_fts.sqlite",
|
||||
|
||||
@@ -2,10 +2,13 @@
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import re
|
||||
from typing import Any, Dict, Iterable, Mapping, Sequence
|
||||
from urllib.parse import urlparse, urlunparse
|
||||
from urllib.parse import parse_qs, urlparse, urlunparse
|
||||
|
||||
|
||||
_SUPPORTED_CIVITAI_PAGE_HOSTS = frozenset({"civitai.com", "civitai.red"})
|
||||
DEFAULT_CIVITAI_PAGE_HOST = "civitai.com"
|
||||
_DEFAULT_ALLOW_COMMERCIAL_USE: Sequence[str] = ("Sell",)
|
||||
_LICENSE_DEFAULTS: Dict[str, Any] = {
|
||||
"allowNoCredit": True,
|
||||
@@ -17,12 +20,141 @@ _COMMERCIAL_ALLOWED_VALUES = {"sell", "rent", "rentcivit", "image"}
|
||||
_COMMERCIAL_SHIFT = 1
|
||||
|
||||
|
||||
def is_supported_civitai_page_host(hostname: str | None) -> bool:
|
||||
"""Return whether the hostname is a supported Civitai page domain."""
|
||||
|
||||
if not hostname:
|
||||
return False
|
||||
return hostname.lower() in _SUPPORTED_CIVITAI_PAGE_HOSTS
|
||||
|
||||
|
||||
def normalize_civitai_page_host(hostname: str | None) -> str:
|
||||
"""Return a supported Civitai page host or the default host."""
|
||||
|
||||
if not isinstance(hostname, str):
|
||||
return DEFAULT_CIVITAI_PAGE_HOST
|
||||
|
||||
normalized = hostname.strip().lower()
|
||||
if is_supported_civitai_page_host(normalized):
|
||||
return normalized
|
||||
|
||||
return DEFAULT_CIVITAI_PAGE_HOST
|
||||
|
||||
|
||||
def build_civitai_model_page_url(
|
||||
model_id: str | int | None,
|
||||
version_id: str | int | None = None,
|
||||
*,
|
||||
host: str | None = None,
|
||||
) -> str | None:
|
||||
"""Build a Civitai model or model-version page URL."""
|
||||
|
||||
normalized_host = normalize_civitai_page_host(host)
|
||||
normalized_model_id = str(model_id).strip() if model_id is not None else ""
|
||||
normalized_version_id = str(version_id).strip() if version_id is not None else ""
|
||||
|
||||
if normalized_model_id:
|
||||
path = f"/models/{normalized_model_id}"
|
||||
query = f"modelVersionId={normalized_version_id}" if normalized_version_id else ""
|
||||
return urlunparse(("https", normalized_host, path, "", query, ""))
|
||||
|
||||
if normalized_version_id:
|
||||
return urlunparse(
|
||||
("https", normalized_host, f"/model-versions/{normalized_version_id}", "", "", "")
|
||||
)
|
||||
|
||||
return None
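# Usage sketch (illustrative ids):
build_civitai_model_page_url(12345, 67890)
# -> "https://civitai.com/models/12345?modelVersionId=67890"
build_civitai_model_page_url(None, 67890, host="civitai.red")
# -> "https://civitai.red/model-versions/67890"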
|
||||
|
||||
|
||||
def _parse_supported_civitai_page_url(url: str | None):
|
||||
if not url:
|
||||
return None
|
||||
|
||||
try:
|
||||
parsed = urlparse(url)
|
||||
except ValueError:
|
||||
return None
|
||||
|
||||
if parsed.scheme not in {"http", "https"}:
|
||||
return None
|
||||
|
||||
if not is_supported_civitai_page_host(parsed.hostname):
|
||||
return None
|
||||
|
||||
return parsed
|
||||
|
||||
|
||||
def extract_civitai_model_url_parts(
|
||||
url: str | None,
|
||||
) -> tuple[str | None, str | None]:
|
||||
"""Extract model and version identifiers from a supported Civitai model URL."""
|
||||
|
||||
parsed = _parse_supported_civitai_page_url(url)
|
||||
if parsed is None:
|
||||
return None, None
|
||||
|
||||
path_match = re.search(r"/models/(\d+)", parsed.path)
|
||||
if not path_match:
|
||||
return None, None
|
||||
|
||||
model_id = path_match.group(1)
|
||||
|
||||
query_params = parse_qs(parsed.query)
|
||||
version_values = query_params.get("modelVersionId") or []
|
||||
version_id = version_values[0] if version_values else None
|
||||
return model_id, version_id
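# Usage sketch: round-trips the URL built above.
extract_civitai_model_url_parts("https://civitai.com/models/12345?modelVersionId=67890")
# -> ("12345", "67890")
extract_civitai_model_url_parts("https://example.com/models/12345")
# -> (None, None), since the host is not a supported Civitai page domain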
|
||||
|
||||
|
||||
def extract_civitai_image_id(url: str | None) -> str | None:
|
||||
"""Extract the image identifier from a supported Civitai image page URL."""
|
||||
|
||||
parsed = _parse_supported_civitai_page_url(url)
|
||||
if parsed is None:
|
||||
return None
|
||||
|
||||
path_match = re.search(r"/images/(\d+)", parsed.path)
|
||||
if not path_match:
|
||||
return None
|
||||
|
||||
return path_match.group(1)
|
||||
|
||||
|
||||
def normalize_civitai_download_url(url: str | None) -> str | None:
|
||||
"""Rewrite Civitai download URLs to the canonical authenticated host."""
|
||||
|
||||
if not url:
|
||||
return url
|
||||
|
||||
try:
|
||||
parsed = urlparse(url)
|
||||
except ValueError:
|
||||
return url
|
||||
|
||||
hostname = parsed.hostname.lower() if parsed.hostname else None
|
||||
if hostname != "civitai.red" or not parsed.path.startswith("/api/download/"):
|
||||
return url
|
||||
|
||||
return urlunparse(parsed._replace(netloc="civitai.com"))
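# Usage sketch: only civitai.red download URLs are rewritten; everything else passes through.
normalize_civitai_download_url("https://civitai.red/api/download/models/67890")
# -> "https://civitai.com/api/download/models/67890"
normalize_civitai_download_url("https://civitai.red/models/12345")
# -> returned unchanged (not a download path)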
|
||||
|
||||
|
||||
def extract_civitai_page_host(url: str | None) -> str | None:
|
||||
"""Extract the supported Civitai page host from a URL."""
|
||||
|
||||
parsed = _parse_supported_civitai_page_url(url)
|
||||
if parsed is None:
|
||||
return None
|
||||
|
||||
return parsed.hostname.lower() if parsed.hostname else None
|
||||
|
||||
|
||||
def _normalize_commercial_values(value: Any) -> Sequence[str]:
|
||||
"""Return a normalized list of commercial permissions preserving source values."""
|
||||
|
||||
def _split_aggregate(value_str: str) -> list[str]:
|
||||
stripped = value_str.strip()
|
||||
looks_aggregate = "," in stripped or (stripped.startswith("{") and stripped.endswith("}"))
|
||||
looks_aggregate = "," in stripped or (
|
||||
stripped.startswith("{") and stripped.endswith("}")
|
||||
)
|
||||
if not looks_aggregate:
|
||||
return [value_str]
|
||||
|
||||
@@ -141,14 +273,18 @@ def build_license_flags(payload: Mapping[str, Any] | None) -> int:
|
||||
return flags
|
||||
|
||||
|
||||
def resolve_license_info(model_data: Mapping[str, Any] | None) -> tuple[Dict[str, Any], int]:
|
||||
def resolve_license_info(
|
||||
model_data: Mapping[str, Any] | None,
|
||||
) -> tuple[Dict[str, Any], int]:
|
||||
"""Return normalized license payload and its encoded bitset."""
|
||||
|
||||
payload = resolve_license_payload(model_data)
|
||||
return payload, build_license_flags(payload)
|
||||
|
||||
|
||||
def rewrite_preview_url(source_url: str | None, media_type: str | None = None) -> tuple[str | None, bool]:
|
||||
def rewrite_preview_url(
|
||||
source_url: str | None, media_type: str | None = None
|
||||
) -> tuple[str | None, bool]:
|
||||
"""Rewrite Civitai preview URLs to use optimized renditions.
|
||||
|
||||
Args:
|
||||
@@ -168,7 +304,12 @@ def rewrite_preview_url(source_url: str | None, media_type: str | None = None) -
|
||||
except ValueError:
|
||||
return source_url, False
|
||||
|
||||
if parsed.netloc.lower() != "image.civitai.com":
|
||||
hostname = parsed.hostname
|
||||
if hostname is None:
|
||||
return source_url, False
|
||||
|
||||
hostname = hostname.lower()
|
||||
if hostname == "civitai.com" or not hostname.endswith(".civitai.com"):
|
||||
return source_url, False
|
||||
|
||||
replacement = "/width=450,optimized=true"
|
||||
@@ -188,6 +329,10 @@ def rewrite_preview_url(source_url: str | None, media_type: str | None = None) -
|
||||
|
||||
__all__ = [
|
||||
"build_license_flags",
|
||||
"extract_civitai_image_id",
|
||||
"extract_civitai_page_host",
|
||||
"extract_civitai_model_url_parts",
|
||||
"is_supported_civitai_page_host",
|
||||
"resolve_license_payload",
|
||||
"resolve_license_info",
|
||||
"rewrite_preview_url",
|
||||
|
||||
@@ -100,7 +100,9 @@ DEFAULT_PRIORITY_TAG_CONFIG = {
|
||||
# These model types are incorrectly labeled as "checkpoint" by CivitAI but are actually diffusion models
|
||||
DIFFUSION_MODEL_BASE_MODELS = frozenset(
|
||||
[
|
||||
"Anima",
|
||||
"ZImageTurbo",
|
||||
"ZImageBase",
|
||||
"Wan Video 1.3B t2v",
|
||||
"Wan Video 14B t2v",
|
||||
"Wan Video 14B i2v 480p",
|
||||
@@ -110,6 +112,71 @@ DIFFUSION_MODEL_BASE_MODELS = frozenset(
|
||||
"Wan Video 2.2 T2V-A14B",
|
||||
"Wan Video 2.5 T2V",
|
||||
"Wan Video 2.5 I2V",
|
||||
"CogVideoX",
|
||||
"Mochi",
|
||||
"Qwen",
|
||||
]
|
||||
)
|
||||
|
||||
# Supported baseModel values for download exclusion settings.
|
||||
# Keep this aligned with static/js/utils/constants.js, excluding the generic "Other" value.
|
||||
SUPPORTED_DOWNLOAD_SKIP_BASE_MODELS = frozenset(
|
||||
[
|
||||
"SD 1.4",
|
||||
"SD 1.5",
|
||||
"SD 1.5 LCM",
|
||||
"SD 1.5 Hyper",
|
||||
"SD 2.0",
|
||||
"SD 2.1",
|
||||
"SD 3",
|
||||
"SD 3.5",
|
||||
"SD 3.5 Medium",
|
||||
"SD 3.5 Large",
|
||||
"SD 3.5 Large Turbo",
|
||||
"SDXL 1.0",
|
||||
"SDXL Lightning",
|
||||
"SDXL Hyper",
|
||||
"Flux.1 D",
|
||||
"Flux.1 S",
|
||||
"Flux.1 Krea",
|
||||
"Flux.1 Kontext",
|
||||
"Flux.2 D",
|
||||
"Flux.2 Klein 9B",
|
||||
"Flux.2 Klein 9B-base",
|
||||
"Flux.2 Klein 4B",
|
||||
"Flux.2 Klein 4B-base",
|
||||
"AuraFlow",
|
||||
"Chroma",
|
||||
"PixArt a",
|
||||
"PixArt E",
|
||||
"Hunyuan 1",
|
||||
"Lumina",
|
||||
"Kolors",
|
||||
"NoobAI",
|
||||
"Illustrious",
|
||||
"Pony",
|
||||
"Pony V7",
|
||||
"HiDream",
|
||||
"Qwen",
|
||||
"ZImageTurbo",
|
||||
"ZImageBase",
|
||||
"SVD",
|
||||
"LTXV",
|
||||
"LTXV2",
|
||||
"LTXV 2.3",
|
||||
"CogVideoX",
|
||||
"Mochi",
|
||||
"Wan Video",
|
||||
"Wan Video 1.3B t2v",
|
||||
"Wan Video 14B t2v",
|
||||
"Wan Video 14B i2v 480p",
|
||||
"Wan Video 14B i2v 720p",
|
||||
"Wan Video 2.2 TI2V-5B",
|
||||
"Wan Video 2.2 T2V-A14B",
|
||||
"Wan Video 2.2 I2V-A14B",
|
||||
"Wan Video 2.5 T2V",
|
||||
"Wan Video 2.5 I2V",
|
||||
"Hunyuan Video",
|
||||
"Anima",
|
||||
]
|
||||
)
|
||||
|
||||
@@ -1,17 +1,81 @@
|
||||
import logging
|
||||
import os
|
||||
import sys
|
||||
import re
|
||||
import subprocess
|
||||
import sys
|
||||
from urllib.parse import quote
|
||||
|
||||
from aiohttp import web
|
||||
from ..services.settings_manager import get_settings_manager
|
||||
from ..utils.example_images_paths import (
|
||||
get_model_folder,
|
||||
get_model_relative_path,
|
||||
)
|
||||
from ..utils.constants import SUPPORTED_MEDIA_EXTENSIONS
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
_WINDOWS_DRIVE_PATTERN = re.compile(r"^[A-Za-z]:/")
|
||||
|
||||
|
||||
def _is_within_root(path: str, root: str) -> bool:
|
||||
try:
|
||||
return os.path.commonpath([os.path.abspath(path), os.path.abspath(root)]) == os.path.abspath(root)
|
||||
except ValueError:
|
||||
return False
|
||||
|
||||
|
||||
def _join_local_example_path(local_root: str, relative_path: str) -> str:
|
||||
separator = "\\" if "\\" in local_root and "/" not in local_root else "/"
|
||||
normalized_root = local_root.rstrip("\\/")
|
||||
normalized_relative = relative_path.replace("/", separator)
|
||||
if not normalized_root:
|
||||
return normalized_relative
|
||||
return f"{normalized_root}{separator}{normalized_relative}"
|
||||
|
||||
|
||||
def _build_file_uri(path: str) -> str:
|
||||
normalized = path.replace("\\", "/")
|
||||
if _WINDOWS_DRIVE_PATTERN.match(normalized):
|
||||
return f"file:///{quote(normalized, safe='/:')}"
|
||||
if normalized.startswith("/"):
|
||||
return f"file://{quote(normalized, safe='/:')}"
|
||||
return f"file:///{quote(normalized.lstrip('/'), safe='/:')}"
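# Illustrative conversions (hypothetical paths):
_build_file_uri("C:\\loras\\examples\\model one")  # -> "file:///C:/loras/examples/model%20one"
_build_file_uri("/srv/loras/examples/model one")   # -> "file:///srv/loras/examples/model%20one"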
|
||||
|
||||
|
||||
def _render_open_uri_template(template: str, local_path: str, relative_path: str) -> str:
|
||||
file_uri = _build_file_uri(local_path)
|
||||
replacements = {
|
||||
"{{local_path}}": local_path,
|
||||
"{{encoded_local_path}}": quote(local_path, safe=""),
|
||||
"{{relative_path}}": relative_path,
|
||||
"{{encoded_relative_path}}": quote(relative_path, safe=""),
|
||||
"{{file_uri}}": file_uri,
|
||||
"{{encoded_file_uri}}": quote(file_uri, safe=""),
|
||||
}
|
||||
|
||||
rendered = template
|
||||
for placeholder, value in replacements.items():
|
||||
rendered = rendered.replace(placeholder, value)
|
||||
return rendered
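# Usage sketch with a hypothetical custom-protocol template:
_render_open_uri_template(
    "myapp://reveal?path={{encoded_local_path}}",
    local_path="D:\\examples\\abc123",
    relative_path="abc123",
)
# -> "myapp://reveal?path=D%3A%5Cexamples%5Cabc123"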
|
||||
|
||||
|
||||
def _open_system_folder(model_folder: str) -> dict[str, object]:
|
||||
if os.name == "nt": # Windows
|
||||
os.startfile(model_folder)
|
||||
elif os.name == "posix": # macOS and Linux
|
||||
if sys.platform == "darwin": # macOS
|
||||
subprocess.Popen(["open", model_folder])
|
||||
else: # Linux
|
||||
subprocess.Popen(["xdg-open", model_folder])
|
||||
|
||||
return {
|
||||
"success": True,
|
||||
"message": f"Opened example images folder for {model_folder}",
|
||||
"path": model_folder,
|
||||
}
|
||||
|
||||
|
||||
class ExampleImagesFileManager:
|
||||
"""Manages access and operations for example image files"""
|
||||
|
||||
@@ -54,7 +118,7 @@ class ExampleImagesFileManager:
|
||||
}, status=500)
|
||||
|
||||
# Path validation: ensure model_folder is under example_images_path
|
||||
if not model_folder.startswith(os.path.abspath(example_images_path)):
|
||||
if not _is_within_root(model_folder, example_images_path):
|
||||
return web.json_response({
|
||||
'success': False,
|
||||
'error': 'Invalid model folder path'
|
||||
@@ -66,20 +130,40 @@ class ExampleImagesFileManager:
|
||||
'success': False,
|
||||
'error': 'No example images found for this model. Download example images first.'
|
||||
}, status=404)
|
||||
|
||||
# Open folder in file explorer
|
||||
if os.name == 'nt': # Windows
|
||||
os.startfile(model_folder)
|
||||
elif os.name == 'posix': # macOS and Linux
|
||||
if sys.platform == 'darwin': # macOS
|
||||
subprocess.Popen(['open', model_folder])
|
||||
else: # Linux
|
||||
subprocess.Popen(['xdg-open', model_folder])
|
||||
|
||||
return web.json_response({
|
||||
'success': True,
|
||||
'message': f'Opened example images folder for model {model_hash}'
|
||||
})
|
||||
|
||||
root_path = os.path.abspath(example_images_path)
|
||||
relative_path = os.path.relpath(model_folder, root_path).replace("\\", "/")
|
||||
open_mode = settings_manager.get("example_images_open_mode") or "system"
|
||||
|
||||
if open_mode == "clipboard":
|
||||
local_root = settings_manager.get("example_images_local_root") or root_path
|
||||
local_path = _join_local_example_path(local_root, relative_path)
|
||||
return web.json_response({
|
||||
'success': True,
|
||||
'mode': 'clipboard',
|
||||
'path': local_path,
|
||||
'relative_path': relative_path,
|
||||
})
|
||||
|
||||
if open_mode == "uri_template":
|
||||
local_root = settings_manager.get("example_images_local_root") or root_path
|
||||
uri_template = settings_manager.get("example_images_open_uri_template") or ""
|
||||
if not uri_template.strip():
|
||||
return web.json_response({
|
||||
'success': False,
|
||||
'error': 'No example image open URI template configured.'
|
||||
}, status=400)
|
||||
|
||||
local_path = _join_local_example_path(local_root, relative_path)
|
||||
return web.json_response({
|
||||
'success': True,
|
||||
'mode': 'uri',
|
||||
'path': local_path,
|
||||
'relative_path': relative_path,
|
||||
'uri': _render_open_uri_template(uri_template, local_path, relative_path),
|
||||
})
|
||||
|
||||
return web.json_response(_open_system_folder(model_folder))
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to open example images folder: {e}", exc_info=True)
|
||||
@@ -143,7 +227,7 @@ class ExampleImagesFileManager:
|
||||
file_ext = os.path.splitext(file)[1].lower()
|
||||
if (file_ext in SUPPORTED_MEDIA_EXTENSIONS['images'] or
|
||||
file_ext in SUPPORTED_MEDIA_EXTENSIONS['videos']):
|
||||
relative_path = get_model_relative_path(model_hash)
|
||||
relative_path = os.path.relpath(model_folder, os.path.abspath(example_images_path)).replace("\\", "/")
|
||||
files.append({
|
||||
'name': file,
|
||||
'path': f'/example_images_static/{relative_path}/{file}',
|
||||
@@ -227,4 +311,4 @@ class ExampleImagesFileManager:
|
||||
return web.json_response({
|
||||
'has_images': False,
|
||||
'error': str(e)
|
||||
})
|
||||
})
|
||||
|
||||
@@ -1,15 +1,142 @@
|
||||
import piexif
|
||||
import json
|
||||
import logging
|
||||
from typing import Optional
|
||||
from io import BytesIO
|
||||
import os
|
||||
from io import BytesIO
|
||||
from typing import Any, Optional
|
||||
|
||||
import piexif
|
||||
from PIL import Image, PngImagePlugin
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class ExifUtils:
|
||||
"""Utility functions for working with EXIF data in images"""
|
||||
|
||||
@staticmethod
|
||||
def _decode_user_comment(user_comment: Any) -> Optional[str]:
|
||||
if user_comment is None:
|
||||
return None
|
||||
if isinstance(user_comment, bytes):
|
||||
if user_comment.startswith(b"UNICODE\0"):
|
||||
return user_comment[8:].decode("utf-16be", errors="ignore")
|
||||
return user_comment.decode("utf-8", errors="ignore")
|
||||
if isinstance(user_comment, str):
|
||||
return user_comment
|
||||
return str(user_comment)
|
||||
|
||||
@staticmethod
|
||||
def _decode_exif_text(value: Any) -> Optional[str]:
|
||||
if value is None:
|
||||
return None
|
||||
if isinstance(value, bytes):
|
||||
return value.decode("utf-8", errors="ignore")
|
||||
if isinstance(value, str):
|
||||
return value
|
||||
return str(value)
|
||||
|
||||
@staticmethod
|
||||
def _load_structured_metadata(image_path: str) -> dict[str, Optional[str]]:
|
||||
metadata = {
|
||||
"parameters": None,
|
||||
"prompt": None,
|
||||
"workflow": None,
|
||||
"comment": None,
|
||||
}
|
||||
|
||||
with Image.open(image_path) as img:
|
||||
info = getattr(img, "info", {}) or {}
|
||||
|
||||
if "parameters" in info:
|
||||
metadata["parameters"] = info["parameters"]
|
||||
if "prompt" in info:
|
||||
metadata["prompt"] = info["prompt"]
|
||||
if "workflow" in info:
|
||||
metadata["workflow"] = info["workflow"]
|
||||
|
||||
if img.format not in ["JPEG", "TIFF", "WEBP"]:
|
||||
exif = img.getexif()
|
||||
if exif and piexif.ExifIFD.UserComment in exif:
|
||||
metadata["comment"] = ExifUtils._decode_user_comment(
|
||||
exif[piexif.ExifIFD.UserComment]
|
||||
)
|
||||
|
||||
try:
|
||||
exif_dict = piexif.load(image_path)
|
||||
except Exception as e:
|
||||
logger.debug(f"Error loading EXIF data: {e}")
|
||||
exif_dict = {}
|
||||
|
||||
if piexif.ExifIFD.UserComment in exif_dict.get("Exif", {}):
|
||||
metadata["comment"] = ExifUtils._decode_user_comment(
|
||||
exif_dict["Exif"][piexif.ExifIFD.UserComment]
|
||||
)
|
||||
|
||||
image_description = ExifUtils._decode_exif_text(
|
||||
exif_dict.get("0th", {}).get(piexif.ImageIFD.ImageDescription)
|
||||
)
|
||||
if image_description:
|
||||
if image_description.startswith("Workflow:"):
|
||||
metadata["workflow"] = image_description[len("Workflow:") :]
|
||||
elif not metadata["prompt"]:
|
||||
metadata["prompt"] = image_description
|
||||
|
||||
if not metadata["parameters"] and metadata["comment"]:
|
||||
metadata["parameters"] = metadata["comment"]
|
||||
|
||||
return metadata
|
||||
|
||||
@staticmethod
|
||||
def _build_pnginfo(img: Image.Image, metadata_fields: dict[str, Optional[str]]) -> PngImagePlugin.PngInfo:
|
||||
png_info = PngImagePlugin.PngInfo()
|
||||
existing_info = getattr(img, "info", {}) or {}
|
||||
managed_keys = {"parameters", "prompt", "workflow"}
|
||||
|
||||
for key, value in existing_info.items():
|
||||
if key in {"exif", "dpi", "transparency", "gamma", "aspect"}:
|
||||
continue
|
||||
if key in managed_keys:
|
||||
continue
|
||||
if isinstance(value, str):
|
||||
png_info.add_text(key, value)
|
||||
|
||||
for key in managed_keys:
|
||||
value = metadata_fields.get(key)
|
||||
if value:
|
||||
png_info.add_text(key, value)
|
||||
|
||||
return png_info
|
||||
|
||||
@staticmethod
|
||||
def _build_exif_bytes(
|
||||
metadata_fields: dict[str, Optional[str]], existing_exif: bytes | None = None
|
||||
) -> bytes:
|
||||
try:
|
||||
exif_dict = piexif.load(existing_exif or b"")
|
||||
except Exception:
|
||||
exif_dict = {"0th": {}, "Exif": {}, "GPS": {}, "Interop": {}, "1st": {}}
|
||||
|
||||
exif_dict.setdefault("0th", {})
|
||||
exif_dict.setdefault("Exif", {})
|
||||
|
||||
parameters = metadata_fields.get("parameters")
|
||||
workflow = metadata_fields.get("workflow")
|
||||
prompt = metadata_fields.get("prompt")
|
||||
|
||||
if parameters:
|
||||
exif_dict["Exif"][piexif.ExifIFD.UserComment] = (
|
||||
b"UNICODE\0" + parameters.encode("utf-16be")
|
||||
)
|
||||
else:
|
||||
exif_dict["Exif"].pop(piexif.ExifIFD.UserComment, None)
|
||||
|
||||
if workflow:
|
||||
exif_dict["0th"][piexif.ImageIFD.ImageDescription] = f"Workflow:{workflow}"
|
||||
elif prompt:
|
||||
exif_dict["0th"][piexif.ImageIFD.ImageDescription] = prompt
|
||||
else:
|
||||
exif_dict["0th"].pop(piexif.ImageIFD.ImageDescription, None)
|
||||
|
||||
return piexif.dump(exif_dict)
|
||||
|
||||
@staticmethod
|
||||
def extract_image_metadata(image_path: str) -> Optional[str]:
|
||||
@@ -28,48 +155,12 @@ class ExifUtils:
|
||||
if ext in ['.mp4', '.webm']:
|
||||
return None
|
||||
|
||||
# First try to open the image
|
||||
with Image.open(image_path) as img:
|
||||
# Method 1: Check for parameters in image info
|
||||
if hasattr(img, 'info') and 'parameters' in img.info:
|
||||
return img.info['parameters']
|
||||
|
||||
# Method 2: Check EXIF UserComment field
|
||||
if img.format not in ['JPEG', 'TIFF', 'WEBP']:
|
||||
# For non-JPEG/TIFF/WEBP images, try to get EXIF through PIL
|
||||
exif = img.getexif()
|
||||
if exif and piexif.ExifIFD.UserComment in exif:
|
||||
user_comment = exif[piexif.ExifIFD.UserComment]
|
||||
if isinstance(user_comment, bytes):
|
||||
if user_comment.startswith(b'UNICODE\0'):
|
||||
return user_comment[8:].decode('utf-16be')
|
||||
return user_comment.decode('utf-8', errors='ignore')
|
||||
return user_comment
|
||||
|
||||
# For JPEG/TIFF/WEBP, use piexif
|
||||
try:
|
||||
exif_dict = piexif.load(image_path)
|
||||
|
||||
if piexif.ExifIFD.UserComment in exif_dict.get('Exif', {}):
|
||||
user_comment = exif_dict['Exif'][piexif.ExifIFD.UserComment]
|
||||
if isinstance(user_comment, bytes):
|
||||
if user_comment.startswith(b'UNICODE\0'):
|
||||
user_comment = user_comment[8:].decode('utf-16be')
|
||||
else:
|
||||
user_comment = user_comment.decode('utf-8', errors='ignore')
|
||||
return user_comment
|
||||
except Exception as e:
|
||||
logger.debug(f"Error loading EXIF data: {e}")
|
||||
|
||||
# Method 3: Check PNG metadata for workflow info (for ComfyUI images)
|
||||
if img.format == 'PNG':
|
||||
# Look for workflow or prompt metadata in PNG chunks
|
||||
for key in img.info:
|
||||
if key in ['workflow', 'prompt', 'parameters']:
|
||||
return img.info[key]
|
||||
|
||||
return None
|
||||
|
||||
metadata = ExifUtils._load_structured_metadata(image_path)
|
||||
return (
|
||||
metadata.get("parameters")
|
||||
or metadata.get("prompt")
|
||||
or metadata.get("workflow")
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Error extracting image metadata: {e}", exc_info=True)
|
||||
return None
|
||||
@@ -92,50 +183,26 @@ class ExifUtils:
|
||||
if ext in ['.mp4', '.webm']:
|
||||
return image_path
|
||||
|
||||
# Load the image and check its format
|
||||
metadata_fields = ExifUtils._load_structured_metadata(image_path)
|
||||
metadata_fields["parameters"] = metadata
|
||||
|
||||
with Image.open(image_path) as img:
|
||||
img_format = img.format
|
||||
|
||||
# For PNG, try to update parameters directly
|
||||
if img_format == 'PNG':
|
||||
# Use PngInfo instead of plain dictionary
|
||||
png_info = PngImagePlugin.PngInfo()
|
||||
png_info.add_text("parameters", metadata)
|
||||
img.save(image_path, format='PNG', pnginfo=png_info)
|
||||
|
||||
if img_format == "PNG":
|
||||
png_info = ExifUtils._build_pnginfo(img, metadata_fields)
|
||||
img.save(image_path, format="PNG", pnginfo=png_info)
|
||||
return image_path
|
||||
|
||||
# For WebP format, use PIL's exif parameter directly
|
||||
elif img_format == 'WEBP':
|
||||
exif_dict = {'Exif': {piexif.ExifIFD.UserComment: b'UNICODE\0' + metadata.encode('utf-16be')}}
|
||||
exif_bytes = piexif.dump(exif_dict)
|
||||
|
||||
# Save with the exif data
|
||||
img.save(image_path, format='WEBP', exif=exif_bytes, quality=85)
|
||||
return image_path
|
||||
|
||||
# For other formats, use standard EXIF approach
|
||||
else:
|
||||
try:
|
||||
exif_dict = piexif.load(img.info.get('exif', b''))
|
||||
except:
|
||||
exif_dict = {'0th':{}, 'Exif':{}, 'GPS':{}, 'Interop':{}, '1st':{}}
|
||||
|
||||
# If no Exif dictionary exists, create one
|
||||
if 'Exif' not in exif_dict:
|
||||
exif_dict['Exif'] = {}
|
||||
|
||||
# Update the UserComment field - use UNICODE format
|
||||
unicode_bytes = metadata.encode('utf-16be')
|
||||
metadata_bytes = b'UNICODE\0' + unicode_bytes
|
||||
|
||||
exif_dict['Exif'][piexif.ExifIFD.UserComment] = metadata_bytes
|
||||
|
||||
# Convert EXIF dict back to bytes
|
||||
exif_bytes = piexif.dump(exif_dict)
|
||||
|
||||
# Save the image with updated EXIF data
|
||||
img.save(image_path, exif=exif_bytes)
|
||||
|
||||
|
||||
exif_bytes = ExifUtils._build_exif_bytes(
|
||||
metadata_fields, img.info.get("exif")
|
||||
)
|
||||
save_kwargs = {"exif": exif_bytes}
|
||||
if img_format == "WEBP":
|
||||
save_kwargs["quality"] = 85
|
||||
|
||||
img.save(image_path, format=img_format, **save_kwargs)
|
||||
|
||||
return image_path
|
||||
except Exception as e:
|
||||
logger.error(f"Error updating metadata in {image_path}: {e}")
|
||||
@@ -297,12 +364,12 @@ class ExifUtils:
|
||||
raise ValueError(f"Cannot process corrupt image data: {e}")
|
||||
|
||||
# Extract metadata if needed and valid
|
||||
metadata = None
|
||||
metadata_fields = None
|
||||
if preserve_metadata:
|
||||
try:
|
||||
if isinstance(image_data, str) and os.path.exists(image_data):
|
||||
# For file path, extract directly
|
||||
metadata = ExifUtils.extract_image_metadata(image_data)
|
||||
metadata_fields = ExifUtils._load_structured_metadata(image_data)
|
||||
else:
|
||||
# For binary data, save to temp file first
|
||||
import tempfile
|
||||
@@ -310,7 +377,7 @@ class ExifUtils:
|
||||
temp_path = temp_file.name
|
||||
temp_file.write(image_data)
|
||||
try:
|
||||
metadata = ExifUtils.extract_image_metadata(temp_path)
|
||||
metadata_fields = ExifUtils._load_structured_metadata(temp_path)
|
||||
except Exception as e:
|
||||
logger.warning(f"Failed to extract metadata from temp file: {e}")
|
||||
finally:
|
||||
@@ -363,14 +430,13 @@ class ExifUtils:
|
||||
optimized_data = output.getvalue()
|
||||
|
||||
# Handle metadata preservation if requested and available
|
||||
if preserve_metadata and metadata:
|
||||
if preserve_metadata and metadata_fields:
|
||||
try:
|
||||
if save_format == 'WEBP':
|
||||
# For WebP format, directly save with metadata
|
||||
try:
|
||||
output_with_metadata = BytesIO()
|
||||
exif_dict = {'Exif': {piexif.ExifIFD.UserComment: b'UNICODE\0' + metadata.encode('utf-16be')}}
|
||||
exif_bytes = piexif.dump(exif_dict)
|
||||
exif_bytes = ExifUtils._build_exif_bytes(metadata_fields)
|
||||
resized_img.save(output_with_metadata, format='WEBP', exif=exif_bytes, quality=quality)
|
||||
optimized_data = output_with_metadata.getvalue()
|
||||
except Exception as e:
|
||||
@@ -383,8 +449,9 @@ class ExifUtils:
|
||||
temp_file.write(optimized_data)
|
||||
|
||||
try:
|
||||
# Add metadata
|
||||
ExifUtils.update_image_metadata(temp_path, metadata)
|
||||
ExifUtils.update_image_metadata(
|
||||
temp_path, metadata_fields.get("parameters") or ""
|
||||
)
|
||||
# Read back the file
|
||||
with open(temp_path, 'rb') as f:
|
||||
optimized_data = f.read()
|
||||
|
||||
@@ -2,11 +2,12 @@
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from typing import Mapping, Optional, Sequence, Tuple
|
||||
from typing import Any, Mapping, Optional, Sequence, Tuple
|
||||
|
||||
from .constants import NSFW_LEVELS
|
||||
|
||||
PreviewMedia = Mapping[str, object]
|
||||
VALID_MATURE_BLUR_LEVELS = ("PG13", "R", "X", "XXX")
|
||||
|
||||
|
||||
def _extract_nsfw_level(entry: Mapping[str, object]) -> int:
|
||||
@@ -19,17 +20,36 @@ def _extract_nsfw_level(entry: Mapping[str, object]) -> int:
|
||||
return 0
|
||||
|
||||
|
||||
def resolve_mature_threshold(settings: Mapping[str, Any] | None) -> int:
|
||||
"""Resolve the configured mature blur threshold from settings.
|
||||
|
||||
Allowed values are ``PG13``, ``R``, ``X``, and ``XXX``. Any invalid or
|
||||
missing value falls back to ``R``.
|
||||
"""
|
||||
|
||||
if not isinstance(settings, Mapping):
|
||||
return NSFW_LEVELS.get("R", 4)
|
||||
|
||||
raw_level = settings.get("mature_blur_level", "R")
|
||||
normalized = str(raw_level).strip().upper()
|
||||
if normalized not in VALID_MATURE_BLUR_LEVELS:
|
||||
normalized = "R"
|
||||
return NSFW_LEVELS.get(normalized, NSFW_LEVELS.get("R", 4))
|
||||
|
||||
|
||||
def select_preview_media(
|
||||
images: Sequence[Mapping[str, object]] | None,
|
||||
*,
|
||||
blur_mature_content: bool,
|
||||
mature_threshold: int | None = None,
|
||||
) -> Tuple[Optional[PreviewMedia], int]:
|
||||
"""Select the most appropriate preview media entry.
|
||||
|
||||
When ``blur_mature_content`` is enabled we first try to return the first media
|
||||
item with an ``nsfwLevel`` lower than :pydata:`NSFW_LEVELS["R"]`. If none are
|
||||
available we return the media entry with the lowest NSFW level. When the
|
||||
setting is disabled we simply return the first entry.
|
||||
item with an ``nsfwLevel`` lower than the configured mature threshold
|
||||
(defaults to :pydata:`NSFW_LEVELS["R"]`). If none are available we return
|
||||
the media entry with the lowest NSFW level. When the setting is disabled we
|
||||
simply return the first entry.
|
||||
"""
|
||||
|
||||
if not images:
|
||||
@@ -45,7 +65,9 @@ def select_preview_media(
|
||||
if not blur_mature_content:
|
||||
return selected, selected_level
|
||||
|
||||
safe_threshold = NSFW_LEVELS.get("R", 4)
|
||||
safe_threshold = (
|
||||
mature_threshold if isinstance(mature_threshold, int) else NSFW_LEVELS.get("R", 4)
|
||||
)
|
||||
for candidate in candidates:
|
||||
level = _extract_nsfw_level(candidate)
|
||||
if level < safe_threshold:
|
||||
@@ -60,4 +82,4 @@ def select_preview_media(
|
||||
return selected, selected_level
|
||||
|
||||
|
||||
__all__ = ["select_preview_media"]
|
||||
__all__ = ["resolve_mature_threshold", "select_preview_media", "VALID_MATURE_BLUR_LEVELS"]
|

||||
|
||||
py/utils/session_logging.py (new file, 136 lines)
@@ -0,0 +1,136 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
import os
|
||||
import threading
|
||||
import uuid
|
||||
from collections import deque
|
||||
from dataclasses import dataclass
|
||||
from datetime import datetime, timezone
|
||||
from typing import Any
|
||||
|
||||
|
||||
LOG_FORMAT = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
|
||||
_SESSION_HANDLER_NAME = "lora_manager_standalone_session_memory"
|
||||
_FILE_HANDLER_NAME = "lora_manager_standalone_session_file"
|
||||
_session_state: "StandaloneSessionLogState | None" = None
|
||||
_session_lock = threading.Lock()
|
||||
|
||||
|
||||
@dataclass
|
||||
class StandaloneSessionLogState:
|
||||
started_at: str
|
||||
session_id: str
|
||||
log_file_path: str | None
|
||||
memory_handler: "StandaloneSessionMemoryHandler"
|
||||
|
||||
|
||||
class StandaloneSessionMemoryHandler(logging.Handler):
|
||||
def __init__(self, capacity: int = 4000) -> None:
|
||||
super().__init__()
|
||||
self._entries: deque[str] = deque(maxlen=capacity)
|
||||
self._lock = threading.Lock()
|
||||
|
||||
def emit(self, record: logging.LogRecord) -> None:
|
||||
try:
|
||||
rendered = self.format(record)
|
||||
except Exception:
|
||||
rendered = record.getMessage()
|
||||
|
||||
with self._lock:
|
||||
self._entries.append(rendered)
|
||||
|
||||
def render(self, max_lines: int | None = None) -> str:
|
||||
with self._lock:
|
||||
entries = list(self._entries)
|
||||
|
||||
if max_lines is not None and max_lines > 0:
|
||||
entries = entries[-max_lines:]
|
||||
|
||||
if not entries:
|
||||
return ""
|
||||
|
||||
return "\n".join(entries) + "\n"
|
||||
|
||||
|
||||
def _build_log_file_path(settings_file: str | None, started_at: datetime) -> str | None:
|
||||
if not settings_file:
|
||||
return None
|
||||
|
||||
settings_dir = os.path.dirname(os.path.abspath(settings_file))
|
||||
log_dir = os.path.join(settings_dir, "logs")
|
||||
os.makedirs(log_dir, exist_ok=True)
|
||||
timestamp = started_at.strftime("%Y%m%dT%H%M%SZ")
|
||||
return os.path.join(log_dir, f"standalone-session-{timestamp}.log")
|
||||
|
||||
|
||||
def setup_standalone_session_logging(settings_file: str | None) -> StandaloneSessionLogState:
|
||||
global _session_state
|
||||
|
||||
with _session_lock:
|
||||
if _session_state is not None:
|
||||
return _session_state
|
||||
|
||||
started_dt = datetime.now(timezone.utc)
|
||||
started_at = started_dt.replace(microsecond=0).isoformat()
|
||||
session_id = f"{started_dt.strftime('%Y%m%dT%H%M%SZ')}-{uuid.uuid4().hex[:8]}"
|
||||
formatter = logging.Formatter(LOG_FORMAT)
|
||||
root_logger = logging.getLogger()
|
||||
if root_logger.level > logging.INFO:
|
||||
root_logger.setLevel(logging.INFO)
|
||||
|
||||
memory_handler = StandaloneSessionMemoryHandler()
|
||||
memory_handler.set_name(_SESSION_HANDLER_NAME)
|
||||
memory_handler.setFormatter(formatter)
|
||||
root_logger.addHandler(memory_handler)
|
||||
|
||||
log_file_path = _build_log_file_path(settings_file, started_dt)
|
||||
if log_file_path:
|
||||
file_handler = logging.FileHandler(log_file_path, encoding="utf-8")
|
||||
file_handler.set_name(_FILE_HANDLER_NAME)
|
||||
file_handler.setFormatter(formatter)
|
||||
root_logger.addHandler(file_handler)
|
||||
|
||||
_session_state = StandaloneSessionLogState(
|
||||
started_at=started_at,
|
||||
session_id=session_id,
|
||||
log_file_path=log_file_path,
|
||||
memory_handler=memory_handler,
|
||||
)
|
||||
|
||||
logger = logging.getLogger("lora-manager-standalone")
|
||||
logger.info("LoRA Manager standalone startup time: %s", started_at)
|
||||
logger.info("LoRA Manager standalone session id: %s", session_id)
|
||||
if log_file_path:
|
||||
logger.info("LoRA Manager standalone session log path: %s", log_file_path)
|
||||
|
||||
return _session_state
|
||||
|
||||
|
||||
def get_standalone_session_log_snapshot(max_lines: int = 2000) -> dict[str, Any] | None:
|
||||
state = _session_state
|
||||
if state is None:
|
||||
return None
|
||||
|
||||
return {
|
||||
"started_at": state.started_at,
|
||||
"session_id": state.session_id,
|
||||
"log_file_path": state.log_file_path,
|
||||
"in_memory_text": state.memory_handler.render(max_lines=max_lines),
|
||||
}
|
||||
|
||||
|
||||
def reset_standalone_session_logging_for_tests() -> None:
|
||||
global _session_state
|
||||
|
||||
with _session_lock:
|
||||
root_logger = logging.getLogger()
|
||||
handlers_to_remove = [
|
||||
handler
|
||||
for handler in root_logger.handlers
|
||||
if handler.get_name() in {_SESSION_HANDLER_NAME, _FILE_HANDLER_NAME}
|
||||
]
|
||||
for handler in handlers_to_remove:
|
||||
root_logger.removeHandler(handler)
|
||||
handler.close()
|
||||
_session_state = None
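
A minimal sketch of wiring this up and reading the session back, matching the import the standalone entry point uses further below; the settings path here is a placeholder:

```python
# Hedged sketch; the settings.json path is a placeholder value.
import logging
import os

from py.utils.session_logging import (
    get_standalone_session_log_snapshot,
    setup_standalone_session_logging,
)

settings_file = os.path.expanduser("~/.config/ComfyUI-LoRA-Manager/settings.json")
state = setup_standalone_session_logging(settings_file)
# state.started_at / state.log_file_path mirror what the snapshot reports below.

logging.getLogger("lora-manager-standalone").info("hello from the sketch")

snapshot = get_standalone_session_log_snapshot(max_lines=50)
print(snapshot["session_id"], snapshot["log_file_path"])
print(snapshot["in_memory_text"])
```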
|
||||
@@ -29,6 +29,18 @@ if not standalone_mode:
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
_DEFAULT_CHECKPOINT_EXTENSIONS = {
|
||||
".ckpt",
|
||||
".pt",
|
||||
".pt2",
|
||||
".bin",
|
||||
".pth",
|
||||
".safetensors",
|
||||
".pkl",
|
||||
".sft",
|
||||
".gguf",
|
||||
}
|
||||
|
||||
class UsageStats:
|
||||
"""Track usage statistics for models and save to JSON"""
|
||||
|
||||
@@ -291,6 +303,151 @@ class UsageStats:
|
||||
# Process loras
|
||||
if LORAS in metadata and isinstance(metadata[LORAS], dict):
|
||||
await self._process_loras(metadata[LORAS], today)
|
||||
|
||||
def _increment_usage_counter(self, category: str, stat_key: str, today_date: str) -> None:
|
||||
"""Increment usage counters for a resolved stats key."""
|
||||
if stat_key not in self.stats[category]:
|
||||
self.stats[category][stat_key] = {
|
||||
"total": 0,
|
||||
"history": {}
|
||||
}
|
||||
|
||||
self.stats[category][stat_key]["total"] += 1
|
||||
|
||||
if today_date not in self.stats[category][stat_key]["history"]:
|
||||
self.stats[category][stat_key]["history"][today_date] = 0
|
||||
self.stats[category][stat_key]["history"][today_date] += 1
|
||||
|
||||
def _normalize_model_lookup_name(self, model_name: str) -> str:
|
||||
"""Normalize a model reference to its base filename without extension."""
|
||||
return os.path.splitext(os.path.basename(model_name))[0]
|
||||
|
||||
async def _find_cached_checkpoint_entry(self, checkpoint_scanner, model_name: str):
|
||||
"""Best-effort lookup for a checkpoint cache entry by filename/model name."""
|
||||
get_cached_data = getattr(checkpoint_scanner, "get_cached_data", None)
|
||||
if not callable(get_cached_data):
|
||||
return None
|
||||
|
||||
cache = await get_cached_data()
|
||||
raw_data = getattr(cache, "raw_data", None)
|
||||
if not isinstance(raw_data, list):
|
||||
return None
|
||||
|
||||
normalized_name = self._normalize_model_lookup_name(model_name)
|
||||
for entry in raw_data:
|
||||
if not isinstance(entry, dict):
|
||||
continue
|
||||
|
||||
for candidate_key in ("file_name", "model_name", "file_path"):
|
||||
candidate_value = entry.get(candidate_key)
|
||||
if not candidate_value or not isinstance(candidate_value, str):
|
||||
continue
|
||||
if self._normalize_model_lookup_name(candidate_value) == normalized_name:
|
||||
return entry
|
||||
|
||||
return None
|
||||
|
||||
async def _find_checkpoint_file_on_disk(self, checkpoint_scanner, model_name: str):
|
||||
"""Search checkpoint roots directly for a matching file.
|
||||
|
||||
This is used when usage tracking sees a checkpoint name before the cache has
|
||||
been refreshed. The lookup is intentionally exact: we only match the model
|
||||
basename and supported checkpoint extensions.
|
||||
"""
|
||||
get_model_roots = getattr(checkpoint_scanner, "get_model_roots", None)
|
||||
if not callable(get_model_roots):
|
||||
return None
|
||||
|
||||
roots = [root for root in get_model_roots() if root]
|
||||
if not roots:
|
||||
return None
|
||||
|
||||
supported_extensions = getattr(
|
||||
checkpoint_scanner, "file_extensions", _DEFAULT_CHECKPOINT_EXTENSIONS
|
||||
)
|
||||
if not isinstance(supported_extensions, (set, frozenset, list, tuple)):
|
||||
supported_extensions = _DEFAULT_CHECKPOINT_EXTENSIONS
|
||||
|
||||
normalized_name = self._normalize_model_lookup_name(model_name)
|
||||
matches: list[str] = []
|
||||
|
||||
for root_path in roots:
|
||||
if not os.path.exists(root_path):
|
||||
continue
|
||||
|
||||
for dirpath, _dirnames, filenames in os.walk(root_path):
|
||||
for filename in filenames:
|
||||
extension = os.path.splitext(filename)[1].lower()
|
||||
if extension not in supported_extensions:
|
||||
continue
|
||||
|
||||
if os.path.splitext(filename)[0] != normalized_name:
|
||||
continue
|
||||
|
||||
matches.append(os.path.join(dirpath, filename).replace(os.sep, "/"))
|
||||
|
||||
if len(matches) > 1:
|
||||
logger.warning(
|
||||
"Multiple checkpoint files matched '%s'; skipping usage tracking: %s",
|
||||
normalized_name,
|
||||
", ".join(matches),
|
||||
)
|
||||
return None
|
||||
|
||||
return matches[0] if matches else None
|
||||
|
||||
async def _resolve_checkpoint_hash(self, checkpoint_scanner, model_name: str):
|
||||
"""Resolve a checkpoint hash, calculating pending hashes on demand when needed."""
|
||||
model_filename = self._normalize_model_lookup_name(model_name)
|
||||
model_hash = checkpoint_scanner.get_hash_by_filename(model_filename)
|
||||
if model_hash:
|
||||
return model_hash
|
||||
|
||||
cached_entry = await self._find_cached_checkpoint_entry(checkpoint_scanner, model_name)
|
||||
if cached_entry:
|
||||
cached_hash = cached_entry.get("sha256")
|
||||
if cached_hash:
|
||||
return cached_hash
|
||||
|
||||
hash_status = cached_entry.get("hash_status")
|
||||
if hash_status and hash_status != "pending":
|
||||
logger.warning(
|
||||
"Checkpoint '%s' has hash_status=%s; skipping usage tracking",
|
||||
model_filename,
|
||||
hash_status,
|
||||
)
|
||||
return None
|
||||
|
||||
file_path = cached_entry.get("file_path") if cached_entry else None
|
||||
if not file_path:
|
||||
file_path = await self._find_checkpoint_file_on_disk(
|
||||
checkpoint_scanner, model_name
|
||||
)
|
||||
|
||||
if not file_path:
|
||||
logger.warning(
|
||||
f"No hash found for checkpoint '{model_filename}', skipping usage tracking"
|
||||
)
|
||||
return None
|
||||
|
||||
calculate_hash = getattr(checkpoint_scanner, "calculate_hash_for_model", None)
|
||||
if not callable(calculate_hash):
|
||||
logger.warning("Checkpoint scanner not available for usage tracking")
|
||||
return None
|
||||
|
||||
logger.info(
|
||||
"Calculating hash for checkpoint '%s' from %s",
|
||||
model_filename,
|
||||
file_path,
|
||||
)
|
||||
calculated_hash = await calculate_hash(file_path)
|
||||
if calculated_hash:
|
||||
return calculated_hash
|
||||
|
||||
logger.warning(
|
||||
f"Failed to calculate hash for checkpoint '{model_filename}', skipping usage tracking"
|
||||
)
|
||||
return None
|
||||
|
||||
async def _process_checkpoints(self, models_data, today_date):
|
||||
"""Process checkpoint models from metadata"""
|
||||
@@ -311,27 +468,12 @@ class UsageStats:
|
||||
model_name = model_info.get("name")
|
||||
if not model_name:
|
||||
continue
|
||||
|
||||
# Clean up filename (remove extension if present)
|
||||
model_filename = os.path.splitext(os.path.basename(model_name))[0]
|
||||
|
||||
# Get hash for this checkpoint
|
||||
model_hash = checkpoint_scanner.get_hash_by_filename(model_filename)
|
||||
if model_hash:
|
||||
# Update stats for this checkpoint with date tracking
|
||||
if model_hash not in self.stats["checkpoints"]:
|
||||
self.stats["checkpoints"][model_hash] = {
|
||||
"total": 0,
|
||||
"history": {}
|
||||
}
|
||||
|
||||
# Increment total count
|
||||
self.stats["checkpoints"][model_hash]["total"] += 1
|
||||
|
||||
# Increment today's count
|
||||
if today_date not in self.stats["checkpoints"][model_hash]["history"]:
|
||||
self.stats["checkpoints"][model_hash]["history"][today_date] = 0
|
||||
self.stats["checkpoints"][model_hash]["history"][today_date] += 1
|
||||
|
||||
model_hash = await self._resolve_checkpoint_hash(checkpoint_scanner, model_name)
|
||||
if not model_hash:
|
||||
continue
|
||||
|
||||
self._increment_usage_counter("checkpoints", model_hash, today_date)
|
||||
except Exception as e:
|
||||
logger.error(f"Error processing checkpoint usage: {e}", exc_info=True)
|
||||
|
||||
@@ -360,21 +502,11 @@ class UsageStats:
|
||||
|
||||
# Get hash for this LoRA
|
||||
lora_hash = lora_scanner.get_hash_by_filename(lora_name)
|
||||
if lora_hash:
|
||||
# Update stats for this LoRA with date tracking
|
||||
if lora_hash not in self.stats["loras"]:
|
||||
self.stats["loras"][lora_hash] = {
|
||||
"total": 0,
|
||||
"history": {}
|
||||
}
|
||||
|
||||
# Increment total count
|
||||
self.stats["loras"][lora_hash]["total"] += 1
|
||||
|
||||
# Increment today's count
|
||||
if today_date not in self.stats["loras"][lora_hash]["history"]:
|
||||
self.stats["loras"][lora_hash]["history"][today_date] = 0
|
||||
self.stats["loras"][lora_hash]["history"][today_date] += 1
|
||||
if not lora_hash:
|
||||
logger.warning(f"No hash found for LoRA '{lora_name}', skipping usage tracking")
|
||||
continue
|
||||
|
||||
self._increment_usage_counter("loras", lora_hash, today_date)
|
||||
except Exception as e:
|
||||
logger.error(f"Error processing LoRA usage: {e}", exc_info=True)
|
||||
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
[project]
|
||||
name = "comfyui-lora-manager"
|
||||
description = "Revolutionize your workflow with the ultimate LoRA companion for ComfyUI!"
|
||||
version = "1.0.0"
|
||||
version = "1.0.5"
|
||||
license = {file = "LICENSE"}
|
||||
dependencies = [
|
||||
"aiohttp",
|
||||
@@ -14,7 +14,8 @@ dependencies = [
|
||||
"natsort",
|
||||
"GitPython",
|
||||
"aiosqlite",
|
||||
"platformdirs"
|
||||
"platformdirs",
|
||||
"pyyaml"
|
||||
]
|
||||
|
||||
[project.urls]
|
||||
|
||||
@@ -11,3 +11,4 @@ GitPython
|
||||
aiosqlite
|
||||
beautifulsoup4
|
||||
platformdirs
|
||||
pyyaml
|
||||
|
||||
scripts/migrate_legacy_metadata.py (new file, 354 lines)
@@ -0,0 +1,354 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Migrate metadata from old sidecar JSON format to LoRA Manager's metadata.json format.
|
||||
|
||||
This script automatically discovers model folders from LoRA Manager's settings.json,
|
||||
finds JSON files with the same basename as model files (e.g., `model.json` for
|
||||
`model.safetensors`), and migrates their content to the corresponding `.metadata.json` files.
|
||||
|
||||
Fields migrated:
|
||||
- "activation text" → civitai.trainedWords (array of trigger words)
|
||||
- "preferred weight" → usage_tips.strength (LoRA only, skipped for Checkpoint)
|
||||
- "notes" → notes (user-defined notes)
|
||||
|
||||
Supported model types: LoRA, Checkpoint
|
||||
|
||||
Usage:
|
||||
python scripts/migrate_legacy_metadata.py [--dry-run] [--verbose]
|
||||
|
||||
The script will:
|
||||
1. Read settings.json to find all configured model folders
|
||||
2. Recursively scan for model files (.safetensors, .ckpt, .pt, .pth, .bin)
|
||||
3. Find corresponding legacy metadata JSON files
|
||||
4. Migrate data to .metadata.json files
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import argparse
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import re
|
||||
import sys
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
format="%(asctime)s - %(levelname)s - %(message)s",
|
||||
)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
APP_NAME = "ComfyUI-LoRA-Manager"
|
||||
MODEL_EXTENSIONS = {".safetensors", ".ckpt", ".pt", ".pth", ".bin"}
|
||||
SECRET_PATTERN = re.compile(r"(key|token|secret|password|auth|credential)", re.IGNORECASE)
|
||||
|
||||
|
||||
def resolve_settings_path() -> Path:
|
||||
repo_root = Path(__file__).parent.parent.resolve()
|
||||
portable = repo_root / "settings.json"
|
||||
if portable.exists():
|
||||
payload = load_json(portable)
|
||||
if isinstance(payload, dict) and payload.get("use_portable_settings") is True:
|
||||
return portable
|
||||
|
||||
config_home = os.environ.get("XDG_CONFIG_HOME")
|
||||
if config_home:
|
||||
return Path(config_home).expanduser() / APP_NAME / "settings.json"
|
||||
return Path.home() / ".config" / APP_NAME / "settings.json"
|
||||
|
||||
|
||||
def load_json(path: Path) -> dict[str, Any]:
|
||||
try:
|
||||
with path.open("r", encoding="utf-8") as f:
|
||||
return json.load(f)
|
||||
except FileNotFoundError:
|
||||
return {}
|
||||
except json.JSONDecodeError as exc:
|
||||
logger.error(f"Invalid JSON in {path}: {exc}")
|
||||
return {}
|
||||
except OSError as exc:
|
||||
logger.error(f"Cannot read {path}: {exc}")
|
||||
return {}
|
||||
|
||||
|
||||
def expand_path(value: str) -> str:
|
||||
return str(Path(value).expanduser().resolve(strict=False))
|
||||
|
||||
|
||||
def normalize_path_list(value: Any) -> list[str]:
|
||||
if isinstance(value, str):
|
||||
return [expand_path(value)] if value else []
|
||||
if isinstance(value, list):
|
||||
return [expand_path(item) for item in value if isinstance(item, str) and item]
|
||||
return []
|
||||
|
||||
|
||||
def dedupe(values: list[str]) -> list[str]:
|
||||
seen: set[str] = set()
|
||||
result: list[str] = []
|
||||
for value in values:
|
||||
if value not in seen:
|
||||
result.append(value)
|
||||
seen.add(value)
|
||||
return result
|
||||
|
||||
|
||||
def get_model_roots(settings: dict[str, Any]) -> dict[str, list[str]]:
|
||||
roots: dict[str, list[str]] = {}
|
||||
active_library = settings.get("active_library") or "default"
|
||||
sources = [settings]
|
||||
library = settings.get("libraries", {}).get(active_library)
|
||||
if isinstance(library, dict):
|
||||
sources.insert(0, library)
|
||||
for source in sources:
|
||||
folder_paths = source.get("folder_paths")
|
||||
if isinstance(folder_paths, dict):
|
||||
for key, value in folder_paths.items():
|
||||
roots.setdefault(key, []).extend(normalize_path_list(value))
|
||||
for default_key, folder_key in (
|
||||
("default_lora_root", "loras"),
|
||||
("default_checkpoint_root", "checkpoints"),
|
||||
("default_embedding_root", "embeddings"),
|
||||
("default_unet_root", "unet"),
|
||||
):
|
||||
value = settings.get(default_key)
|
||||
if isinstance(value, str) and value:
|
||||
roots.setdefault(folder_key, []).append(expand_path(value))
|
||||
return {key: dedupe(values) for key, values in roots.items()}
|
||||
|
||||
|
||||
def find_model_files(directory: Path) -> list[Path]:
|
||||
model_files = []
|
||||
for ext in MODEL_EXTENSIONS:
|
||||
model_files.extend(directory.rglob(f"*{ext}"))
|
||||
return model_files
|
||||
|
||||
|
||||
def find_legacy_metadata(model_path: Path) -> Path | None:
|
||||
base_name = model_path.stem
|
||||
legacy_path = model_path.with_name(f"{base_name}.json")
|
||||
if legacy_path.exists() and legacy_path.is_file():
|
||||
return legacy_path
|
||||
return None
|
||||
|
||||
|
||||
def load_legacy_metadata(legacy_path: Path) -> dict[str, Any] | None:
|
||||
try:
|
||||
with open(legacy_path, "r", encoding="utf-8") as f:
|
||||
return json.load(f)
|
||||
except json.JSONDecodeError as e:
|
||||
logger.error(f"Invalid JSON in legacy file {legacy_path}: {e}")
|
||||
return None
|
||||
except Exception as e:
|
||||
logger.error(f"Error reading legacy file {legacy_path}: {e}")
|
||||
return None
|
||||
|
||||
|
||||
def load_metadata(metadata_path: Path) -> dict[str, Any]:
|
||||
if not metadata_path.exists():
|
||||
return {}
|
||||
try:
|
||||
with open(metadata_path, "r", encoding="utf-8") as f:
|
||||
return json.load(f)
|
||||
except json.JSONDecodeError as e:
|
||||
logger.warning(f"Invalid JSON in metadata file {metadata_path}: {e}. Starting fresh.")
|
||||
return {}
|
||||
except Exception as e:
|
||||
logger.error(f"Error reading metadata file {metadata_path}: {e}")
|
||||
return {}
|
||||
|
||||
|
||||
def save_metadata(metadata_path: Path, data: dict[str, Any], dry_run: bool = False) -> bool:
|
||||
if dry_run:
|
||||
logger.info(f"[DRY RUN] Would save metadata to: {metadata_path}")
|
||||
return True
|
||||
temp_path = metadata_path.with_suffix(".tmp")
|
||||
try:
|
||||
with open(temp_path, "w", encoding="utf-8") as f:
|
||||
json.dump(data, f, indent=2, ensure_ascii=False)
|
||||
os.replace(temp_path, metadata_path)
|
||||
return True
|
||||
except Exception as e:
|
||||
logger.error(f"Error saving metadata to {metadata_path}: {e}")
|
||||
if temp_path.exists():
|
||||
try:
|
||||
temp_path.unlink()
|
||||
except:
|
||||
pass
|
||||
return False
|
||||
|
||||
|
||||
def migrate_metadata(
|
||||
legacy_data: dict[str, Any],
|
||||
existing_metadata: dict[str, Any],
|
||||
model_type: str
|
||||
) -> dict[str, Any] | None:
|
||||
metadata = existing_metadata.copy()
|
||||
changes_made = False
|
||||
if "civitai" not in metadata:
|
||||
metadata["civitai"] = {}
|
||||
activation_text = legacy_data.get("activation text")
|
||||
if activation_text and isinstance(activation_text, str):
|
||||
trigger_words = [
|
||||
word.strip()
|
||||
for word in activation_text.replace("\n", ",").split(",")
|
||||
if word.strip()
|
||||
]
|
||||
if trigger_words:
|
||||
existing_trained = metadata["civitai"].get("trainedWords", [])
|
||||
if not isinstance(existing_trained, list):
|
||||
existing_trained = []
|
||||
merged = list(dict.fromkeys(existing_trained + trigger_words))
|
||||
if merged != existing_trained:
|
||||
metadata["civitai"]["trainedWords"] = merged
|
||||
changes_made = True
|
||||
logger.debug(f" Migrated activation text: {trigger_words}")
|
||||
if model_type == "lora":
|
||||
preferred_weight = legacy_data.get("preferred weight")
|
||||
if preferred_weight is not None:
|
||||
try:
|
||||
weight_value = float(preferred_weight)
|
||||
usage_tips_str = metadata.get("usage_tips", "{}")
|
||||
if isinstance(usage_tips_str, str):
|
||||
try:
|
||||
usage_tips = json.loads(usage_tips_str)
|
||||
except json.JSONDecodeError:
|
||||
usage_tips = {}
|
||||
elif isinstance(usage_tips_str, dict):
|
||||
usage_tips = usage_tips_str
|
||||
else:
|
||||
usage_tips = {}
|
||||
if "strength" not in usage_tips:
|
||||
usage_tips["strength"] = weight_value
|
||||
metadata["usage_tips"] = json.dumps(usage_tips, ensure_ascii=False)
|
||||
changes_made = True
|
||||
logger.debug(f" Migrated preferred weight: {weight_value}")
|
||||
except (ValueError, TypeError) as e:
|
||||
logger.warning(f" Could not parse preferred weight '{preferred_weight}': {e}")
|
||||
else:
|
||||
if legacy_data.get("preferred weight") is not None:
|
||||
logger.debug(" Skipping 'preferred weight' for non-LoRA model")
|
||||
notes = legacy_data.get("notes")
|
||||
if notes and isinstance(notes, str) and notes.strip():
|
||||
existing_notes = metadata.get("notes", "")
|
||||
if not existing_notes:
|
||||
metadata["notes"] = notes.strip()
|
||||
changes_made = True
|
||||
logger.debug(" Migrated notes")
|
||||
elif notes.strip() not in existing_notes:
|
||||
metadata["notes"] = f"{existing_notes}\n\n{notes.strip()}".strip()
|
||||
changes_made = True
|
||||
logger.debug(" Appended notes")
|
||||
return metadata if changes_made else None
|
||||
|
||||
|
||||
def process_model(model_path: Path, model_type: str, dry_run: bool = False) -> bool:
|
||||
legacy_path = find_legacy_metadata(model_path)
|
||||
if not legacy_path:
|
||||
return True
|
||||
logger.info(f"Processing: {model_path.name} ({model_type})")
|
||||
logger.info(f" Found legacy metadata: {legacy_path.name}")
|
||||
legacy_data = load_legacy_metadata(legacy_path)
|
||||
if legacy_data is None:
|
||||
return False
|
||||
metadata_path = model_path.with_suffix(".metadata.json")
|
||||
existing_metadata = load_metadata(metadata_path)
|
||||
migrated = migrate_metadata(legacy_data, existing_metadata, model_type)
|
||||
if migrated is None:
|
||||
logger.info(" No changes needed (fields already exist or no migratable data)")
|
||||
return True
|
||||
if save_metadata(metadata_path, migrated, dry_run):
|
||||
logger.info(f" ✓ Successfully migrated metadata to: {metadata_path.name}")
|
||||
return True
|
||||
else:
|
||||
logger.error(" ✗ Failed to save metadata")
|
||||
return False
|
||||
|
||||
|
||||
def main() -> int:
|
||||
parser = argparse.ArgumentParser(
|
||||
description="Migrate legacy metadata JSON files to LoRA Manager's metadata.json format.",
|
||||
formatter_class=argparse.RawDescriptionHelpFormatter,
|
||||
epilog="""
|
||||
Examples:
|
||||
python scripts/migrate_legacy_metadata.py
|
||||
python scripts/migrate_legacy_metadata.py --dry-run
|
||||
python scripts/migrate_legacy_metadata.py --verbose
|
||||
"""
|
||||
)
|
||||
parser.add_argument(
|
||||
"--dry-run",
|
||||
action="store_true",
|
||||
help="Preview changes without modifying any files"
|
||||
)
|
||||
parser.add_argument(
|
||||
"-v", "--verbose",
|
||||
action="store_true",
|
||||
help="Enable verbose output"
|
||||
)
|
||||
args = parser.parse_args()
|
||||
if args.verbose:
|
||||
logging.getLogger().setLevel(logging.DEBUG)
|
||||
settings_path = resolve_settings_path()
|
||||
logger.info(f"Using settings: {settings_path}")
|
||||
settings = load_json(settings_path)
|
||||
if not settings:
|
||||
logger.error("Could not load settings.json. Please ensure LoRA Manager is configured.")
|
||||
return 1
|
||||
roots = get_model_roots(settings)
|
||||
if not roots:
|
||||
logger.error("No model folders configured in settings.json.")
|
||||
return 1
|
||||
lora_roots = roots.get("loras", [])
|
||||
checkpoint_roots = roots.get("checkpoints", []) + roots.get("unet", [])
|
||||
all_roots = []
|
||||
for root_list in [lora_roots, checkpoint_roots]:
|
||||
for root in root_list:
|
||||
path = Path(root)
|
||||
if path.exists() and path.is_dir():
|
||||
all_roots.append((path, "lora" if root in lora_roots else "checkpoint"))
|
||||
if not all_roots:
|
||||
logger.error("No valid model folders found.")
|
||||
return 1
|
||||
logger.info(f"Found {len(lora_roots)} LoRA root(s), {len(checkpoint_roots)} Checkpoint root(s)")
|
||||
processed = 0
|
||||
migrated = 0
|
||||
errors = 0
|
||||
skipped = 0
|
||||
lora_count = 0
|
||||
checkpoint_count = 0
|
||||
for root_path, model_type in all_roots:
|
||||
logger.info(f"Scanning: {root_path} ({model_type})")
|
||||
model_files = find_model_files(root_path)
|
||||
logger.debug(f" Found {len(model_files)} model files")
|
||||
for model_path in model_files:
|
||||
legacy_path = find_legacy_metadata(model_path)
|
||||
if not legacy_path:
|
||||
skipped += 1
|
||||
continue
|
||||
processed += 1
|
||||
if process_model(model_path, model_type, dry_run=args.dry_run):
|
||||
migrated += 1
|
||||
if model_type == "lora":
|
||||
lora_count += 1
|
||||
else:
|
||||
checkpoint_count += 1
|
||||
else:
|
||||
errors += 1
|
||||
logger.info("\n" + "=" * 50)
|
||||
logger.info("Migration Summary:")
|
||||
logger.info(f" Models with legacy metadata: {processed}")
|
||||
logger.info(f" Successfully migrated: {migrated}")
|
||||
logger.info(f" - LoRA models: {lora_count}")
|
||||
logger.info(f" - Checkpoint models: {checkpoint_count}")
|
||||
logger.info(f" Errors: {errors}")
|
||||
logger.info(f" Skipped (no legacy file): {skipped}")
|
||||
if args.dry_run:
|
||||
logger.info("\n [DRY RUN MODE - No files were modified]")
|
||||
return 0 if errors == 0 else 1
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
sys.exit(main())
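
A hedged before/after example of what the migration does for one LoRA, using invented filenames and values; the legacy sidecar sits next to the model file and the merged result lands in the `.metadata.json` sidecar:

```python
# Illustrative fixtures only (paths and values are invented).
legacy = {                      # my_lora.json, the old sidecar
    "activation text": "trigger_a, trigger_b",
    "preferred weight": 0.8,
    "notes": "Works best below CFG 7",
}

migrated = {                    # my_lora.metadata.json after the script runs
    "civitai": {"trainedWords": ["trigger_a", "trigger_b"]},
    "usage_tips": "{\"strength\": 0.8}",   # stored as a JSON string, LoRA only
    "notes": "Works best below CFG 7",
}
```

Running `python scripts/migrate_legacy_metadata.py --dry-run` first, as in the epilog above, previews these changes without touching any files.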
|
||||
@@ -15,5 +15,8 @@
|
||||
"C:/path/to/another/embeddings_folder"
|
||||
]
|
||||
},
|
||||
"example_images_open_mode": "system",
|
||||
"example_images_local_root": "",
|
||||
"example_images_open_uri_template": "",
|
||||
"auto_organize_exclusions": []
|
||||
}
|
||||
|
||||
@@ -113,6 +113,8 @@ import asyncio
|
||||
import logging
|
||||
from aiohttp import web
|
||||
|
||||
from py.utils.session_logging import setup_standalone_session_logging
|
||||
|
||||
# Increase allowable header size to align with in-ComfyUI configuration.
|
||||
HEADER_SIZE_LIMIT = 16384
|
||||
|
||||
@@ -125,6 +127,8 @@ logger = logging.getLogger("lora-manager-standalone")
|
||||
# Configure aiohttp access logger to be less verbose
|
||||
logging.getLogger("aiohttp.access").setLevel(logging.WARNING)
|
||||
|
||||
setup_standalone_session_logging(ensure_settings_file(logger))
|
||||
|
||||
|
||||
# Add specific suppression for connection reset errors
|
||||
class ConnectionResetFilter(logging.Filter):
|
||||
|
||||
@@ -243,3 +243,58 @@
|
||||
-ms-user-select: none;
|
||||
user-select: none;
|
||||
}
|
||||
|
||||
.excluded-view-banner {
|
||||
margin-bottom: var(--space-2);
|
||||
padding: 12px 16px;
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: var(--border-radius-sm);
|
||||
background: linear-gradient(
|
||||
135deg,
|
||||
oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.08),
|
||||
var(--card-bg)
|
||||
);
|
||||
}
|
||||
|
||||
.excluded-view-banner__content {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: space-between;
|
||||
gap: 12px;
|
||||
}
|
||||
|
||||
.excluded-view-banner__title {
|
||||
display: inline-flex;
|
||||
align-items: center;
|
||||
gap: 10px;
|
||||
font-weight: 600;
|
||||
color: var(--text-color);
|
||||
}
|
||||
|
||||
.excluded-view-banner__back {
|
||||
display: inline-flex;
|
||||
align-items: center;
|
||||
gap: 8px;
|
||||
border: 1px solid var(--border-color);
|
||||
background: var(--card-bg);
|
||||
color: var(--text-color);
|
||||
border-radius: var(--border-radius-xs);
|
||||
padding: 8px 12px;
|
||||
cursor: pointer;
|
||||
}
|
||||
|
||||
.excluded-view-banner__back:hover {
|
||||
border-color: var(--lora-accent);
|
||||
color: var(--lora-accent);
|
||||
}
|
||||
|
||||
@media (max-width: 768px) {
|
||||
.excluded-view-banner__content {
|
||||
flex-direction: column;
|
||||
align-items: stretch;
|
||||
}
|
||||
|
||||
.excluded-view-banner__back {
|
||||
justify-content: center;
|
||||
}
|
||||
}
|
||||
|
||||
@@ -87,7 +87,7 @@
|
||||
|
||||
.checkbox-label input[type="checkbox"]:checked + .checkmark::after {
|
||||
content: '\f00c';
|
||||
font-family: 'Font Awesome 6 Free';
|
||||
font-family: 'Font Awesome 6 Free', sans-serif;
|
||||
font-weight: 900;
|
||||
color: var(--lora-text);
|
||||
font-size: 12px;
|
||||
|
||||
@@ -22,6 +22,7 @@
|
||||
transition: transform 160ms ease-out;
|
||||
aspect-ratio: 896/1152; /* Preserve aspect ratio */
|
||||
max-width: 260px; /* Base size */
|
||||
min-width: 200px; /* Prevent cards from becoming too narrow */
|
||||
width: 100%;
|
||||
margin: 0 auto;
|
||||
cursor: pointer;
|
||||
@@ -328,7 +329,6 @@
|
||||
}
|
||||
|
||||
.card-actions i {
|
||||
margin-left: var(--space-1);
|
||||
cursor: pointer;
|
||||
color: white;
|
||||
transition: opacity 0.2s, transform 0.15s ease;
|
||||
@@ -370,7 +370,16 @@
|
||||
text-shadow: 0 0 5px rgba(255, 193, 7, 0.5);
|
||||
}
|
||||
|
||||
/* Responsive design */
|
||||
@media (max-width: 1200px) {
|
||||
.card-grid {
|
||||
grid-template-columns: repeat(auto-fill, minmax(220px, 1fr));
|
||||
}
|
||||
|
||||
.model-card {
|
||||
max-width: 240px;
|
||||
min-width: 180px;
|
||||
}
|
||||
}
|
||||
@media (max-width: 768px) {
|
||||
.card-grid {
|
||||
grid-template-columns: minmax(260px, 1fr); /* Adjusted minimum size for mobile */
|
||||
@@ -378,6 +387,7 @@
|
||||
|
||||
.model-card {
|
||||
max-width: 100%; /* Allow cards to fill available space on mobile */
|
||||
min-width: 200px;
|
||||
}
|
||||
}
|
||||
|
||||
@@ -507,6 +517,11 @@
|
||||
font-size: 0.75em;
|
||||
}
|
||||
|
||||
/* Hide civitai version name when setting is disabled */
|
||||
body.hide-card-version .civitai-version {
|
||||
display: none;
|
||||
}
|
||||
|
||||
/* Prevent text selection on cards and interactive elements */
|
||||
.model-card,
|
||||
.model-card *,
|
||||
@@ -558,8 +573,13 @@
|
||||
position: absolute;
|
||||
box-sizing: border-box;
|
||||
transition: transform 160ms ease-out;
|
||||
margin: 0; /* Remove margins, positioning is handled by VirtualScroller */
|
||||
width: 100%; /* Allow width to be set by the VirtualScroller */
|
||||
margin: 0;
|
||||
width: 100%;
|
||||
}
|
||||
|
||||
/* Allow cards to grow beyond 260px in virtual scroll mode */
|
||||
.virtual-scroll-item.model-card {
|
||||
max-width: none;
|
||||
}
|
||||
|
||||
.virtual-scroll-item:hover {
|
||||
@@ -571,11 +591,11 @@
|
||||
.card-grid.virtual-scroll {
|
||||
display: block;
|
||||
position: relative;
|
||||
margin: 0 auto;
|
||||
margin: 0; /* Remove auto margins - positioning handled by VirtualScroller leftOffset */
|
||||
padding: 4px 0; /* Add top/bottom padding equivalent to card padding */
|
||||
height: auto;
|
||||
width: 100%;
|
||||
max-width: 1400px; /* Keep the max-width from original grid */
|
||||
max-width: none; /* Remove max-width constraint - handled by VirtualScroller */
|
||||
box-sizing: border-box; /* Include padding in width calculation */
|
||||
overflow-x: hidden; /* Prevent horizontal overflow */
|
||||
}
|
||||
@@ -680,3 +700,22 @@
|
||||
margin-left: 0;
|
||||
line-height: 1;
|
||||
}
|
||||
|
||||
.excluded-model {
|
||||
border-style: dashed;
|
||||
}
|
||||
|
||||
.model-excluded-badge {
|
||||
width: 16px;
|
||||
height: 16px;
|
||||
padding: 0;
|
||||
border-radius: 3px;
|
||||
background: color-mix(in oklab, var(--warning-color, #d97706) 85%, white 15%);
|
||||
color: white;
|
||||
font-size: 0.65rem;
|
||||
display: inline-flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
flex-shrink: 0;
|
||||
opacity: 0.9;
|
||||
}
|
||||
|
||||
@@ -19,6 +19,23 @@
|
||||
align-items: center;
|
||||
justify-content: space-between;
|
||||
height: 100%;
|
||||
gap: 1rem;
|
||||
}
|
||||
|
||||
/* Left section: Logo + Navigation */
|
||||
.header-left {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 1rem;
|
||||
flex-shrink: 0;
|
||||
}
|
||||
|
||||
/* Right section: Controls */
|
||||
.header-right {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 1rem;
|
||||
flex-shrink: 0;
|
||||
}
|
||||
|
||||
/* Responsive header container for larger screens */
|
||||
@@ -65,7 +82,6 @@
|
||||
display: flex;
|
||||
gap: 0.5rem;
|
||||
flex-shrink: 0;
|
||||
margin-right: 1rem;
|
||||
}
|
||||
|
||||
.nav-item {
|
||||
@@ -77,6 +93,7 @@
|
||||
align-items: center;
|
||||
gap: 0.5rem;
|
||||
font-size: 0.9rem;
|
||||
white-space: nowrap;
|
||||
}
|
||||
|
||||
.nav-item:hover,
|
||||
@@ -97,14 +114,99 @@
|
||||
color: white;
|
||||
}
|
||||
|
||||
/* Header search */
|
||||
/* Header search - Centered with VS Code command palette style */
|
||||
.header-search {
|
||||
flex: 1;
|
||||
max-width: 400px;
|
||||
margin: 0 1rem;
|
||||
display: flex;
|
||||
justify-content: center;
|
||||
max-width: 600px;
|
||||
margin: 0 auto;
|
||||
transition: opacity 0.2s ease;
|
||||
}
|
||||
|
||||
/* VS Code command palette style search container */
|
||||
.header-search .search-container {
|
||||
width: 100%;
|
||||
max-width: 600px;
|
||||
position: relative;
|
||||
display: flex;
|
||||
align-items: center;
|
||||
background: var(--input-bg, var(--card-bg));
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: var(--border-radius-sm, 6px);
|
||||
transition: all 0.2s ease;
|
||||
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.08);
|
||||
overflow: hidden;
|
||||
}
|
||||
|
||||
.header-search .search-container:focus-within {
|
||||
border-color: var(--lora-accent);
|
||||
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.08), 0 0 0 1px var(--lora-accent);
|
||||
}
|
||||
|
||||
.header-search input {
|
||||
flex: 1;
|
||||
width: 100%;
|
||||
padding: 0.5rem 0.75rem;
|
||||
padding-left: 2.25rem !important;
|
||||
padding-right: 5rem !important;
|
||||
border: none;
|
||||
background: transparent;
|
||||
color: var(--text-color);
|
||||
font-size: 0.95rem;
|
||||
outline: none;
|
||||
}
|
||||
|
||||
.header-search input::placeholder {
|
||||
color: var(--text-muted);
|
||||
}
|
||||
|
||||
.header-search .search-icon {
|
||||
position: absolute;
|
||||
left: 0.75rem;
|
||||
color: var(--text-muted);
|
||||
font-size: 0.9rem;
|
||||
pointer-events: none;
|
||||
}
|
||||
|
||||
.header-search .search-options-toggle,
|
||||
.header-search .search-filter-toggle {
|
||||
position: absolute;
|
||||
right: 0.5rem;
|
||||
width: 28px;
|
||||
height: 28px;
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
background: transparent;
|
||||
border: none;
|
||||
color: var(--text-muted);
|
||||
cursor: pointer;
|
||||
border-radius: var(--border-radius-xs, 4px);
|
||||
transition: all 0.2s ease;
|
||||
}
|
||||
|
||||
.header-search .search-options-toggle {
|
||||
right: 2.25rem;
|
||||
}
|
||||
|
||||
.header-search .search-options-toggle:hover,
|
||||
.header-search .search-filter-toggle:hover {
|
||||
background: var(--lora-surface-hover, oklch(95% 0.02 256));
|
||||
color: var(--lora-accent);
|
||||
}
|
||||
|
||||
.header-search .filter-badge {
|
||||
position: absolute;
|
||||
top: 2px;
|
||||
right: 2px;
|
||||
width: 8px;
|
||||
height: 8px;
|
||||
background: var(--lora-accent);
|
||||
border-radius: 50%;
|
||||
font-size: 0;
|
||||
}
|
||||
|
||||
/* Disabled state for header search */
|
||||
.header-search.disabled {
|
||||
opacity: 0.5;
|
||||
@@ -248,44 +350,207 @@
|
||||
opacity: 1;
|
||||
}
|
||||
|
||||
/* Mobile adjustments */
|
||||
@media (max-width: 768px) {
|
||||
.app-title {
|
||||
display: none;
|
||||
/* Hide text title on mobile */
|
||||
/* Hamburger menu button - hidden by default */
|
||||
.hamburger-menu-btn {
|
||||
display: none;
|
||||
width: 32px;
|
||||
height: 32px;
|
||||
border-radius: 50%;
|
||||
background: var(--card-bg);
|
||||
border: 1px solid var(--border-color);
|
||||
color: var(--text-color);
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
cursor: pointer;
|
||||
transition: all 0.2s ease;
|
||||
flex-shrink: 0;
|
||||
}
|
||||
|
||||
.hamburger-menu-btn:hover {
|
||||
background: var(--lora-accent);
|
||||
color: white;
|
||||
}
|
||||
|
||||
/* Hamburger dropdown menu */
|
||||
.hamburger-dropdown {
|
||||
display: none;
|
||||
position: absolute;
|
||||
top: 100%;
|
||||
right: 0;
|
||||
margin-top: 8px;
|
||||
background: var(--card-bg);
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: var(--border-radius-sm, 6px);
|
||||
box-shadow: 0 4px 16px rgba(0, 0, 0, 0.15);
|
||||
padding: 0.5rem;
|
||||
min-width: 160px;
|
||||
z-index: var(--z-dropdown, 200);
|
||||
}
|
||||
|
||||
.hamburger-dropdown.active {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
gap: 0.25rem;
|
||||
}
|
||||
|
||||
.hamburger-dropdown .dropdown-item {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 0.75rem;
|
||||
padding: 0.5rem 0.75rem;
|
||||
border-radius: var(--border-radius-xs, 4px);
|
||||
color: var(--text-color);
|
||||
cursor: pointer;
|
||||
transition: all 0.2s ease;
|
||||
font-size: 0.9rem;
|
||||
white-space: nowrap;
|
||||
}
|
||||
|
||||
.hamburger-dropdown .dropdown-item:hover {
|
||||
background: var(--lora-surface-hover, oklch(95% 0.02 256));
|
||||
color: var(--lora-accent);
|
||||
}
|
||||
|
||||
.hamburger-dropdown .dropdown-item i {
|
||||
width: 20px;
|
||||
text-align: center;
|
||||
}
|
||||
|
||||
.hamburger-dropdown .dropdown-divider {
|
||||
height: 1px;
|
||||
background: var(--border-color);
|
||||
margin: 0.25rem 0;
|
||||
}
|
||||
|
||||
/* Responsive: Early optimization at 1200px - reduce gaps and padding */
|
||||
@media (max-width: 1200px) {
|
||||
.header-container {
|
||||
gap: 0.75rem;
|
||||
padding: 0 12px;
|
||||
}
|
||||
|
||||
.main-nav {
|
||||
gap: 0.25rem;
|
||||
}
|
||||
|
||||
.nav-item {
|
||||
padding: 0.25rem 0.5rem;
|
||||
font-size: 0.85rem;
|
||||
}
|
||||
|
||||
.header-controls {
|
||||
gap: 4px;
|
||||
gap: 6px;
|
||||
}
|
||||
|
||||
.header-controls>div {
|
||||
width: 28px;
|
||||
height: 28px;
|
||||
.header-controls > div {
|
||||
width: 30px;
|
||||
height: 30px;
|
||||
}
|
||||
}
|
||||
|
||||
/* Responsive: Hide nav icons at 1100px to save space */
|
||||
@media (max-width: 1100px) {
|
||||
.nav-item {
|
||||
gap: 0;
|
||||
padding: 0.25rem 0.4rem;
|
||||
}
|
||||
|
||||
.nav-item i {
|
||||
display: none;
|
||||
}
|
||||
|
||||
.header-search {
|
||||
max-width: 450px;
|
||||
}
|
||||
}
|
||||
|
||||
@media (max-width: 950px) {
|
||||
.app-title {
|
||||
display: none !important;
|
||||
}
|
||||
|
||||
.header-container {
|
||||
padding: 0 10px;
|
||||
gap: 0.5rem;
|
||||
}
|
||||
|
||||
.header-controls {
|
||||
display: none !important;
|
||||
}
|
||||
|
||||
.hamburger-menu-btn {
|
||||
display: flex !important;
|
||||
}
|
||||
|
||||
.hamburger-dropdown {
|
||||
display: none;
|
||||
}
|
||||
|
||||
.hamburger-dropdown.active {
|
||||
display: flex;
|
||||
}
|
||||
|
||||
.header-search {
|
||||
max-width: none;
|
||||
margin: 0 0.5rem;
|
||||
margin: 0;
|
||||
flex: 1;
|
||||
min-width: 200px;
|
||||
}
|
||||
|
||||
.main-nav {
|
||||
margin-right: 0.5rem;
|
||||
gap: 0.25rem;
|
||||
margin-right: 0;
|
||||
}
|
||||
|
||||
.nav-item {
|
||||
padding: 0.25rem 0.35rem;
|
||||
font-size: 0.8rem;
|
||||
}
|
||||
}
|
||||
|
||||
/* For very small screens */
|
||||
/* Responsive: Compact mode at 768px */
|
||||
@media (max-width: 768px) {
|
||||
.header-search input {
|
||||
padding: 0.4rem 0.6rem;
|
||||
padding-left: 2rem !important;
|
||||
padding-right: 4.5rem !important;
|
||||
font-size: 0.9rem;
|
||||
}
|
||||
|
||||
.header-search .search-container {
|
||||
border-radius: var(--border-radius-xs, 4px);
|
||||
}
|
||||
}
|
||||
|
||||
/* For very small screens - switch nav to icons only */
|
||||
@media (max-width: 600px) {
|
||||
.header-container {
|
||||
padding: 0 8px;
|
||||
gap: 0.4rem;
|
||||
}
|
||||
|
||||
.main-nav {
|
||||
display: none;
|
||||
/* Hide navigation on very small screens */
|
||||
display: flex;
|
||||
gap: 0.15rem;
|
||||
margin-right: 0;
|
||||
}
|
||||
|
||||
.header-search {
|
||||
flex: 1;
|
||||
.nav-item {
|
||||
padding: 0.25rem;
|
||||
font-size: 0.75rem;
|
||||
}
|
||||
}
|
||||
|
||||
.nav-item span {
|
||||
display: none;
|
||||
}
|
||||
|
||||
.nav-item i {
|
||||
display: block;
|
||||
font-size: 1rem;
|
||||
}
|
||||
}
|
||||
|
||||
/* Position relative for hamburger menu positioning */
|
||||
.header-right {
|
||||
position: relative;
|
||||
}
|
||||
|
||||
@@ -140,9 +140,11 @@
|
||||
|
||||
/* Add specific styles for notes content */
|
||||
.info-item.notes .editable-field [contenteditable] {
|
||||
height: 60px; /* Keep initial modal layout stable regardless of note length */
|
||||
min-height: 60px; /* Increase height for multiple lines */
|
||||
max-height: 150px; /* Limit maximum height */
|
||||
overflow-y: auto; /* Add scrolling for long content */
|
||||
max-height: 420px; /* Limit maximum height */
|
||||
overflow: auto; /* Enable scrolling and resize handle for long content */
|
||||
resize: vertical; /* Allow manual vertical resizing */
|
||||
white-space: pre-wrap; /* Preserve line breaks */
|
||||
line-height: 1.5; /* Improve readability */
|
||||
padding: 8px 12px; /* Slightly increase padding */
|
||||
@@ -835,7 +837,8 @@
|
||||
}
|
||||
|
||||
[data-theme="dark"] .creator-info,
|
||||
[data-theme="dark"] .civitai-view {
|
||||
[data-theme="dark"] .civitai-view,
|
||||
[data-theme="dark"] .modal-send-btn {
|
||||
background: rgba(255, 255, 255, 0.03);
|
||||
border: 1px solid var(--lora-border);
|
||||
}
|
||||
@@ -875,7 +878,8 @@
|
||||
|
||||
/* Add hover effect for creator info */
|
||||
.creator-info:hover,
|
||||
.civitai-view:hover {
|
||||
.civitai-view:hover,
|
||||
.modal-send-btn:hover {
|
||||
background: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.1);
|
||||
border-color: var(--lora-accent);
|
||||
transform: translateY(-1px);
|
||||
@@ -910,3 +914,42 @@
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
}
|
||||
|
||||
/* Send to ComfyUI Button */
|
||||
.modal-send-btn {
|
||||
display: inline-flex;
|
||||
align-items: center;
|
||||
gap: 6px;
|
||||
padding: 6px 12px;
|
||||
background: rgba(0, 0, 0, 0.03);
|
||||
border: 1px solid rgba(0, 0, 0, 0.1);
|
||||
border-radius: var(--border-radius-sm);
|
||||
color: var(--text-color);
|
||||
cursor: pointer;
|
||||
font-weight: 500;
|
||||
font-size: 0.9em;
|
||||
transition: all 0.2s;
|
||||
}
|
||||
|
||||
[data-theme="dark"] .modal-send-btn {
|
||||
background: rgba(255, 255, 255, 0.03);
|
||||
border: 1px solid var(--lora-border);
|
||||
}
|
||||
|
||||
.modal-send-btn:hover {
|
||||
background: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.1);
|
||||
border-color: var(--lora-accent);
|
||||
transform: translateY(-1px);
|
||||
}
|
||||
|
||||
.modal-send-btn:active {
|
||||
transform: translateY(0);
|
||||
}
|
||||
|
||||
.modal-send-btn i {
|
||||
font-size: 14px;
|
||||
}
|
||||
|
||||
.modal-send-btn span {
|
||||
white-space: nowrap;
|
||||
}
|
||||
|
||||
@@ -53,6 +53,10 @@
|
||||
position: relative;
|
||||
}
|
||||
|
||||
.trigger-word-tag:not(.is-editing) {
|
||||
transition: background-color 0.2s ease, border-color 0.2s ease;
|
||||
}
|
||||
|
||||
.trigger-word-content {
|
||||
color: var(--lora-accent) !important;
|
||||
font-size: 0.85em;
|
||||
@@ -65,6 +69,38 @@
|
||||
border-color: var(--lora-accent);
|
||||
}
|
||||
|
||||
.trigger-words.edit-mode .trigger-word-tag {
|
||||
cursor: text;
|
||||
}
|
||||
|
||||
.trigger-word-tag.is-editing {
|
||||
align-items: center;
|
||||
flex: 0 1 min(var(--trigger-word-edit-width, 48ch), 100%);
|
||||
width: min(var(--trigger-word-edit-width, 48ch), 100%);
|
||||
height: var(--trigger-word-edit-height, auto);
|
||||
border-color: var(--lora-accent);
|
||||
transition: none;
|
||||
}
|
||||
|
||||
.trigger-word-edit-input {
|
||||
width: 100%;
|
||||
height: 100%;
|
||||
min-width: 0;
|
||||
box-sizing: border-box;
|
||||
padding: 1px 2px;
|
||||
border: none;
|
||||
resize: none;
|
||||
overflow: auto;
|
||||
outline: none;
|
||||
background: transparent;
|
||||
color: var(--lora-accent);
|
||||
font: inherit;
|
||||
font-size: 0.85em;
|
||||
line-height: 1.4;
|
||||
white-space: pre-wrap;
|
||||
overflow-wrap: anywhere;
|
||||
}
|
||||
|
||||
.trigger-word-copy {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
@@ -109,4 +145,4 @@
|
||||
padding: 2px 5px;
|
||||
border-radius: 8px;
|
||||
white-space: nowrap;
|
||||
}
|
||||
}
|
||||
|
||||
@@ -163,6 +163,18 @@
|
||||
cursor: pointer;
|
||||
}
|
||||
|
||||
.model-version-row.is-clickable .version-actions,
|
||||
.model-version-row.is-clickable .version-badges,
|
||||
.model-version-row.is-clickable .version-action,
|
||||
.model-version-row.is-clickable .version-civitai-link {
|
||||
cursor: default;
|
||||
}
|
||||
|
||||
.model-version-row.is-clickable .version-action,
|
||||
.model-version-row.is-clickable .version-civitai-link {
|
||||
cursor: pointer;
|
||||
}
|
||||
|
||||
.model-version-row.is-current {
|
||||
border-color: var(--lora-accent);
|
||||
box-shadow: 0 0 0 1px color-mix(in oklch, var(--lora-accent) 65%, transparent),
|
||||
@@ -217,6 +229,7 @@
|
||||
gap: 8px;
|
||||
font-weight: 600;
|
||||
font-size: 0.95rem;
|
||||
min-width: 0;
|
||||
}
|
||||
|
||||
.versions-tab-version-name {
|
||||
@@ -226,6 +239,27 @@
|
||||
max-width: 100%;
|
||||
}
|
||||
|
||||
.version-civitai-link {
|
||||
display: inline-flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
width: 24px;
|
||||
height: 24px;
|
||||
border-radius: 999px;
|
||||
color: var(--text-muted);
|
||||
text-decoration: none;
|
||||
flex: 0 0 auto;
|
||||
transition: color 0.2s ease, background-color 0.2s ease, transform 0.2s ease;
|
||||
}
|
||||
|
||||
.version-civitai-link:hover,
|
||||
.version-civitai-link:focus-visible {
|
||||
color: var(--lora-accent);
|
||||
background: color-mix(in oklch, var(--lora-accent) 12%, transparent);
|
||||
transform: translateY(-1px);
|
||||
outline: none;
|
||||
}
|
||||
|
||||
.version-badges {
|
||||
display: flex;
|
||||
flex-wrap: wrap;
|
||||
@@ -340,6 +374,14 @@
|
||||
background: color-mix(in oklch, var(--lora-surface) 35%, transparent);
|
||||
}
|
||||
|
||||
.version-action-disabled {
|
||||
background: transparent;
|
||||
border-color: var(--border-color);
|
||||
color: var(--text-muted);
|
||||
opacity: 0.6;
|
||||
cursor: not-allowed;
|
||||
}
|
||||
|
||||
.version-action:disabled {
|
||||
opacity: 0.6;
|
||||
cursor: not-allowed;
|
||||
|
||||
@@ -151,7 +151,8 @@ body.modal-open {
|
||||
[data-theme="dark"] .changelog-section,
|
||||
[data-theme="dark"] .update-info,
|
||||
[data-theme="dark"] .info-item,
|
||||
[data-theme="dark"] .path-preview {
|
||||
[data-theme="dark"] .path-preview,
|
||||
[data-theme="dark"] #bulkDownloadMissingLorasModal .bulk-download-loras-preview {
|
||||
background: rgba(255, 255, 255, 0.03);
|
||||
border: 1px solid var(--lora-border);
|
||||
}
|
||||
@@ -310,6 +311,161 @@ button:disabled,
  color: var(--lora-error, #ef4444);
}

.backup-status {
  background: rgba(0, 0, 0, 0.03);
  border: 1px solid rgba(0, 0, 0, 0.1);
  border-radius: var(--border-radius-sm);
  padding: var(--space-3);
}

[data-theme="dark"] .backup-status {
  background: rgba(255, 255, 255, 0.03);
  border: 1px solid var(--lora-border);
}

.backup-summary-grid {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(160px, 1fr));
  gap: var(--space-2);
  margin-bottom: var(--space-3);
}

.backup-summary-card {
  background: rgba(255, 255, 255, 0.5);
  border: 1px solid rgba(0, 0, 0, 0.06);
  border-radius: var(--border-radius-sm);
  padding: var(--space-2);
}

[data-theme="dark"] .backup-summary-card {
  background: rgba(255, 255, 255, 0.02);
  border-color: rgba(255, 255, 255, 0.05);
}

.backup-summary-label {
  color: var(--text-color);
  font-size: 0.85rem;
  opacity: 0.7;
  margin-bottom: 6px;
}

.backup-summary-value {
  color: var(--text-color);
  font-size: 1.1rem;
  font-weight: 600;
  line-height: 1.3;
  word-break: break-word;
}

.backup-summary-value.status-enabled {
  color: var(--lora-success, #10b981);
}

.backup-summary-value.status-disabled {
  color: var(--lora-error, #ef4444);
}

.backup-status-list {
  display: flex;
  flex-direction: column;
  gap: var(--space-2);
}

.backup-status-row {
  display: grid;
  grid-template-columns: minmax(140px, 180px) 1fr;
  gap: var(--space-2);
  align-items: start;
}

.backup-status-label {
  color: var(--text-color);
  font-weight: 500;
  opacity: 0.8;
}

.backup-status-content {
  min-width: 0;
}

.backup-status-primary {
  color: var(--text-color);
  font-weight: 600;
  line-height: 1.4;
}

.backup-status-secondary {
  color: var(--text-color);
  opacity: 0.72;
  font-size: 0.88rem;
  line-height: 1.4;
  word-break: break-word;
  margin-top: 2px;
}

.backup-location-details {
  border: 1px solid rgba(0, 0, 0, 0.1);
  border-radius: var(--border-radius-sm);
  background: rgba(0, 0, 0, 0.02);
}

[data-theme="dark"] .backup-location-details {
  border-color: var(--lora-border);
  background: rgba(255, 255, 255, 0.02);
}

.backup-location-details summary {
  cursor: pointer;
  padding: var(--space-2) var(--space-3);
  color: var(--text-color);
  font-weight: 500;
}

.backup-location-panel {
  display: grid;
  grid-template-columns: minmax(0, 1fr) auto;
  gap: var(--space-2);
  align-items: center;
  width: 100%;
  max-width: 100%;
  box-sizing: border-box;
  padding: 0 var(--space-3) var(--space-3);
}

.backup-location-panel .text-btn {
  justify-self: end;
}

.backup-location-path {
  display: block;
  min-width: 0;
  max-width: 100%;
  padding: 6px 8px;
  border-radius: var(--border-radius-sm);
  background: rgba(0, 0, 0, 0.05);
  color: var(--text-color);
  overflow-wrap: anywhere;
  word-break: break-word;
}

[data-theme="dark"] .backup-location-path {
  background: rgba(255, 255, 255, 0.05);
}

@media (max-width: 768px) {
  .backup-status-row {
    grid-template-columns: 1fr;
  }

  .backup-location-panel {
    grid-template-columns: 1fr;
  }

  .backup-location-panel .text-btn {
    justify-self: start;
  }
}

/* Add styles for delete preview image */
.delete-preview {
  max-width: 150px;
@@ -349,3 +505,87 @@ button:disabled,
  margin-top: var(--space-1);
  text-align: center;
}

/* Bulk Download Missing LoRAs Modal */
#bulkDownloadMissingLorasModal .modal-body {
  padding: var(--space-3);
}

#bulkDownloadMissingLorasModal .confirmation-message {
  color: var(--text-color);
  margin-bottom: var(--space-3);
  font-size: 1em;
  line-height: 1.5;
}

#bulkDownloadMissingLorasModal .bulk-download-loras-preview {
  background: rgba(0, 0, 0, 0.03);
  border: 1px solid rgba(0, 0, 0, 0.1);
  border-radius: var(--border-radius-sm);
  padding: var(--space-3);
  margin-bottom: var(--space-3);
}

#bulkDownloadMissingLorasModal .preview-title {
  font-weight: 600;
  margin-bottom: var(--space-2);
  color: var(--text-color);
  font-size: 0.95em;
}

#bulkDownloadMissingLorasModal .bulk-download-loras-list {
  list-style: none;
  padding: 0;
  margin: 0;
}

#bulkDownloadMissingLorasModal .bulk-download-loras-list li {
  display: flex;
  align-items: center;
  justify-content: space-between;
  padding: var(--space-1) 0;
  border-bottom: 1px solid var(--border-color);
  font-size: 0.9em;
}

#bulkDownloadMissingLorasModal .bulk-download-loras-list li:last-child {
  border-bottom: none;
}

#bulkDownloadMissingLorasModal .bulk-download-loras-list li.more-items {
  font-style: italic;
  opacity: 0.7;
  text-align: center;
  justify-content: center;
  padding: var(--space-2) 0;
}

#bulkDownloadMissingLorasModal .lora-name {
  font-weight: 500;
  color: var(--text-color);
  flex: 1;
}

#bulkDownloadMissingLorasModal .lora-version {
  font-size: 0.85em;
  opacity: 0.7;
  margin-left: var(--space-1);
  color: var(--text-muted);
}

#bulkDownloadMissingLorasModal .confirmation-note {
  display: flex;
  align-items: flex-start;
  gap: var(--space-2);
  padding: var(--space-2);
  background: rgba(59, 130, 246, 0.1);
  border-radius: var(--border-radius-sm);
  font-size: 0.9em;
  color: var(--text-color);
}

#bulkDownloadMissingLorasModal .confirmation-note i {
  color: var(--lora-accent);
  margin-top: 2px;
  flex-shrink: 0;
}