Compare commits

...

12 Commits

Author SHA1 Message Date
Will Miao
f09224152a feat: bump version to 0.9.11 2025-11-29 17:46:06 +08:00
Will Miao
df93670598 feat: add checkpoint metadata to EXIF recipe data
Add support for storing checkpoint information in image EXIF metadata. The checkpoint data is simplified and includes fields like model ID, version, name, hash, and base model. This allows for better tracking of AI model checkpoints used in image generation workflows.
2025-11-29 08:46:38 +08:00
Will Miao
073fb3a94a feat(recipe-parser): enhance LoRA metadata with local file matching
Add comprehensive local file matching for LoRA entries in recipe metadata:
- Add modelVersionId-based lookup via new _get_lora_from_version_index method
- Extend LoRA entry with additional fields: existsLocally, inLibrary, localPath, thumbnailUrl, size
- Improve local file detection by checking both SHA256 hash and modelVersionId
- Set default thumbnail URL and size values for missing LoRA files
- Add proper typing with Optional imports for better code clarity

This provides more accurate local file status and metadata for LoRA entries in recipes.
2025-11-29 08:29:05 +08:00
Will Miao
53c4165d82 feat(parser): enhance model metadata extraction in Automatic1111 parser
- Add MODEL_NAME_PATTERN regex to extract model names from parameters
- Extract model hash from parsed hashes when available in metadata
- Add checkpoint model hash and name extraction from parameters section
- Implement checkpoint resource processing from Civitai metadata
- Improve model information completeness for better recipe tracking
2025-11-29 08:13:55 +08:00
Will Miao
8cd4550189 feat: add Flux.2 D and ZImageTurbo model constants
Add new model constants for Flux.2 D and ZImageTurbo to the BASE_MODELS object,
along with their corresponding abbreviations in BASE_MODEL_ABBREVIATIONS. Also
include these new models in the appropriate categories within BASE_MODEL_CATEGORIES.

This update ensures the application can properly recognize and handle these
newly supported AI models in the system.
2025-11-28 11:42:46 +08:00
Will Miao
2b2e4fefab feat(tests): restructure test HTML to nest elements under model modal
Refactor the test HTML structure to properly nest all model metadata elements within the model modal container. This improves test accuracy by matching the actual DOM structure used in the application, ensuring that element selection and event handling work correctly during testing.
2025-11-27 20:44:05 +08:00
Will Miao
5f93648297 feat: scope DOM queries to modal element in ModelMetadata
Refactor updateModalFilePathReferences function to scope all DOM queries within the modal element. This prevents potential conflicts with other elements on the page that might have the same CSS selectors. Added helper functions scopedQuery and scopedQueryAll to limit element selection to the modal context, improving reliability and preventing unintended side effects.
2025-11-27 20:33:04 +08:00
pixelpaws
8a628f0bd0 Merge pull request #703 from willmiao/fix/showcase-listener-leaks
fix(showcase): tear down modal listeners
2025-11-27 20:09:45 +08:00
Will Miao
b67c8598d6 feat(metadata): clear stale cache entries when metadata is empty
Update metadata registry to remove cache entries when node metadata becomes empty instead of keeping stale data. This prevents accumulation of unused cache entries and ensures cache only contains valid metadata. Added test case to verify cache behavior when LoRA configurations are removed.
2025-11-27 20:04:38 +08:00
Will Miao
0254c9d0e9 fix(showcase): tear down modal listeners 2025-11-27 18:00:59 +08:00
Will Miao
ecb512995c feat(civitai): expand image metadata detection criteria, see #700
Add additional CivitAI image metadata fields to detection logic including generation parameters (prompt, steps, sampler, etc.) and model information. Also improve LoRA hash detection by checking both main metadata and nested meta objects. This ensures more comprehensive identification of CivitAI image metadata across different response formats.
2025-11-27 10:28:04 +08:00
Will Miao
f8b9fa9b20 fix(civitai): improve metadata parsing for nested structures, see #700
- Refactor metadata detection to handle nested "meta" objects
- Add support for lowercase "lora:" hash keys
- Extract metadata from nested "meta" field when present
- Update tests to verify nested metadata parsing
- Handle case-insensitive LORA hash detection

The changes ensure proper parsing of Civitai image metadata that may be wrapped in nested structures, improving compatibility with different API response formats.
2025-11-26 13:46:08 +08:00
21 changed files with 1040 additions and 107 deletions

View File

@@ -196,9 +196,11 @@ class MetadataRegistry:
                     node_metadata[category] = {}
                 node_metadata[category][node_id] = current_metadata[category][node_id]

-        # Save to cache if we have any metadata for this node
+        # Save new metadata or clear stale cache entries when metadata is empty
         if any(node_metadata.values()):
            self.node_cache[cache_key] = node_metadata
+        else:
+            self.node_cache.pop(cache_key, None)

     def clear_unused_cache(self):
         """Clean up node_cache entries that are no longer in use"""
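The new else branch boils down to a keep-or-evict decision. A minimal sketch with standalone names (not the real MetadataRegistry class):

```python
# Hedged sketch of the cache policy above: store when any category holds
# data, otherwise drop the stale entry. pop() with a default is a no-op
# for keys that were never cached.
node_cache = {}

def save_node_metadata(cache_key, node_metadata):
    if any(node_metadata.values()):
        node_cache[cache_key] = node_metadata
    else:
        node_cache.pop(cache_key, None)

save_node_metadata("wf:1", {"loras": {"7": {"name": "detail"}}})  # cached
save_node_metadata("wf:1", {"loras": {}})  # stale entry cleared
save_node_metadata("wf:2", {"loras": {}})  # never cached, no error
```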

View File

@@ -1,6 +1,7 @@
 """Parser for Automatic1111 metadata format."""
 import re
+import os
 import json
 import logging
 from typing import Dict, Any
@@ -22,6 +23,7 @@ class AutomaticMetadataParser(RecipeMetadataParser):
     CIVITAI_METADATA_REGEX = r', Civitai metadata:\s*(\{.*?\})'
     EXTRANETS_REGEX = r'<(lora|hypernet):([^:]+):(-?[0-9.]+)>'
     MODEL_HASH_PATTERN = r'Model hash: ([a-zA-Z0-9]+)'
+    MODEL_NAME_PATTERN = r'Model: ([^,]+)'
     VAE_HASH_PATTERN = r'VAE hash: ([a-zA-Z0-9]+)'

     def is_metadata_matching(self, user_comment: str) -> bool:
@@ -115,6 +117,12 @@ class AutomaticMetadataParser(RecipeMetadataParser):
             except json.JSONDecodeError:
                 logger.error("Error parsing hashes JSON")

+        # Pick up model hash from parsed hashes if available
+        if "hashes" in metadata and not metadata.get("model_hash"):
+            model_hash_from_hashes = metadata["hashes"].get("model")
+            if model_hash_from_hashes:
+                metadata["model_hash"] = model_hash_from_hashes
+
         # Extract Lora hashes in alternative format
         lora_hashes_match = re.search(self.LORA_HASHES_REGEX, params_section)
         if not hashes_match and lora_hashes_match:
@@ -137,6 +145,17 @@ class AutomaticMetadataParser(RecipeMetadataParser):
                 params_section = params_section.replace(lora_hashes_match.group(0), '')
             except Exception as e:
                 logger.error(f"Error parsing Lora hashes: {e}")

+        # Extract checkpoint model hash/name when provided outside Civitai resources
+        model_hash_match = re.search(self.MODEL_HASH_PATTERN, params_section)
+        if model_hash_match:
+            metadata["model_hash"] = model_hash_match.group(1).strip()
+            params_section = params_section.replace(model_hash_match.group(0), '')
+
+        model_name_match = re.search(self.MODEL_NAME_PATTERN, params_section)
+        if model_name_match:
+            metadata["model_name"] = model_name_match.group(1).strip()
+            params_section = params_section.replace(model_name_match.group(0), '')
+
         # Extract basic parameters
         param_pattern = r'([A-Za-z\s]+): ([^,]+)'
@@ -178,9 +197,10 @@ class AutomaticMetadataParser(RecipeMetadataParser):
         metadata["gen_params"] = gen_params

-        # Extract LoRA information
+        # Extract LoRA and checkpoint information
         loras = []
         base_model_counts = {}
+        checkpoint = None

         # First use Civitai resources if available (more reliable source)
         if metadata.get("civitai_resources"):
@@ -202,6 +222,50 @@ class AutomaticMetadataParser(RecipeMetadataParser):
                         resource["modelVersionId"] = air_modelVersionId
                 # --- End added ---

+                if resource.get("type") == "checkpoint" and resource.get("modelVersionId"):
+                    version_id = resource.get("modelVersionId")
+                    version_id_str = str(version_id)
+                    checkpoint_entry = {
+                        'id': version_id,
+                        'modelId': resource.get("modelId", 0),
+                        'name': resource.get("modelName", "Unknown Checkpoint"),
+                        'version': resource.get("modelVersionName", resource.get("versionName", "")),
+                        'type': resource.get("type", "checkpoint"),
+                        'existsLocally': False,
+                        'localPath': None,
+                        'file_name': resource.get("modelName", ""),
+                        'hash': resource.get("hash", "") or "",
+                        'thumbnailUrl': '/loras_static/images/no-preview.png',
+                        'baseModel': '',
+                        'size': 0,
+                        'downloadUrl': '',
+                        'isDeleted': False
+                    }
+
+                    if metadata_provider:
+                        try:
+                            civitai_info = await metadata_provider.get_model_version_info(version_id_str)
+                            checkpoint_entry = await self.populate_checkpoint_from_civitai(
+                                checkpoint_entry,
+                                civitai_info
+                            )
+                        except Exception as e:
+                            logger.error(
+                                "Error fetching Civitai info for checkpoint version %s: %s",
+                                version_id,
+                                e,
+                            )
+
+                    # Prefer the first checkpoint found
+                    if checkpoint_entry.get("baseModel"):
+                        base_model_value = checkpoint_entry["baseModel"]
+                        base_model_counts[base_model_value] = base_model_counts.get(base_model_value, 0) + 1
+                    if checkpoint is None:
+                        checkpoint = checkpoint_entry
+                    continue
+
                 if resource.get("type") in ["lora", "lycoris", "hypernet"] and resource.get("modelVersionId"):
                     # Initialize lora entry
                     lora_entry = {
@@ -237,6 +301,52 @@ class AutomaticMetadataParser(RecipeMetadataParser):
                     loras.append(lora_entry)

+        # Fallback checkpoint parsing from generic "Model" and "Model hash" fields
+        if checkpoint is None:
+            model_hash = metadata.get("model_hash")
+            if not model_hash and metadata.get("hashes"):
+                model_hash = metadata["hashes"].get("model")
+            model_name = metadata.get("model_name")
+            file_name = ""
+            if model_name:
+                cleaned_name = re.split(r"[\\\\/]", model_name)[-1]
+                file_name = os.path.splitext(cleaned_name)[0]
+
+            if model_hash or model_name:
+                checkpoint_entry = {
+                    'id': 0,
+                    'modelId': 0,
+                    'name': model_name or "Unknown Checkpoint",
+                    'version': '',
+                    'type': 'checkpoint',
+                    'hash': model_hash or "",
+                    'existsLocally': False,
+                    'localPath': None,
+                    'file_name': file_name,
+                    'thumbnailUrl': '/loras_static/images/no-preview.png',
+                    'baseModel': '',
+                    'size': 0,
+                    'downloadUrl': '',
+                    'isDeleted': False
+                }
+
+                if metadata_provider and model_hash:
+                    try:
+                        civitai_info = await metadata_provider.get_model_by_hash(model_hash)
+                        checkpoint_entry = await self.populate_checkpoint_from_civitai(
+                            checkpoint_entry,
+                            civitai_info
+                        )
+                    except Exception as e:
+                        logger.error(f"Error fetching Civitai info for checkpoint hash {model_hash}: {e}")
+
+                if checkpoint_entry.get("baseModel"):
+                    base_model_value = checkpoint_entry["baseModel"]
+                    base_model_counts[base_model_value] = base_model_counts.get(base_model_value, 0) + 1
+
+                checkpoint = checkpoint_entry
+
         # If no LoRAs from Civitai resources or to supplement, extract from metadata["hashes"]
         if not loras or len(loras) == 0:
             # Extract lora weights from extranet tags in prompt (for later use)
@@ -300,7 +410,9 @@ class AutomaticMetadataParser(RecipeMetadataParser):
         # Try to get base model from resources or make educated guess
         base_model = None
-        if base_model_counts:
+        if checkpoint and checkpoint.get("baseModel"):
+            base_model = checkpoint.get("baseModel")
+        elif base_model_counts:
             # Use the most common base model from the loras
             base_model = max(base_model_counts.items(), key=lambda x: x[1])[0]
@@ -317,6 +429,10 @@ class AutomaticMetadataParser(RecipeMetadataParser):
             'gen_params': filtered_gen_params,
             'from_automatic_metadata': True
         }
+
+        if checkpoint:
+            result['checkpoint'] = checkpoint
+            result['model'] = checkpoint

         return result
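The two new regex patterns can be exercised in isolation. A sketch against an invented Automatic1111 parameters line (the hash and model name are made up), including the same file-name cleanup the fallback applies:

```python
import os
import re

# Patterns from the parser above; the sample string is an assumption.
MODEL_HASH_PATTERN = r'Model hash: ([a-zA-Z0-9]+)'
MODEL_NAME_PATTERN = r'Model: ([^,]+)'

params = "Steps: 30, Sampler: Euler a, Model hash: 31e35c80fc, Model: sd\\juggernautXL_v9.safetensors, Seed: 42"

hash_match = re.search(MODEL_HASH_PATTERN, params)
model_hash = hash_match.group(1).strip() if hash_match else None

# "Model hash:" does not match "Model: ", so this finds only the name field.
name_match = re.search(MODEL_NAME_PATTERN, params)
model_name = name_match.group(1).strip() if name_match else None

# Mirror the fallback's cleanup: strip any directory prefix and extension.
file_name = os.path.splitext(re.split(r"[\\/]", model_name)[-1])[0]
```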

View File

@@ -23,13 +23,48 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
         """
         if not metadata or not isinstance(metadata, dict):
             return False

-        # Check for key markers specific to Civitai image metadata
-        return any([
-            "resources" in metadata,
-            "civitaiResources" in metadata,
-            "additionalResources" in metadata
-        ])
+        def has_markers(payload: Dict[str, Any]) -> bool:
+            # Check for common CivitAI image metadata fields
+            civitai_image_fields = (
+                "resources",
+                "civitaiResources",
+                "additionalResources",
+                "hashes",
+                "prompt",
+                "negativePrompt",
+                "steps",
+                "sampler",
+                "cfgScale",
+                "seed",
+                "width",
+                "height",
+                "Model",
+                "Model hash"
+            )
+            return any(key in payload for key in civitai_image_fields)
+
+        # Check the main metadata object
+        if has_markers(metadata):
+            return True
+
+        # Check for LoRA hash patterns
+        hashes = metadata.get("hashes")
+        if isinstance(hashes, dict) and any(str(key).lower().startswith("lora:") for key in hashes):
+            return True
+
+        # Check nested meta object (common in CivitAI image responses)
+        nested_meta = metadata.get("meta")
+        if isinstance(nested_meta, dict):
+            if has_markers(nested_meta):
+                return True
+
+            # Also check for LoRA hash patterns in nested meta
+            hashes = nested_meta.get("hashes")
+            if isinstance(hashes, dict) and any(str(key).lower().startswith("lora:") for key in hashes):
+                return True
+
+        return False

     async def parse_metadata(self, metadata, recipe_scanner=None, civitai_client=None) -> Dict[str, Any]:
         """Parse metadata from Civitai image format
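The broadened detection above reduces to three checks: top-level markers, case-insensitive "lora:" hash keys, and the same two checks on a nested "meta" object. A condensed sketch (field list shortened; the real parser checks more keys such as steps, sampler, and seed):

```python
# Shortened marker list for illustration only.
CIVITAI_IMAGE_FIELDS = ("resources", "civitaiResources", "additionalResources",
                        "hashes", "prompt", "negativePrompt")

def looks_like_civitai_image(metadata):
    if not metadata or not isinstance(metadata, dict):
        return False

    def has_markers(payload):
        return any(key in payload for key in CIVITAI_IMAGE_FIELDS)

    def has_lora_hash(payload):
        hashes = payload.get("hashes")
        return isinstance(hashes, dict) and any(
            str(key).lower().startswith("lora:") for key in hashes
        )

    if has_markers(metadata) or has_lora_hash(metadata):
        return True

    # CivitAI image responses often nest the real metadata under "meta".
    nested = metadata.get("meta")
    if isinstance(nested, dict):
        return has_markers(nested) or has_lora_hash(nested)
    return False
```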
@@ -45,6 +80,26 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
         try:
             # Get metadata provider instead of using civitai_client directly
             metadata_provider = await get_default_metadata_provider()

+            # Civitai image responses may wrap the actual metadata inside a "meta" key
+            if (
+                isinstance(metadata, dict)
+                and "meta" in metadata
+                and isinstance(metadata["meta"], dict)
+            ):
+                inner_meta = metadata["meta"]
+                if any(
+                    key in inner_meta
+                    for key in (
+                        "resources",
+                        "civitaiResources",
+                        "additionalResources",
+                        "hashes",
+                        "prompt",
+                        "negativePrompt",
+                    )
+                ):
+                    metadata = inner_meta
+
             # Initialize result structure
             result = {
@@ -62,8 +117,9 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
             lora_hashes = {}
             if "hashes" in metadata and isinstance(metadata["hashes"], dict):
                 for key, hash_value in metadata["hashes"].items():
-                    if key.startswith("LORA:"):
-                        lora_name = key.replace("LORA:", "")
+                    key_str = str(key)
+                    if key_str.lower().startswith("lora:"):
+                        lora_name = key_str.split(":", 1)[1]
                         lora_hashes[lora_name] = hash_value

             # Extract prompt and negative prompt
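The case-insensitive key handling above, in isolation. A sketch (the sample keys are invented); note that `split(":", 1)` only removes the prefix, so colons inside a LoRA name survive:

```python
# Accept "LORA:", "lora:", or any casing in between, and strip only the
# prefix rather than replacing every occurrence of the marker.
def extract_lora_hashes(hashes):
    lora_hashes = {}
    for key, hash_value in hashes.items():
        key_str = str(key)
        if key_str.lower().startswith("lora:"):
            lora_hashes[key_str.split(":", 1)[1]] = hash_value
    return lora_hashes
```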

View File

@@ -1,5 +1,6 @@
 """Parser for meta format (Lora_N Model hash) metadata."""
+import os
 import re
 import logging
 from typing import Dict, Any
@@ -145,14 +146,53 @@ class MetaFormatParser(RecipeMetadataParser):
                 loras.append(lora_entry)

-        # Extract model information
-        model = None
-        if 'model' in metadata:
-            model = metadata['model']
+        # Extract checkpoint information from generic Model/Model hash fields
+        checkpoint = None
+        model_hash = metadata.get("model_hash")
+        model_name = metadata.get("model")
+        if model_hash or model_name:
+            cleaned_name = None
+            if model_name:
+                cleaned_name = re.split(r"[\\\\/]", model_name)[-1]
+                cleaned_name = os.path.splitext(cleaned_name)[0]
+
+            checkpoint_entry = {
+                'id': 0,
+                'modelId': 0,
+                'name': model_name or "Unknown Checkpoint",
+                'version': '',
+                'type': 'checkpoint',
+                'hash': model_hash or "",
+                'existsLocally': False,
+                'localPath': None,
+                'file_name': cleaned_name or (model_name or ""),
+                'thumbnailUrl': '/loras_static/images/no-preview.png',
+                'baseModel': '',
+                'size': 0,
+                'downloadUrl': '',
+                'isDeleted': False
+            }
+
+            if metadata_provider and model_hash:
+                try:
+                    civitai_info = await metadata_provider.get_model_by_hash(model_hash)
+                    checkpoint_entry = await self.populate_checkpoint_from_civitai(
+                        checkpoint_entry,
+                        civitai_info
+                    )
+                except Exception as e:
+                    logger.error(f"Error fetching Civitai info for checkpoint hash {model_hash}: {e}")
+
+            if checkpoint_entry.get("baseModel"):
+                base_model_value = checkpoint_entry["baseModel"]
+                base_model_counts[base_model_value] = base_model_counts.get(base_model_value, 0) + 1
+
+            checkpoint = checkpoint_entry

-        # Set base_model to the most common one from civitai_info
-        base_model = None
-        if base_model_counts:
+        # Set base_model to the most common one from civitai_info or checkpoint
+        base_model = checkpoint["baseModel"] if checkpoint and checkpoint.get("baseModel") else None
+        if not base_model and base_model_counts:
             base_model = max(base_model_counts.items(), key=lambda x: x[1])[0]

         # Extract generation parameters for recipe metadata
@@ -170,7 +210,8 @@ class MetaFormatParser(RecipeMetadataParser):
                 'loras': loras,
                 'gen_params': gen_params,
                 'raw_metadata': metadata,
-                'from_meta_format': True
+                'from_meta_format': True,
+                **({'checkpoint': checkpoint, 'model': checkpoint} if checkpoint else {})
             }

         except Exception as e:
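The base-model selection above prefers the checkpoint's own value and falls back to a vote over the LoRAs' base models. A sketch with invented counts:

```python
# No checkpoint parsed in this sketch, so the vote decides.
base_model_counts = {"SDXL 1.0": 3, "Flux.1 D": 1}
checkpoint = None

base_model = checkpoint["baseModel"] if checkpoint and checkpoint.get("baseModel") else None
if not base_model and base_model_counts:
    # max() over (name, count) pairs keyed on the count picks the most
    # common base model among the recipe's LoRAs.
    base_model = max(base_model_counts.items(), key=lambda x: x[1])[0]
```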

View File

@@ -3,7 +3,7 @@
 import re
 import json
 import logging
-from typing import Dict, Any
+from typing import Dict, Any, Optional
 from ...config import config
 from ..base import RecipeMetadataParser
 from ..constants import GEN_PARAM_KEYS
@@ -16,6 +16,28 @@ class RecipeFormatParser(RecipeMetadataParser):
     # Regular expression pattern for extracting recipe metadata
     METADATA_MARKER = r'Recipe metadata: (\{.*\})'

+    async def _get_lora_from_version_index(self, recipe_scanner, model_version_id: Any) -> Optional[Dict[str, Any]]:
+        """Return a cached LoRA entry by modelVersionId if available."""
+        if not recipe_scanner or not getattr(recipe_scanner, "_lora_scanner", None):
+            return None
+
+        try:
+            normalized_id = int(model_version_id)
+        except (TypeError, ValueError):
+            return None
+
+        try:
+            cache = await recipe_scanner._lora_scanner.get_cached_data()
+        except Exception as exc:  # pragma: no cover - defensive logging
+            logger.debug("Unable to load lora cache for version lookup: %s", exc)
+            return None
+
+        if not cache or not getattr(cache, "version_index", None):
+            return None
+
+        return cache.version_index.get(normalized_id)
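The id normalization in the helper above matters because recipe metadata may carry the version id as an int, a numeric string, or junk, while the cache index is keyed by int. A standalone sketch (the index contents are illustrative):

```python
# Coerce whatever the metadata supplied into an int key, or bail out.
def normalize_version_id(model_version_id):
    try:
        return int(model_version_id)
    except (TypeError, ValueError):
        return None

# Illustrative stand-in for the scanner's version_index cache.
version_index = {123456: {"file_name": "my_lora"}}
entry = version_index.get(normalize_version_id("123456"))
```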
     def is_metadata_matching(self, user_comment: str) -> bool:
         """Check if the user comment matches the metadata format"""

@@ -53,49 +75,110 @@ class RecipeFormatParser(RecipeMetadataParser):
                     'type': 'lora',
                     'weight': lora.get('strength', 1.0),
                     'file_name': lora.get('file_name', ''),
-                    'hash': lora.get('hash', '')
+                    'hash': lora.get('hash', ''),
+                    'existsLocally': False,
+                    'inLibrary': False,
+                    'localPath': None,
+                    'thumbnailUrl': '/loras_static/images/no-preview.png',
+                    'size': 0
                 }

                 # Check if this LoRA exists locally by SHA256 hash
-                if lora.get('hash') and recipe_scanner:
+                if recipe_scanner:
                     lora_scanner = recipe_scanner._lora_scanner
-                    exists_locally = lora_scanner.has_hash(lora['hash'])
-                    if exists_locally:
-                        lora_cache = await lora_scanner.get_cached_data()
-                        lora_item = next((item for item in lora_cache.raw_data if item['sha256'].lower() == lora['hash'].lower()), None)
-                        if lora_item:
-                            lora_entry['existsLocally'] = True
-                            lora_entry['localPath'] = lora_item['file_path']
-                            lora_entry['file_name'] = lora_item['file_name']
-                            lora_entry['size'] = lora_item['size']
-                            lora_entry['thumbnailUrl'] = config.get_preview_static_url(lora_item['preview_url'])
-                    else:
-                        lora_entry['existsLocally'] = False
-                        lora_entry['localPath'] = None
+                    if lora.get('hash'):
+                        exists_locally = lora_scanner.has_hash(lora['hash'])
+                        if exists_locally:
+                            lora_cache = await lora_scanner.get_cached_data()
+                            lora_item = next((item for item in lora_cache.raw_data if item['sha256'].lower() == lora['hash'].lower()), None)
+                            if lora_item:
+                                lora_entry['existsLocally'] = True
+                                lora_entry['inLibrary'] = True
+                                lora_entry['localPath'] = lora_item['file_path']
+                                lora_entry['file_name'] = lora_item['file_name']
+                                lora_entry['size'] = lora_item['size']
+                                lora_entry['thumbnailUrl'] = config.get_preview_static_url(lora_item['preview_url'])
+                        else:
+                            lora_entry['existsLocally'] = False
+                            lora_entry['inLibrary'] = False
+                            lora_entry['localPath'] = None
+
+                    # If we still don't have a local match, try matching by modelVersionId
+                    if not lora_entry['existsLocally'] and lora.get('modelVersionId') is not None:
+                        cached_lora = await self._get_lora_from_version_index(recipe_scanner, lora.get('modelVersionId'))
+                        if cached_lora:
+                            lora_entry['existsLocally'] = True
+                            lora_entry['inLibrary'] = True
+                            lora_entry['localPath'] = cached_lora.get('file_path')
+                            lora_entry['file_name'] = cached_lora.get('file_name') or lora_entry['file_name']
+                            lora_entry['size'] = cached_lora.get('size', lora_entry['size'])
+                            if cached_lora.get('sha256'):
+                                lora_entry['hash'] = cached_lora['sha256']
+                            preview_url = cached_lora.get('preview_url')
+                            if preview_url:
+                                lora_entry['thumbnailUrl'] = config.get_preview_static_url(preview_url)

-                # Try to get additional info from Civitai if we have a model version ID
-                if lora.get('modelVersionId') and metadata_provider:
+                # Try to get additional info from Civitai if we have a model version ID and still missing locally
+                if not lora_entry['existsLocally'] and lora.get('modelVersionId') and metadata_provider:
                     try:
                         civitai_info_tuple = await metadata_provider.get_model_version_info(lora['modelVersionId'])
                         # Populate lora entry with Civitai info
                         populated_entry = await self.populate_lora_from_civitai(
                             lora_entry,
                             civitai_info_tuple,
                             recipe_scanner,
                             None,  # No need to track base model counts
-                            lora['hash']
+                            lora_entry.get('hash', '')
                         )
                         if populated_entry is None:
                             continue  # Skip invalid LoRA types
                         lora_entry = populated_entry
                     except Exception as e:
                         logger.error(f"Error fetching Civitai info for LoRA: {e}")
                         lora_entry['thumbnailUrl'] = '/loras_static/images/no-preview.png'

                 loras.append(lora_entry)

             logger.info(f"Found {len(loras)} loras in recipe metadata")

+            # Process checkpoint information if present
+            checkpoint = None
+            checkpoint_data = recipe_metadata.get('checkpoint') or {}
+            if isinstance(checkpoint_data, dict) and checkpoint_data:
+                version_id = checkpoint_data.get('modelVersionId') or checkpoint_data.get('id')
+                checkpoint_entry = {
+                    'id': version_id or 0,
+                    'modelId': checkpoint_data.get('modelId', 0),
+                    'name': checkpoint_data.get('name', 'Unknown Checkpoint'),
+                    'version': checkpoint_data.get('version', ''),
+                    'type': checkpoint_data.get('type', 'checkpoint'),
+                    'hash': checkpoint_data.get('hash', ''),
+                    'existsLocally': False,
+                    'localPath': None,
+                    'file_name': checkpoint_data.get('file_name', ''),
+                    'thumbnailUrl': '/loras_static/images/no-preview.png',
+                    'baseModel': '',
+                    'size': 0,
+                    'downloadUrl': '',
+                    'isDeleted': False
+                }
+
+                if metadata_provider:
+                    try:
+                        civitai_info = None
+                        if version_id:
+                            civitai_info = await metadata_provider.get_model_version_info(str(version_id))
+                        elif checkpoint_entry.get('hash'):
+                            civitai_info = await metadata_provider.get_model_by_hash(checkpoint_entry['hash'])
+                        if civitai_info:
+                            checkpoint_entry = await self.populate_checkpoint_from_civitai(checkpoint_entry, civitai_info)
+                    except Exception as e:
+                        logger.error(f"Error fetching Civitai info for checkpoint in recipe metadata: {e}")
+
+                checkpoint = checkpoint_entry
+
             # Filter gen_params to only include recognized keys
             filtered_gen_params = {}
@@ -105,12 +188,13 @@ class RecipeFormatParser(RecipeMetadataParser):
                     filtered_gen_params[key] = value

             return {
-                'base_model': recipe_metadata.get('base_model', ''),
+                'base_model': checkpoint['baseModel'] if checkpoint and checkpoint.get('baseModel') else recipe_metadata.get('base_model', ''),
                 'loras': loras,
                 'gen_params': filtered_gen_params,
                 'tags': recipe_metadata.get('tags', []),
                 'title': recipe_metadata.get('title', ''),
-                'from_recipe_metadata': True
+                'from_recipe_metadata': True,
+                **({'checkpoint': checkpoint, 'model': checkpoint} if checkpoint else {})
             }

         except Exception as e:
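The conditional-spread idiom in the return statement above adds the checkpoint keys only when one exists, so consumers never see a null checkpoint field. A minimal sketch:

```python
# **({...} if cond else {}) merges extra keys into a dict literal only
# when the condition holds; otherwise it contributes nothing.
def build_result(checkpoint):
    return {
        'loras': [],
        'from_recipe_metadata': True,
        **({'checkpoint': checkpoint, 'model': checkpoint} if checkpoint else {})
    }
```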

View File

@@ -107,6 +107,12 @@ class RecipeAnalysisService:
                 raise RecipeDownloadError("No image URL found in Civitai response")

             await self._download_image(image_url, temp_path)
             metadata = image_info.get("meta") if "meta" in image_info else None
+            if (
+                isinstance(metadata, dict)
+                and "meta" in metadata
+                and isinstance(metadata["meta"], dict)
+            ):
+                metadata = metadata["meta"]
         else:
             await self._download_image(url, temp_path)
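The unwrap guard above handles Civitai image responses that nest the generation metadata one level deeper under "meta". As a standalone sketch:

```python
# Return the inner "meta" dict when present, otherwise the input unchanged.
def unwrap_nested_meta(metadata):
    if (
        isinstance(metadata, dict)
        and "meta" in metadata
        and isinstance(metadata["meta"], dict)
    ):
        return metadata["meta"]
    return metadata
```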

View File

@@ -140,6 +140,28 @@ class ExifUtils:
         if metadata:
             # Remove any existing recipe metadata
             metadata = ExifUtils.remove_recipe_metadata(metadata)

+        # Prepare checkpoint data
+        checkpoint_data = recipe_data.get("checkpoint") or {}
+        simplified_checkpoint = None
+        if isinstance(checkpoint_data, dict) and checkpoint_data:
+            simplified_checkpoint = {
+                "type": checkpoint_data.get("type", "checkpoint"),
+                "modelId": checkpoint_data.get("modelId", 0),
+                "modelVersionId": checkpoint_data.get("modelVersionId")
+                or checkpoint_data.get("id", 0),
+                "modelName": checkpoint_data.get(
+                    "modelName", checkpoint_data.get("name", "")
+                ),
+                "modelVersionName": checkpoint_data.get(
+                    "modelVersionName", checkpoint_data.get("version", "")
+                ),
+                "hash": checkpoint_data.get("hash", "").lower()
+                if checkpoint_data.get("hash")
+                else "",
+                "file_name": checkpoint_data.get("file_name", ""),
+                "baseModel": checkpoint_data.get("baseModel", ""),
+            }
+
         # Prepare simplified loras data
         simplified_loras = []
@@ -160,7 +182,8 @@ class ExifUtils:
             'base_model': recipe_data.get('base_model', ''),
             'loras': simplified_loras,
             'gen_params': recipe_data.get('gen_params', {}),
-            'tags': recipe_data.get('tags', [])
+            'tags': recipe_data.get('tags', []),
+            **({'checkpoint': simplified_checkpoint} if simplified_checkpoint else {})
         }

         # Convert to JSON string
@@ -359,4 +382,4 @@ class ExifUtils:
                 return f.read(), os.path.splitext(image_data)[1]
         except Exception:
-            return image_data, '.jpg'  # Last resort fallback
+            return image_data, '.jpg'
         return image_data, '.jpg'
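The EXIF simplification above keeps only the identifying checkpoint fields and normalizes the hash to lowercase before writing. A sketch as a free function (the real code builds this inline inside ExifUtils):

```python
# Reduce a full checkpoint entry to the fields worth embedding in EXIF.
def simplify_checkpoint(checkpoint_data):
    if not isinstance(checkpoint_data, dict) or not checkpoint_data:
        return None
    return {
        "type": checkpoint_data.get("type", "checkpoint"),
        "modelId": checkpoint_data.get("modelId", 0),
        "modelVersionId": checkpoint_data.get("modelVersionId")
        or checkpoint_data.get("id", 0),
        "modelName": checkpoint_data.get("modelName", checkpoint_data.get("name", "")),
        # Lowercase the hash so EXIF comparisons are case-insensitive.
        "hash": checkpoint_data.get("hash", "").lower()
        if checkpoint_data.get("hash")
        else "",
        "baseModel": checkpoint_data.get("baseModel", ""),
    }
```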

View File

@@ -1,7 +1,7 @@
 [project]
 name = "comfyui-lora-manager"
 description = "Revolutionize your workflow with the ultimate LoRA companion for ComfyUI!"
-version = "0.9.10"
+version = "0.9.11"
 license = {file = "LICENSE"}
 dependencies = [
     "aiohttp",

View File

@@ -38,52 +38,57 @@ function updateModalFilePathReferences(newFilePath) {
     }

     const modalElement = document.getElementById('modelModal');
-    if (modalElement) {
-        modalElement.dataset.filePath = newFilePath;
-        modalElement.setAttribute('data-file-path', newFilePath);
+    if (!modalElement) {
+        return;
     }

-    const modelNameContent = document.querySelector('.model-name-content');
+    modalElement.dataset.filePath = newFilePath;
+    modalElement.setAttribute('data-file-path', newFilePath);
+
+    const scopedQuery = (selector) => modalElement.querySelector(selector);
+    const scopedQueryAll = (selector) => modalElement.querySelectorAll(selector);
+
+    const modelNameContent = scopedQuery('.model-name-content');
     if (modelNameContent && modelNameContent.dataset) {
         modelNameContent.dataset.filePath = newFilePath;
         modelNameContent.setAttribute('data-file-path', newFilePath);
     }

-    const baseModelContent = document.querySelector('.base-model-content');
+    const baseModelContent = scopedQuery('.base-model-content');
     if (baseModelContent && baseModelContent.dataset) {
         baseModelContent.dataset.filePath = newFilePath;
         baseModelContent.setAttribute('data-file-path', newFilePath);
     }

-    const fileNameContent = document.querySelector('.file-name-content');
+    const fileNameContent = scopedQuery('.file-name-content');
     if (fileNameContent && fileNameContent.dataset) {
         fileNameContent.dataset.filePath = newFilePath;
         fileNameContent.setAttribute('data-file-path', newFilePath);
     }

-    const editTagsBtn = document.querySelector('.edit-tags-btn');
+    const editTagsBtn = scopedQuery('.edit-tags-btn');
     if (editTagsBtn) {
         editTagsBtn.dataset.filePath = newFilePath;
         editTagsBtn.setAttribute('data-file-path', newFilePath);
     }

-    const editTriggerWordsBtn = document.querySelector('.edit-trigger-words-btn');
+    const editTriggerWordsBtn = scopedQuery('.edit-trigger-words-btn');
     if (editTriggerWordsBtn) {
         editTriggerWordsBtn.dataset.filePath = newFilePath;
         editTriggerWordsBtn.setAttribute('data-file-path', newFilePath);
     }

-    document.querySelectorAll('[data-action="open-file-location"]').forEach((el) => {
+    scopedQueryAll('[data-action="open-file-location"]').forEach((el) => {
         el.dataset.filepath = newFilePath;
         el.setAttribute('data-filepath', newFilePath);
     });
-    document.querySelectorAll('[data-file-path]').forEach((el) => {
+    scopedQueryAll('[data-file-path]').forEach((el) => {
         el.dataset.filePath = newFilePath;
         el.setAttribute('data-file-path', newFilePath);
     });
-    document.querySelectorAll('[data-filepath]').forEach((el) => {
+    scopedQueryAll('[data-filepath]').forEach((el) => {
         el.dataset.filepath = newFilePath;
         el.setAttribute('data-filepath', newFilePath);
     });

View File

@@ -454,6 +454,8 @@ export async function showModelModal(model, modelType) {
         </div>
     `;

+    let showcaseCleanup;
+
     const onCloseCallback = function() {
         // Clean up all handlers when modal closes for LoRA
         const modalElement = document.getElementById(modalId);
@@ -461,6 +463,10 @@ export async function showModelModal(model, modelType) {
             modalElement.removeEventListener('click', modalElement._clickHandler);
             delete modalElement._clickHandler;
         }
+
+        if (showcaseCleanup) {
+            showcaseCleanup();
+            showcaseCleanup = null;
+        }
     };

     modalManager.showModal(modalId, content, null, onCloseCallback);
@@ -475,7 +481,7 @@ export async function showModelModal(model, modelType) {
         currentVersionId: civitaiVersionId,
     });
     setupEditableFields(modelWithFullData.file_path, modelType);
-    setupShowcaseScroll(modalId);
+    showcaseCleanup = setupShowcaseScroll(modalId);
     setupTabSwitching({
         onTabChange: async (tab) => {
             if (tab === 'versions') {

View File

@@ -15,6 +15,18 @@ import {
import { generateMetadataPanel } from './MetadataPanel.js';
import { generateImageWrapper, generateVideoWrapper } from './MediaRenderers.js';
export const showcaseListenerMetrics = {
wheelListeners: 0,
mutationObservers: 0,
backToTopHandlers: 0,
};
export function resetShowcaseListenerMetrics() {
showcaseListenerMetrics.wheelListeners = 0;
showcaseListenerMetrics.mutationObservers = 0;
showcaseListenerMetrics.backToTopHandlers = 0;
}
/**
* Load example images asynchronously
* @param {Array} images - Array of image objects (both regular and custom)
@@ -524,8 +536,8 @@ export function scrollToTop(button) {
* @param {string} modalId - ID of the modal element
*/
export function setupShowcaseScroll(modalId) {
// Listen for wheel events
document.addEventListener('wheel', (event) => {
const wheelOptions = { passive: false };
const wheelHandler = (event) => {
const modalContent = document.querySelector(`#${modalId} .modal-content`);
if (!modalContent) return;
@@ -543,7 +555,9 @@ export function setupShowcaseScroll(modalId) {
event.preventDefault();
}
}
}, { passive: false });
};
document.addEventListener('wheel', wheelHandler, wheelOptions);
showcaseListenerMetrics.wheelListeners += 1;
// Use MutationObserver to set up back-to-top button when modal content is added
const observer = new MutationObserver((mutations) => {
@@ -558,12 +572,28 @@ export function setupShowcaseScroll(modalId) {
});
observer.observe(document.body, { childList: true, subtree: true });
showcaseListenerMetrics.mutationObservers += 1;
// Try to set up the button immediately in case the modal is already open
const modalContent = document.querySelector(`#${modalId} .modal-content`);
if (modalContent) {
setupBackToTopButton(modalContent);
}
let cleanedUp = false;
return () => {
if (cleanedUp) {
return;
}
cleanedUp = true;
document.removeEventListener('wheel', wheelHandler, wheelOptions);
showcaseListenerMetrics.wheelListeners -= 1;
observer.disconnect();
showcaseListenerMetrics.mutationObservers -= 1;
const modalContent = document.querySelector(`#${modalId} .modal-content`);
teardownBackToTopButton(modalContent);
};
}
/**
@@ -571,11 +601,9 @@ export function setupShowcaseScroll(modalId) {
* @param {HTMLElement} modalContent - Modal content element
*/
function setupBackToTopButton(modalContent) {
// Remove any existing scroll listeners to avoid duplicates
modalContent.onscroll = null;
// Add new scroll listener
modalContent.addEventListener('scroll', () => {
teardownBackToTopButton(modalContent);
const handler = () => {
const backToTopBtn = modalContent.querySelector('.back-to-top');
if (backToTopBtn) {
if (modalContent.scrollTop > 300) {
@@ -584,8 +612,23 @@ function setupBackToTopButton(modalContent) {
backToTopBtn.classList.remove('visible');
}
}
});
// Trigger a scroll event to check initial position
modalContent.dispatchEvent(new Event('scroll'));
}
};
modalContent._backToTopScrollHandler = handler;
modalContent.addEventListener('scroll', handler);
showcaseListenerMetrics.backToTopHandlers += 1;
handler();
}
function teardownBackToTopButton(modalContent) {
if (!modalContent) {
return;
}
const existingHandler = modalContent._backToTopScrollHandler;
if (existingHandler) {
modalContent.removeEventListener('scroll', existingHandler);
delete modalContent._backToTopScrollHandler;
showcaseListenerMetrics.backToTopHandlers -= 1;
}
}
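The setup/teardown pair above follows one pattern: store the bound handler on the element itself so teardown can remove the exact same function reference, and mirror every add/remove in a metrics counter so tests can assert the count returns to zero. A minimal sketch of that pattern (`attachTracked`, `detachTracked`, and `listenerMetrics` are illustrative names, not the module's API):

```javascript
// Bookkeeping pattern: keep a reference to the registered handler on the
// target, and count registrations so leaks are observable in tests.
const listenerMetrics = { scrollHandlers: 0 };

function attachTracked(el, eventName, handler) {
  // Tear down any previous handler first so repeated setup stays idempotent.
  detachTracked(el, eventName);
  el._trackedHandler = handler;
  el.addEventListener(eventName, handler);
  listenerMetrics.scrollHandlers += 1;
}

function detachTracked(el, eventName) {
  const existing = el._trackedHandler;
  if (existing) {
    // Removing the stored reference guarantees we detach the same function
    // we attached; an inline arrow could never be removed this way.
    el.removeEventListener(eventName, existing);
    delete el._trackedHandler;
    listenerMetrics.scrollHandlers -= 1;
  }
}
```

Because `attachTracked` detaches before attaching, calling it twice on the same element leaves the counter at 1, and double-teardown is a no-op — the same invariants the new metrics test checks.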

View File

@@ -83,6 +83,7 @@ export class ImageProcessor {
}
this.importManager.recipeData = recipeData;
this._ensureCheckpointMetadata();
// Check if we have an error message
if (this.importManager.recipeData.error) {
@@ -134,6 +135,7 @@ export class ImageProcessor {
}
this.importManager.recipeData = recipeData;
this._ensureCheckpointMetadata();
// Check if we have an error message
if (this.importManager.recipeData.error) {
@@ -188,6 +190,7 @@ export class ImageProcessor {
}
this.importManager.recipeData = recipeData;
this._ensureCheckpointMetadata();
// Check if we have an error message
if (this.importManager.recipeData.error) {
@@ -215,4 +218,12 @@ export class ImageProcessor {
this.importManager.loadingManager.hide();
}
}
_ensureCheckpointMetadata() {
if (!this.importManager.recipeData) return;
if (this.importManager.recipeData.model && !this.importManager.recipeData.checkpoint) {
this.importManager.recipeData.checkpoint = this.importManager.recipeData.model;
}
}
}
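The `_ensureCheckpointMetadata` hook added after each `recipeData` assignment is a small backfill: older recipe payloads only carry `model`, so it is mirrored into `checkpoint` when that field is absent. A standalone sketch of the same logic:

```javascript
// Backfill: copy `model` into `checkpoint` for recipes that predate the
// checkpoint field, without overwriting an existing checkpoint.
function ensureCheckpointMetadata(recipeData) {
  if (!recipeData) return recipeData;
  if (recipeData.model && !recipeData.checkpoint) {
    recipeData.checkpoint = recipeData.model;
  }
  return recipeData;
}
```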

View File

@@ -26,6 +26,7 @@ export const BASE_MODELS = {
FLUX_1_S: "Flux.1 S",
FLUX_1_KREA: "Flux.1 Krea",
FLUX_1_KONTEXT: "Flux.1 Kontext",
FLUX_2_D: "Flux.2 D",
AURAFLOW: "AuraFlow",
CHROMA: "Chroma",
PIXART_A: "PixArt a",
@@ -38,6 +39,7 @@ export const BASE_MODELS = {
PONY: "Pony",
HIDREAM: "HiDream",
QWEN: "Qwen",
ZIMAGE_TURBO: "ZImageTurbo",
// Video models
SVD: "SVD",
@@ -89,6 +91,7 @@ export const BASE_MODEL_ABBREVIATIONS = {
[BASE_MODELS.FLUX_1_S]: 'F1S',
[BASE_MODELS.FLUX_1_KREA]: 'F1KR',
[BASE_MODELS.FLUX_1_KONTEXT]: 'F1KX',
[BASE_MODELS.FLUX_2_D]: 'F2D',
// Other diffusion models
[BASE_MODELS.AURAFLOW]: 'AF',
@@ -103,6 +106,7 @@ export const BASE_MODEL_ABBREVIATIONS = {
[BASE_MODELS.PONY]: 'PONY',
[BASE_MODELS.HIDREAM]: 'HID',
[BASE_MODELS.QWEN]: 'QWEN',
[BASE_MODELS.ZIMAGE_TURBO]: 'ZIT',
// Video models
[BASE_MODELS.SVD]: 'SVD',
@@ -302,10 +306,10 @@ export const BASE_MODEL_CATEGORIES = {
BASE_MODELS.WAN_VIDEO_2_2_TI2V_5B, BASE_MODELS.WAN_VIDEO_2_2_T2V_A14B,
BASE_MODELS.WAN_VIDEO_2_2_I2V_A14B
],
'Flux Models': [BASE_MODELS.FLUX_1_D, BASE_MODELS.FLUX_1_S, BASE_MODELS.FLUX_1_KONTEXT, BASE_MODELS.FLUX_1_KREA],
'Flux Models': [BASE_MODELS.FLUX_1_D, BASE_MODELS.FLUX_1_S, BASE_MODELS.FLUX_1_KONTEXT, BASE_MODELS.FLUX_1_KREA, BASE_MODELS.FLUX_2_D],
'Other Models': [
BASE_MODELS.ILLUSTRIOUS, BASE_MODELS.PONY, BASE_MODELS.HIDREAM,
BASE_MODELS.QWEN, BASE_MODELS.AURAFLOW, BASE_MODELS.CHROMA,
BASE_MODELS.QWEN, BASE_MODELS.AURAFLOW, BASE_MODELS.CHROMA, BASE_MODELS.ZIMAGE_TURBO,
BASE_MODELS.PIXART_A, BASE_MODELS.PIXART_E, BASE_MODELS.HUNYUAN_1,
BASE_MODELS.LUMINA, BASE_MODELS.KOLORS, BASE_MODELS.NOOBAI,
BASE_MODELS.UNKNOWN
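The three tables stay in sync by convention: a new base model needs an entry in `BASE_MODELS`, an abbreviation keyed by that constant, and membership in a category. A reduced sketch of how the abbreviation lookup plays out for the two new entries (only a subset of the constants is reproduced; the fallback behavior is an assumption, not taken from the module):

```javascript
// Subset of the constants above, enough to show the keyed-by-constant lookup.
const BASE_MODELS = { FLUX_2_D: 'Flux.2 D', ZIMAGE_TURBO: 'ZImageTurbo' };
const BASE_MODEL_ABBREVIATIONS = {
  [BASE_MODELS.FLUX_2_D]: 'F2D',
  [BASE_MODELS.ZIMAGE_TURBO]: 'ZIT',
};

function abbreviate(baseModel) {
  // Assumed fallback: return the full name when no abbreviation is registered.
  return BASE_MODEL_ABBREVIATIONS[baseModel] ?? baseModel;
}
```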

View File

@@ -114,26 +114,27 @@ describe('Model metadata interactions keep file path in sync', () => {
});
document.body.innerHTML = `
<div id="modelModal" data-file-path="models/Qwen.safetensors"></div>
<div class="model-name-header">
<h2 class="model-name-content" data-file-path="models/Qwen.safetensors">Qwen</h2>
<button class="edit-model-name-btn"></button>
<div id="modelModal" data-file-path="models/Qwen.safetensors">
<div class="model-name-header">
<h2 class="model-name-content" data-file-path="models/Qwen.safetensors">Qwen</h2>
<button class="edit-model-name-btn"></button>
</div>
<div class="base-model-display">
<span class="base-model-content" data-file-path="models/Qwen.safetensors">SDXL</span>
<button class="edit-base-model-btn"></button>
</div>
<div class="file-name-wrapper">
<span class="file-name-content" data-file-path="models/Qwen.safetensors">Qwen</span>
<button class="edit-file-name-btn"></button>
</div>
<div class="model-tags-container">
<div class="model-tags-compact"></div>
<div class="tooltip-content"></div>
<button class="edit-tags-btn" data-file-path="models/Qwen.safetensors"></button>
</div>
<button class="edit-trigger-words-btn" data-file-path="models/Qwen.safetensors"></button>
<div data-action="open-file-location" data-filepath="models/Qwen.safetensors"></div>
</div>
<div class="base-model-display">
<span class="base-model-content" data-file-path="models/Qwen.safetensors">SDXL</span>
<button class="edit-base-model-btn"></button>
</div>
<div class="file-name-wrapper">
<span class="file-name-content" data-file-path="models/Qwen.safetensors">Qwen</span>
<button class="edit-file-name-btn"></button>
</div>
<div class="model-tags-container">
<div class="model-tags-compact"></div>
<div class="tooltip-content"></div>
<button class="edit-tags-btn" data-file-path="models/Qwen.safetensors"></button>
</div>
<button class="edit-trigger-words-btn" data-file-path="models/Qwen.safetensors"></button>
<div data-action="open-file-location" data-filepath="models/Qwen.safetensors"></div>
`;
const { setupFileNameEditing } = await import(METADATA_MODULE);

View File

@@ -0,0 +1,72 @@
import { describe, it, beforeEach, afterEach, expect, vi } from 'vitest';
const { SHOWCASE_MODULE } = vi.hoisted(() => ({
SHOWCASE_MODULE: new URL('../../../static/js/components/shared/showcase/ShowcaseView.js', import.meta.url).pathname,
}));
describe('Showcase listener metrics', () => {
beforeEach(() => {
document.body.innerHTML = `
<div id="modelModal">
<div class="modal-content">
<div class="showcase-section">
<div class="carousel collapsed">
<div class="scroll-indicator"></div>
</div>
<button class="back-to-top"></button>
</div>
</div>
</div>
`;
});
afterEach(() => {
document.body.innerHTML = '';
});
it('tracks wheel/mutation/back-to-top listeners and resets after cleanup', async () => {
const {
setupShowcaseScroll,
resetShowcaseListenerMetrics,
showcaseListenerMetrics,
} = await import(SHOWCASE_MODULE);
resetShowcaseListenerMetrics();
expect(showcaseListenerMetrics.wheelListeners).toBe(0);
expect(showcaseListenerMetrics.mutationObservers).toBe(0);
expect(showcaseListenerMetrics.backToTopHandlers).toBe(0);
const cleanup = setupShowcaseScroll('modelModal');
expect(showcaseListenerMetrics.wheelListeners).toBe(1);
expect(showcaseListenerMetrics.mutationObservers).toBe(1);
expect(showcaseListenerMetrics.backToTopHandlers).toBe(1);
cleanup();
expect(showcaseListenerMetrics.wheelListeners).toBe(0);
expect(showcaseListenerMetrics.mutationObservers).toBe(0);
expect(showcaseListenerMetrics.backToTopHandlers).toBe(0);
});
it('remains stable after repeated setup/cleanup cycles', async () => {
const {
setupShowcaseScroll,
resetShowcaseListenerMetrics,
showcaseListenerMetrics,
} = await import(SHOWCASE_MODULE);
resetShowcaseListenerMetrics();
const cleanupA = setupShowcaseScroll('modelModal');
cleanupA();
const cleanupB = setupShowcaseScroll('modelModal');
cleanupB();
expect(showcaseListenerMetrics.wheelListeners).toBe(0);
expect(showcaseListenerMetrics.mutationObservers).toBe(0);
expect(showcaseListenerMetrics.backToTopHandlers).toBe(0);
});
});

View File

@@ -124,3 +124,40 @@ def test_metadata_registry_caches_and_rehydrates(populated_registry):
registry.clear_metadata("promptA")
assert "promptA" not in registry.prompt_metadata
def test_lora_manager_cache_updates_when_loras_removed(metadata_registry):
import nodes
class LoraManagerLoader: # type: ignore[too-many-ancestors]
__name__ = "LoraManagerLoader"
nodes.NODE_CLASS_MAPPINGS["LoraManagerLoader"] = LoraManagerLoader
prompt_graph = {
"lora_node": {"class_type": "LoraManagerLoader", "inputs": {}},
}
prompt = SimpleNamespace(original_prompt=prompt_graph)
cache_key = "lora_node:LoraManagerLoader"
metadata_registry.start_collection("prompt1")
metadata_registry.set_current_prompt(prompt)
metadata_registry.record_node_execution(
"lora_node",
"LoraManagerLoader",
{"loras": [[{"name": "foo", "strength": 0.8, "active": True}]]},
None,
)
assert cache_key in metadata_registry.node_cache
metadata_registry.start_collection("prompt2")
metadata_registry.set_current_prompt(prompt)
metadata_registry.record_node_execution("lora_node", "LoraManagerLoader", {"loras": [[]]}, None)
assert cache_key not in metadata_registry.node_cache
metadata_registry.start_collection("prompt3")
metadata_registry.set_current_prompt(prompt)
metadata = metadata_registry.get_metadata("prompt3")
assert "lora_node" not in metadata[LORAS]

View File

@@ -0,0 +1,120 @@
import pytest
from py.recipes.parsers.automatic import AutomaticMetadataParser
@pytest.mark.asyncio
async def test_parse_metadata_extracts_checkpoint_from_civitai_resources(monkeypatch):
checkpoint_info = {
"id": 2442439,
"modelId": 123456,
"model": {"name": "Z Image", "type": "checkpoint"},
"name": "Turbo",
"images": [{"url": "https://image.civitai.com/checkpoints/original=true"}],
"baseModel": "sdxl",
"downloadUrl": "https://civitai.com/api/download/checkpoint",
"files": [
{
"type": "Model",
"primary": True,
"sizeKB": 2048,
"name": "Z_Image_Turbo.safetensors",
"hashes": {"SHA256": "ABC123FF"},
}
],
}
async def fake_metadata_provider():
class Provider:
async def get_model_version_info(self, version_id):
assert version_id == "2442439"
return checkpoint_info, None
return Provider()
monkeypatch.setattr(
"py.recipes.parsers.automatic.get_default_metadata_provider",
fake_metadata_provider,
)
parser = AutomaticMetadataParser()
metadata_text = (
"Negative space, fog, BLACK blue color GRADIENT BACKGROUND, a vintage car in the middle, "
"FOG, and a silhouetted figure near the car, in the style of the Blade Runner movie "
"Negative prompt: Steps: 23, Sampler: Undefined, CFG scale: 3.5, Seed: 1760020955, "
"Size: 832x1216, Clip skip: 2, Created Date: 2025-11-28T09:18:43.5269343Z, "
'Civitai resources: [{"type":"checkpoint","modelVersionId":2442439,"modelName":"Z Image","modelVersionName":"Turbo"}], '
"Civitai metadata: {}"
)
result = await parser.parse_metadata(metadata_text)
checkpoint = result.get("checkpoint")
assert checkpoint is not None
assert checkpoint["name"] == "Z Image"
assert checkpoint["version"] == "Turbo"
assert checkpoint["type"] == "checkpoint"
assert checkpoint["modelId"] == 123456
assert checkpoint["hash"] == "abc123ff"
assert checkpoint["file_name"] == "Z_Image_Turbo"
assert checkpoint["thumbnailUrl"].endswith("width=450,optimized=true")
assert result["model"] == checkpoint
assert result["base_model"] == "sdxl"
assert result["loras"] == []
@pytest.mark.asyncio
async def test_parse_metadata_extracts_checkpoint_from_model_hash(monkeypatch):
checkpoint_info = {
"id": 98765,
"modelId": 654321,
"model": {"name": "Flux Illustrious", "type": "checkpoint"},
"name": "v1",
"images": [{"url": "https://image.civitai.com/checkpoints/original=true"}],
"baseModel": "flux",
"downloadUrl": "https://civitai.com/api/download/checkpoint",
"files": [
{
"type": "Model",
"primary": True,
"sizeKB": 1024,
"name": "FluxIllustrious_v1.safetensors",
"hashes": {"SHA256": "C3688EE04C"},
}
],
}
async def fake_metadata_provider():
class Provider:
async def get_model_by_hash(self, model_hash):
assert model_hash == "c3688ee04c"
return checkpoint_info, None
return Provider()
monkeypatch.setattr(
"py.recipes.parsers.automatic.get_default_metadata_provider",
fake_metadata_provider,
)
parser = AutomaticMetadataParser()
metadata_text = (
"A cyberpunk portrait with neon highlights.\n"
"Negative prompt: low quality\n"
"Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 123456, Size: 832x1216, "
"Model hash: c3688ee04c, Model: models/waiNSFWIllustrious_v110.safetensors"
)
result = await parser.parse_metadata(metadata_text)
checkpoint = result.get("checkpoint")
assert checkpoint is not None
assert checkpoint["hash"] == "c3688ee04c"
assert checkpoint["name"] == "Flux Illustrious"
assert checkpoint["version"] == "v1"
assert checkpoint["file_name"] == "FluxIllustrious_v1"
assert result["model"] == checkpoint
assert result["base_model"] == "flux"
assert result["loras"] == []
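Both tests exercise the same first step: pull the checkpoint hash and model name out of the flat Automatic1111 parameters line, then resolve the rest via the metadata provider. A hedged sketch of that extraction step — the regexes here are stand-ins for whatever `MODEL_NAME_PATTERN` and the hash pattern actually are in the parser:

```javascript
// Illustrative patterns: "Model hash: <hex>" and "Model: <name up to comma>".
// "Model hash:" does not match MODEL_NAME_PATTERN because there is no colon
// directly after "Model" in that fragment.
const MODEL_HASH_PATTERN = /Model hash:\s*([0-9a-fA-F]+)/;
const MODEL_NAME_PATTERN = /Model:\s*([^,\n]+)/;

function extractCheckpointHints(parameters) {
  const hash = parameters.match(MODEL_HASH_PATTERN);
  const name = parameters.match(MODEL_NAME_PATTERN);
  return {
    // Hashes are normalized to lowercase, matching the assertions above.
    modelHash: hash ? hash[1].toLowerCase() : null,
    modelName: name ? name[1].trim() : null,
  };
}
```

With the second test's parameters line, this yields the lowercased hash `c3688ee04c` that the fake provider expects in `get_model_by_hash`.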

View File

@@ -60,6 +60,46 @@ async def test_parse_metadata_creates_loras_from_hashes(monkeypatch):
}
@pytest.mark.asyncio
async def test_parse_metadata_handles_nested_meta_and_lowercase_hashes(monkeypatch):
async def fake_metadata_provider():
return None
monkeypatch.setattr(
"py.recipes.parsers.civitai_image.get_default_metadata_provider",
fake_metadata_provider,
)
parser = CivitaiApiMetadataParser()
metadata = {
"id": 106706587,
"meta": {
"prompt": "An enigmatic silhouette",
"hashes": {
"model": "ee75fd24a4",
"lora:mj": "de49e1e98c",
"LORA:Another_Earth_2": "dc11b64a8b",
},
"resources": [
{
"hash": "ee75fd24a4",
"name": "stoiqoNewrealityFLUXSD35_f1DAlphaTwo",
"type": "model",
}
],
},
}
assert parser.is_metadata_matching(metadata)
result = await parser.parse_metadata(metadata)
assert result["gen_params"]["prompt"] == "An enigmatic silhouette"
assert {l["name"] for l in result["loras"]} == {"mj", "Another_Earth_2"}
assert {l["hash"] for l in result["loras"]} == {"de49e1e98c", "dc11b64a8b"}
@pytest.mark.asyncio
async def test_parse_metadata_populates_checkpoint_and_rewrites_thumbnails(monkeypatch):
checkpoint_info = {

View File

@@ -0,0 +1,61 @@
import pytest
from py.recipes.parsers.meta_format import MetaFormatParser
@pytest.mark.asyncio
async def test_meta_format_parser_extracts_checkpoint_from_model_hash(monkeypatch):
checkpoint_info = {
"id": 222333,
"modelId": 999888,
"model": {"name": "Fluxmania V5P", "type": "checkpoint"},
"name": "v5p",
"images": [{"url": "https://image.civitai.com/checkpoints/original=true"}],
"baseModel": "flux",
"downloadUrl": "https://civitai.com/api/download/checkpoint",
"files": [
{
"type": "Model",
"primary": True,
"sizeKB": 1024,
"name": "Fluxmania_V5P.safetensors",
"hashes": {"SHA256": "8AE0583B06"},
}
],
}
async def fake_metadata_provider():
class Provider:
async def get_model_by_hash(self, model_hash):
assert model_hash == "8ae0583b06"
return checkpoint_info, None
return Provider()
monkeypatch.setattr(
"py.recipes.parsers.meta_format.get_default_metadata_provider",
fake_metadata_provider,
)
parser = MetaFormatParser()
metadata_text = (
"Shimmering metal forms\n"
"Negative prompt: flat color\n"
"Steps: 25, Sampler: dpmpp_2m_sgm_uniform, Seed: 471889513588087, "
"Model: Fluxmania V5P.safetensors, Model hash: 8ae0583b06, VAE: ae.sft, "
"Lora_0 Model name: ArtVador I.safetensors, Lora_0 Model hash: 08f7133a58, "
"Lora_0 Strength model: 0.65, Lora_0 Strength clip: 0.65"
)
result = await parser.parse_metadata(metadata_text)
checkpoint = result.get("checkpoint")
assert checkpoint is not None
assert checkpoint["hash"] == "8ae0583b06"
assert checkpoint["name"] == "Fluxmania V5P"
assert checkpoint["version"] == "v5p"
assert checkpoint["file_name"] == "Fluxmania_V5P"
assert result["model"] == checkpoint
assert result["base_model"] == "flux"
assert len(result["loras"]) == 1

View File

@@ -0,0 +1,144 @@
import json
import pytest
from py.recipes.parsers.recipe_format import RecipeFormatParser
from py.config import config
@pytest.mark.asyncio
async def test_recipe_format_parser_populates_checkpoint(monkeypatch):
checkpoint_info = {
"id": 777111,
"modelId": 333222,
"model": {"name": "Z Image", "type": "checkpoint"},
"name": "Turbo",
"images": [{"url": "https://image.civitai.com/checkpoints/original=true"}],
"baseModel": "sdxl",
"downloadUrl": "https://civitai.com/api/download/checkpoint",
"files": [
{
"type": "Model",
"primary": True,
"sizeKB": 2048,
"name": "Z_Image_Turbo.safetensors",
"hashes": {"SHA256": "ABC123FF"},
}
],
}
async def fake_metadata_provider():
class Provider:
async def get_model_version_info(self, version_id):
assert version_id == "777111"
return checkpoint_info, None
return Provider()
monkeypatch.setattr(
"py.recipes.parsers.recipe_format.get_default_metadata_provider",
fake_metadata_provider,
)
parser = RecipeFormatParser()
recipe_metadata = {
"title": "Z Recipe",
"base_model": "",
"loras": [],
"gen_params": {"steps": 20},
"tags": ["test"],
"checkpoint": {
"modelVersionId": 777111,
"modelId": 333222,
"name": "Z Image",
"version": "Turbo",
},
}
metadata_text = f"Recipe metadata: {json.dumps(recipe_metadata)}"
result = await parser.parse_metadata(metadata_text)
checkpoint = result.get("checkpoint")
assert checkpoint is not None
assert checkpoint["name"] == "Z Image"
assert checkpoint["version"] == "Turbo"
assert checkpoint["hash"] == "abc123ff"
assert checkpoint["file_name"] == "Z_Image_Turbo"
assert result["base_model"] == "sdxl"
assert result["model"] == checkpoint
@pytest.mark.asyncio
async def test_recipe_format_parser_marks_lora_in_library_by_version(monkeypatch):
async def fake_metadata_provider():
class Provider:
async def get_model_version_info(self, version_id):
assert version_id == 1244133
return None, None
return Provider()
monkeypatch.setattr(
"py.recipes.parsers.recipe_format.get_default_metadata_provider",
fake_metadata_provider,
)
cached_entry = {
"file_path": "/loras/moriimee.safetensors",
"file_name": "MoriiMee Gothic Niji | LoRA Style",
"size": 4096,
"sha256": "abc123",
"preview_url": "/previews/moriimee.png",
}
class FakeCache:
def __init__(self, entry):
self.raw_data = [entry]
self.version_index = {1244133: entry}
class FakeLoraScanner:
def __init__(self, entry):
self._cache = FakeCache(entry)
def has_hash(self, sha256):
return False
async def get_cached_data(self):
return self._cache
class FakeRecipeScanner:
def __init__(self, entry):
self._lora_scanner = FakeLoraScanner(entry)
parser = RecipeFormatParser()
recipe_metadata = {
"title": "Semi-realism",
"base_model": "Illustrious",
"loras": [
{
"modelVersionId": 1244133,
"modelName": "MoriiMee Gothic Niji | LoRA Style",
"modelVersionName": "V1 Ilustrious",
"strength": 0.5,
"hash": "",
}
],
"gen_params": {"steps": 29},
"tags": ["woman"],
}
metadata_text = f"Recipe metadata: {json.dumps(recipe_metadata)}"
result = await parser.parse_metadata(
metadata_text, recipe_scanner=FakeRecipeScanner(cached_entry)
)
lora_entry = result["loras"][0]
assert lora_entry["existsLocally"] is True
assert lora_entry["inLibrary"] is True
assert lora_entry["localPath"] == cached_entry["file_path"]
assert lora_entry["file_name"] == cached_entry["file_name"]
assert lora_entry["hash"] == cached_entry["sha256"]
assert lora_entry["size"] == cached_entry["size"]
assert lora_entry["thumbnailUrl"] == config.get_preview_static_url(
cached_entry["preview_url"]
)
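The test above pins down the matching rule the commit describes: a LoRA counts as local when either its SHA256 is known or its `modelVersionId` hits the scanner's version index, and on a version-index hit the cached entry's path, hash, and size overwrite the recipe's values. A minimal sketch under those assumptions (function and field names other than the asserted ones are illustrative):

```javascript
// Match a recipe LoRA against local cache data: SHA256 set plus a
// modelVersionId -> cache-entry index, mirroring the fake scanner above.
function matchLocalLora(lora, { hashSet, versionIndex }) {
  const byHash = Boolean(lora.hash) && hashSet.has(lora.hash.toLowerCase());
  const entry = versionIndex[lora.modelVersionId];
  if (!byHash && !entry) {
    return { ...lora, existsLocally: false, inLibrary: false };
  }
  return {
    ...lora,
    existsLocally: true,
    inLibrary: true,
    // Prefer cached facts when the version index supplied an entry.
    localPath: entry ? entry.file_path : null,
    hash: entry && entry.sha256 ? entry.sha256 : lora.hash,
    size: entry ? entry.size : lora.size,
  };
}
```

This is why the test can pass an empty `hash` on the recipe side and still assert `inLibrary` is true: the version-index lookup alone is sufficient.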

View File

@@ -0,0 +1,61 @@
import json
from py.utils.exif_utils import ExifUtils
def test_append_recipe_metadata_includes_checkpoint(monkeypatch, tmp_path):
captured = {}
monkeypatch.setattr(
ExifUtils, "extract_image_metadata", staticmethod(lambda _path: None)
)
def fake_update_image_metadata(image_path, metadata):
captured["path"] = image_path
captured["metadata"] = metadata
return image_path
monkeypatch.setattr(
ExifUtils, "update_image_metadata", staticmethod(fake_update_image_metadata)
)
checkpoint = {
"type": "checkpoint",
"modelId": 827184,
"modelVersionId": 2167369,
"modelName": "WAI-illustrious-SDXL",
"modelVersionName": "v15.0",
"hash": "ABC123",
"file_name": "WAI-illustrious-SDXL",
"baseModel": "Illustrious",
}
recipe_data = {
"title": "Semi-realism",
"base_model": "Illustrious",
"loras": [],
"tags": [],
"checkpoint": checkpoint,
}
image_path = tmp_path / "image.webp"
image_path.write_bytes(b"")
ExifUtils.append_recipe_metadata(str(image_path), recipe_data)
assert captured["path"] == str(image_path)
assert captured["metadata"].startswith("Recipe metadata: ")
payload = json.loads(captured["metadata"].split("Recipe metadata: ", 1)[1])
assert payload["checkpoint"] == {
"type": "checkpoint",
"modelId": 827184,
"modelVersionId": 2167369,
"modelName": "WAI-illustrious-SDXL",
"modelVersionName": "v15.0",
"hash": "abc123",
"file_name": "WAI-illustrious-SDXL",
"baseModel": "Illustrious",
}
assert payload["base_model"] == "Illustrious"
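The payload shape this test locks in is a plain `Recipe metadata: <json>` string with the checkpoint hash lowercased before serialization. A sketch of building and reading that string back (the builder name is illustrative; the prefix and hash normalization are what the assertions above require):

```javascript
// Serialize recipe data into the "Recipe metadata: <json>" EXIF payload,
// lowercasing the checkpoint hash on the way out.
function buildRecipeMetadata(recipeData) {
  const checkpoint = recipeData.checkpoint
    ? { ...recipeData.checkpoint, hash: (recipeData.checkpoint.hash || '').toLowerCase() }
    : undefined; // JSON.stringify drops undefined keys entirely
  return 'Recipe metadata: ' + JSON.stringify({ ...recipeData, checkpoint });
}

function parseRecipeMetadata(text) {
  // Inverse of the builder: strip the prefix and parse the JSON body.
  return JSON.parse(text.split('Recipe metadata: ', 2)[1]);
}
```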