Compare commits

..

18 Commits

Author SHA1 Message Date
Will Miao
85c3e33343 Update version to 0.8.1 and add release notes for new features and improvements
- Bump version from 0.8.0 to 0.8.1 in pyproject.toml.
- Document new features in README.md, including base model correction, LoRA loader flexibility, expanded recipe support, enhanced showcase images, and various UI improvements and bug fixes.
2025-03-28 04:15:54 +08:00
Will Miao
1420ab31a2 Enhance CivitaiClient error handling for unauthorized access
- Updated handling of 401 unauthorized responses to differentiate between API key issues and early access restrictions.
- Improved logging for unauthorized access attempts.
- Refactored condition to check for early access restrictions based on response headers.
- Adjusted DownloadManager to detect early access via a truthiness check on `earlyAccessEndsAt` rather than mere key presence.
2025-03-28 04:11:08 +08:00
Will Miao
fd1435537f Add ImageSaverMetadataParser for ComfyUI Image Saver plugin metadata handling
- Introduced ImageSaverMetadataParser class to parse metadata from the Image Saver plugin format.
- Implemented methods to extract prompts, negative prompts, and LoRA information, including weights and hashes.
- Enhanced error handling and logging for metadata parsing failures.
- Updated RecipeParserFactory to include ImageSaverMetadataParser for relevant user comments.
2025-03-28 03:27:35 +08:00
Will Miao
4e0473ce11 Fix redownloading loras issue 2025-03-28 02:53:30 +08:00
Will Miao
450592b0d4 Implement Civitai data population methods for LoRA and checkpoint entries
- Added `populate_lora_from_civitai` and `populate_checkpoint_from_civitai` methods to enhance the extraction of model information from Civitai API responses.
- These methods populate LoRA and checkpoint entries with relevant data such as model name, version, thumbnail URL, base model, download URL, and file details.
- Improved error handling and logging for scenarios where models are not found or data retrieval fails.
- Refactored existing code to utilize the new methods, streamlining the process of fetching and updating LoRA and checkpoint metadata.
2025-03-28 02:16:53 +08:00
Will Miao
7cae0ee169 Enhance LoraModal to include image metadata panel
- Added a new image metadata panel to display generation parameters and prompts for images and videos.
- Implemented styles for the metadata panel in lora-modal.css, ensuring it is responsive and visually integrated.
- Introduced functionality to copy prompts to the clipboard and handle metadata interactions within the modal.
- Updated media rendering logic in LoraModal.js to incorporate metadata display and improve user experience.
2025-03-27 20:09:48 +08:00
Will Miao
ecd0e05f79 Add MetaFormatParser for Lora_N Model hash format metadata handling
- Introduced MetaFormatParser class to parse metadata from images with Lora_N Model hash format.
- Implemented methods to validate metadata structure, extract prompts, negative prompts, and LoRA information.
- Enhanced error handling and logging for metadata parsing failures.
- Updated RecipeParserFactory to include MetaFormatParser for relevant user comments.
2025-03-27 17:28:11 +08:00
Will Miao
6e3b4178ac Enhance LoraStacker to return active LoRAs in stack_loras method
- Updated RETURN_TYPES and RETURN_NAMES to include active LoRAs.
- Introduced active_loras list to track active LoRAs and their strengths.
- Formatted active_loras for return as a single string of <lora:lora_name:strength> tags.
2025-03-27 16:10:50 +08:00
Will Miao
ba18cbabfd Add ComfyMetadataParser for Civitai ComfyUI metadata handling
- Introduced ComfyMetadataParser class to parse metadata from Civitai ComfyUI JSON format.
- Implemented methods to validate metadata structure, extract LoRA and checkpoint information, and retrieve additional model details from Civitai.
- Enhanced error handling and logging for metadata parsing failures.
- Updated RecipeParserFactory to prioritize ComfyMetadataParser for valid JSON inputs.
2025-03-27 15:43:58 +08:00
Will Miao
dec757c23b Refactor image metadata handling in RecipeRoutes and ExifUtils
- Replaced the Twitter image download function with a Civitai one in recipe_routes.py.
- Replaced user-comment-only extraction with a more comprehensive image metadata extraction method in ExifUtils.
- Enhanced the appending of recipe metadata to utilize the new metadata extraction method.
- Added a new utility function to download images from Civitai.
2025-03-27 14:56:37 +08:00
Will Miao
0459710c9b Made CLIP input optional in LoRA Loader, enabling compatibility with Hunyuan workflows 2025-03-26 21:50:26 +08:00
Will Miao
83582ef8a3 Refactor RecipeScanner to remove custom async timeout and streamline cache initialization
- Removed the custom async_timeout function and replaced it with direct usage of the initialization lock.
- Simplified the cache initialization process by eliminating the dependency on the lora scanner.
- Enhanced error handling during cache initialization to ensure a fallback to an empty cache on failure.
2025-03-26 18:56:18 +08:00
Will Miao
0dc396e148 Enhance RecipeModal to support video previews
- Updated RecipeModal.js to dynamically handle video and image previews based on the file type.
- Modified recipe-modal.css to ensure proper styling for both images and videos.
- Adjusted recipe_modal.html to accommodate the new media handling structure.
2025-03-26 16:39:53 +08:00
pixelpaws
86958e1420 Merge pull request #51 from AlUlkesh/main
Python < 3.11 backward compatibility for timeout.
2025-03-26 10:47:23 +08:00
Will Miao
c5b8e629fb Enhance save functionality in LoraModal for base model editing
- Added a check to prevent saving if the base model value has not changed.
- Stored the original value during editing to compare with the new selection.
- Updated the saveBaseModel function to accept the original value for comparison.
2025-03-26 07:05:32 +08:00
Will Miao
b0a495b4f6 Add base model editing functionality to LoraModal
- Introduced new styles for base model display and editing in lora-modal.css.
- Enhanced LoraModal.js to support editing of the base model with a dropdown selector.
- Implemented save functionality for the updated base model, including UI interactions for editing and saving changes.
2025-03-26 06:49:33 +08:00
Will Miao
7d2809467b Update tutorial video link 2025-03-25 14:10:13 +08:00
AlUlkesh
509e513f3a Python < 3.11 backward compatibility for timeout. 2025-03-24 14:16:46 +01:00
22 changed files with 1742 additions and 290 deletions

View File

@@ -14,11 +14,19 @@ A comprehensive toolset that streamlines organizing, downloading, and applying L
Watch this quick tutorial to learn how to use the new one-click LoRA integration feature:
[![One-Click LoRA Integration Tutorial](https://img.youtube.com/vi/qS95OjX3e70/0.jpg)](https://youtu.be/qS95OjX3e70)
[![LoRA Manager v0.8.0 - New Recipe Feature & Bulk Operations](https://img.youtube.com/vi/noN7f_ER7yo/0.jpg)](https://youtu.be/noN7f_ER7yo)
---
## Release Notes
### v0.8.1
* **Base Model Correction** - Added support for modifying base model associations to fix incorrect metadata for non-CivitAI LoRAs
* **LoRA Loader Flexibility** - Made CLIP input optional for model-only workflows like Hunyuan video generation
* **Expanded Recipe Support** - Added compatibility with 3 additional recipe metadata formats
* **Enhanced Showcase Images** - Generation parameters now displayed alongside LoRA preview images
* **UI Improvements & Bug Fixes** - Various interface refinements and stability enhancements
### v0.8.0
* **Introduced LoRA Recipes** - Create, import, save, and share your favorite LoRA combinations
* **Recipe Management System** - Easily browse, search, and organize your LoRA recipes

View File

@@ -18,7 +18,7 @@ class LoraManagerLoader:
return {
"required": {
"model": ("MODEL",),
"clip": ("CLIP",),
# "clip": ("CLIP",),
"text": (IO.STRING, {
"multiline": True,
"dynamicPrompts": True,
@@ -75,11 +75,12 @@ class LoraManagerLoader:
logger.warning(f"Unexpected loras format: {type(loras_data)}")
return []
def load_loras(self, model, clip, text, **kwargs):
def load_loras(self, model, text, **kwargs):
"""Loads multiple LoRAs based on the kwargs input and lora_stack."""
loaded_loras = []
all_trigger_words = []
clip = kwargs.get('clip', None)
lora_stack = kwargs.get('lora_stack', None)
# First process lora_stack if available
if lora_stack:
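The signature change above can be sketched in isolation: `clip` leaves the required positional parameters and is read from kwargs instead, so model-only workflows (e.g. Hunyuan video) can connect no CLIP at all. A minimal sketch, not the node's full implementation:

```python
def load_loras(model, text, **kwargs):
    """Sketch of the optional-CLIP pattern: clip is now an optional kwarg."""
    clip = kwargs.get('clip', None)
    lora_stack = kwargs.get('lora_stack', None)
    # ... actual LoRA loading would happen here ...
    return model, clip, lora_stack

_, clip, _ = load_loras("unet", "a scenic photo")
print(clip)  # → None
```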

View File

@@ -26,8 +26,8 @@ class LoraStacker:
"optional": FlexibleOptionalInputType(any_type),
}
RETURN_TYPES = ("LORA_STACK", IO.STRING)
RETURN_NAMES = ("LORA_STACK", "trigger_words")
RETURN_TYPES = ("LORA_STACK", IO.STRING, IO.STRING)
RETURN_NAMES = ("LORA_STACK", "trigger_words", "active_loras")
FUNCTION = "stack_loras"
async def get_lora_info(self, lora_name):
@@ -75,6 +75,7 @@ class LoraStacker:
def stack_loras(self, text, **kwargs):
"""Stacks multiple LoRAs based on the kwargs input without loading them."""
stack = []
active_loras = []
all_trigger_words = []
# Process existing lora_stack if available
@@ -103,11 +104,15 @@ class LoraStacker:
# Add to stack without loading
# replace '/' with os.sep to avoid different OS path format
stack.append((lora_path.replace('/', os.sep), model_strength, clip_strength))
active_loras.append((lora_name, model_strength))
# Add trigger words to collection
all_trigger_words.extend(trigger_words)
# use ',, ' to separate trigger words for group mode
trigger_words_text = ",, ".join(all_trigger_words) if all_trigger_words else ""
# Format active_loras as <lora:lora_name:strength> separated by spaces
active_loras_text = " ".join([f"<lora:{name}:{str(strength).strip()}>"
for name, strength in active_loras])
return (stack, trigger_words_text)
return (stack, trigger_words_text, active_loras_text)
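The string returned in the new `active_loras` output can be sketched on its own, using the tag format named in the commit message:

```python
def format_active_loras(active_loras):
    """Render (name, strength) pairs as space-separated <lora:name:strength> tags."""
    return " ".join(f"<lora:{name}:{str(strength).strip()}>"
                    for name, strength in active_loras)

print(format_active_loras([("detail_tweaker", 0.8), ("style_lora", 1.0)]))
# → <lora:detail_tweaker:0.8> <lora:style_lora:1.0>
```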

View File

@@ -14,7 +14,7 @@ from ..services.recipe_scanner import RecipeScanner
from ..services.lora_scanner import LoraScanner
from ..config import config
from ..workflow.parser import WorkflowParser
from ..utils.utils import download_twitter_image
from ..utils.utils import download_civitai_image
logger = logging.getLogger(__name__)
@@ -235,7 +235,7 @@ class RecipeRoutes:
}, status=400)
# Download image from URL
temp_path = download_twitter_image(url)
temp_path = download_civitai_image(url)
if not temp_path:
return web.json_response({
@@ -244,10 +244,10 @@ class RecipeRoutes:
}, status=400)
# Extract metadata from the image using ExifUtils
user_comment = ExifUtils.extract_user_comment(temp_path)
metadata = ExifUtils.extract_image_metadata(temp_path)
# If no metadata found, return a more specific error
if not user_comment:
if not metadata:
result = {
"error": "No metadata found in this image",
"loras": [] # Return empty loras array to prevent client-side errors
@@ -262,7 +262,7 @@ class RecipeRoutes:
return web.json_response(result, status=200)
# Use the parser factory to get the appropriate parser
parser = RecipeParserFactory.create_parser(user_comment)
parser = RecipeParserFactory.create_parser(metadata)
if parser is None:
result = {
@@ -280,7 +280,7 @@ class RecipeRoutes:
# Parse the metadata
result = await parser.parse_metadata(
user_comment,
metadata,
recipe_scanner=self.recipe_scanner,
civitai_client=self.civitai_client
)
@@ -387,8 +387,7 @@ class RecipeRoutes:
return web.json_response({"error": f"Invalid base64 image data: {str(e)}"}, status=400)
elif image_url:
# Download image from URL
from ..utils.utils import download_twitter_image
temp_path = download_twitter_image(image_url)
temp_path = download_civitai_image(image_url)
if not temp_path:
return web.json_response({"error": "Failed to download image from URL"}, status=400)

View File

@@ -76,9 +76,15 @@ class CivitaiClient:
headers = self._get_request_headers()
async with session.get(url, headers=headers, allow_redirects=True) as response:
if response.status != 200:
# Handle early access 401 unauthorized responses
# Handle 401 unauthorized responses
if response.status == 401:
logger.warning(f"Unauthorized access to resource: {url} (Status 401)")
# Check if this is an API key issue (has Set-Cookie headers)
if 'Set-Cookie' in response.headers:
return False, "Invalid or missing CivitAI API key. Please check your API key in settings."
# Otherwise it's an early access restriction
return False, "Early access restriction: You must purchase early access to download this LoRA."
# Handle other client errors that might be permission-related
@@ -251,4 +257,4 @@ class CivitaiClient:
return None
except Exception as e:
logger.error(f"Error getting hash from Civitai: {e}")
return None
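The 401 branching added above can be expressed as a small pure function (a sketch, not the project's API; the Set-Cookie heuristic is taken directly from the diff):

```python
def classify_401(headers):
    """Distinguish the two 401 causes: a Set-Cookie header suggests the request
    was treated as unauthenticated (API-key problem); otherwise assume the
    model is behind an early-access paywall."""
    if 'Set-Cookie' in headers:
        return "invalid_api_key"
    return "early_access"

print(classify_401({'Set-Cookie': 'session=...'}))  # → invalid_api_key
print(classify_401({'Content-Type': 'application/json'}))  # → early_access
```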

View File

@@ -29,7 +29,7 @@ class DownloadManager:
return {'success': False, 'error': 'Failed to fetch model metadata'}
# Check if this is an early access LoRA
if 'earlyAccessEndsAt' in version_info:
if version_info.get('earlyAccessEndsAt'):
early_access_date = version_info.get('earlyAccessEndsAt', '')
# Convert to a readable date if possible
try:
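The one-line DownloadManager change matters because the API can return the `earlyAccessEndsAt` key with a null or empty value once early access has lapsed (an assumption consistent with this fix); key presence and truthiness then disagree:

```python
version_info = {"earlyAccessEndsAt": None}  # hypothetical API response

# Old check: key presence alone flags early access, even for a null value
assert ("earlyAccessEndsAt" in version_info) is True

# New check: .get() is falsy for None/empty, so this LoRA is downloadable
assert bool(version_info.get("earlyAccessEndsAt")) is False

print("early access" if version_info.get("earlyAccessEndsAt") else "downloadable")
# → downloadable
```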

View File

@@ -2,9 +2,7 @@ import os
import logging
import asyncio
import json
import re
from typing import List, Dict, Optional, Any
from datetime import datetime
from ..config import config
from .recipe_cache import RecipeCache
from .lora_scanner import LoraScanner
@@ -64,61 +62,44 @@ class RecipeScanner:
# Try to acquire the lock with a timeout to prevent deadlocks
try:
# Use a timeout for acquiring the lock
async with asyncio.timeout(1.0):
async with self._initialization_lock:
# Check again after acquiring the lock
if self._cache is not None and not force_refresh:
return self._cache
async with self._initialization_lock:
# Check again after acquiring the lock
if self._cache is not None and not force_refresh:
return self._cache
# Mark as initializing to prevent concurrent initializations
self._is_initializing = True
try:
# Remove dependency on lora scanner initialization
# Scan for recipe data directly
raw_data = await self.scan_all_recipes()
# Mark as initializing to prevent concurrent initializations
self._is_initializing = True
# Update cache
self._cache = RecipeCache(
raw_data=raw_data,
sorted_by_name=[],
sorted_by_date=[]
)
try:
# First ensure the lora scanner is initialized
if self._lora_scanner:
try:
lora_cache = await asyncio.wait_for(
self._lora_scanner.get_cached_data(),
timeout=10.0
)
except asyncio.TimeoutError:
logger.error("Timeout waiting for lora scanner initialization")
except Exception as e:
logger.error(f"Error waiting for lora scanner: {e}")
# Scan for recipe data
raw_data = await self.scan_all_recipes()
# Update cache
self._cache = RecipeCache(
raw_data=raw_data,
sorted_by_name=[],
sorted_by_date=[]
)
# Resort cache
await self._cache.resort()
return self._cache
# Resort cache
await self._cache.resort()
except Exception as e:
logger.error(f"Recipe Manager: Error initializing cache: {e}", exc_info=True)
# Create empty cache on error
self._cache = RecipeCache(
raw_data=[],
sorted_by_name=[],
sorted_by_date=[]
)
return self._cache
finally:
# Mark initialization as complete
self._is_initializing = False
return self._cache
except Exception as e:
logger.error(f"Recipe Manager: Error initializing cache: {e}", exc_info=True)
# Create empty cache on error
self._cache = RecipeCache(
raw_data=[],
sorted_by_name=[],
sorted_by_date=[]
)
return self._cache
finally:
# Mark initialization as complete
self._is_initializing = False
except asyncio.TimeoutError:
# If we can't acquire the lock in time, return the current cache or an empty one
logger.warning("Timeout acquiring initialization lock - returning current cache state")
return self._cache or RecipeCache(raw_data=[], sorted_by_name=[], sorted_by_date=[])
except Exception as e:
logger.error(f"Unexpected error in get_cached_data: {e}")
return self._cache or RecipeCache(raw_data=[], sorted_by_name=[], sorted_by_date=[])
@@ -448,4 +429,4 @@ class RecipeScanner:
'total_pages': (total_items + page_size - 1) // page_size
}
return result
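`asyncio.timeout`, used in the removed code above, only exists on Python 3.11+, which is the constraint the "Python < 3.11 backward compatibility for timeout" commit in this range also addresses. A hedged sketch of the version-portable pattern using `asyncio.wait_for`:

```python
import asyncio

async def get_cached_data() -> str:
    """Sketch of a timeout-guarded initialization (simplified stand-in for
    the RecipeScanner logic, not the project's actual method)."""
    lock = asyncio.Lock()
    # asyncio.timeout requires Python >= 3.11; asyncio.wait_for is the
    # pre-3.11-compatible way to bound the lock acquisition.
    try:
        await asyncio.wait_for(lock.acquire(), timeout=1.0)
    except asyncio.TimeoutError:
        return "timeout - returning current cache state"
    try:
        return "cache initialized"
    finally:
        lock.release()

print(asyncio.run(get_cached_data()))  # → cache initialized
```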

View File

@@ -45,6 +45,63 @@ class ExifUtils:
except Exception as e:
return None
@staticmethod
def extract_image_metadata(image_path: str) -> Optional[str]:
"""Extract metadata from image including UserComment or parameters field
Args:
image_path (str): Path to the image file
Returns:
Optional[str]: Extracted metadata or None if not found
"""
try:
# First try to open the image
with Image.open(image_path) as img:
# Method 1: Check for parameters in image info
if hasattr(img, 'info') and 'parameters' in img.info:
return img.info['parameters']
# Method 2: Check EXIF UserComment field
if img.format not in ['JPEG', 'TIFF', 'WEBP']:
# For non-JPEG/TIFF/WEBP images, try to get EXIF through PIL
exif = img._getexif()
if exif and piexif.ExifIFD.UserComment in exif:
user_comment = exif[piexif.ExifIFD.UserComment]
if isinstance(user_comment, bytes):
if user_comment.startswith(b'UNICODE\0'):
return user_comment[8:].decode('utf-16be')
return user_comment.decode('utf-8', errors='ignore')
return user_comment
# For JPEG/TIFF/WEBP, use piexif
try:
exif_dict = piexif.load(image_path)
if piexif.ExifIFD.UserComment in exif_dict.get('Exif', {}):
user_comment = exif_dict['Exif'][piexif.ExifIFD.UserComment]
if isinstance(user_comment, bytes):
if user_comment.startswith(b'UNICODE\0'):
user_comment = user_comment[8:].decode('utf-16be')
else:
user_comment = user_comment.decode('utf-8', errors='ignore')
return user_comment
except Exception as e:
logger.debug(f"Error loading EXIF data: {e}")
# Method 3: Check PNG metadata for workflow info (for ComfyUI images)
if img.format == 'PNG':
# Look for workflow or prompt metadata in PNG chunks
for key in img.info:
if key in ['workflow', 'prompt', 'parameters']:
return img.info[key]
return None
except Exception as e:
logger.error(f"Error extracting image metadata: {e}", exc_info=True)
return None
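The UserComment handling above follows the EXIF convention of an 8-byte character-set prefix before the payload. A minimal standalone sketch of just the decoding step:

```python
def decode_user_comment(raw: bytes) -> str:
    """Decode an EXIF UserComment payload, honoring the 8-byte charset prefix."""
    if raw.startswith(b'UNICODE\0'):
        return raw[8:].decode('utf-16be')
    return raw.decode('utf-8', errors='ignore')

blob = b'UNICODE\0' + "a photo of a cat".encode('utf-16be')
print(decode_user_comment(blob))  # → a photo of a cat
```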
@staticmethod
def update_user_comment(image_path: str, user_comment: str) -> str:
@@ -92,18 +149,78 @@ class ExifUtils:
except Exception as e:
logger.error(f"Error updating EXIF data in {image_path}: {e}")
return image_path
@staticmethod
def update_image_metadata(image_path: str, metadata: str) -> str:
"""Update metadata in image's EXIF data or parameters fields
Args:
image_path (str): Path to the image file
metadata (str): Metadata string to save
Returns:
str: Path to the updated image
"""
try:
# Load the image and check its format
with Image.open(image_path) as img:
img_format = img.format
# For PNG, try to update parameters directly
if img_format == 'PNG':
# We'll save with parameters in the PNG info
info_dict = {'parameters': metadata}
img.save(image_path, format='PNG', pnginfo=info_dict)
return image_path
# For WebP format, use PIL's exif parameter directly
elif img_format == 'WEBP':
exif_dict = {'Exif': {piexif.ExifIFD.UserComment: b'UNICODE\0' + metadata.encode('utf-16be')}}
exif_bytes = piexif.dump(exif_dict)
# Save with the exif data
img.save(image_path, format='WEBP', exif=exif_bytes, quality=85)
return image_path
# For other formats, use standard EXIF approach
else:
try:
exif_dict = piexif.load(img.info.get('exif', b''))
except:
exif_dict = {'0th':{}, 'Exif':{}, 'GPS':{}, 'Interop':{}, '1st':{}}
# If no Exif dictionary exists, create one
if 'Exif' not in exif_dict:
exif_dict['Exif'] = {}
# Update the UserComment field - use UNICODE format
unicode_bytes = metadata.encode('utf-16be')
metadata_bytes = b'UNICODE\0' + unicode_bytes
exif_dict['Exif'][piexif.ExifIFD.UserComment] = metadata_bytes
# Convert EXIF dict back to bytes
exif_bytes = piexif.dump(exif_dict)
# Save the image with updated EXIF data
img.save(image_path, exif=exif_bytes)
return image_path
except Exception as e:
logger.error(f"Error updating metadata in {image_path}: {e}")
return image_path
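One caveat worth flagging in the PNG branch above: Pillow's PNG writer expects a `PngInfo` object for the `pnginfo` save parameter (it iterates `info.chunks`), so passing a plain dict as written would likely fail rather than embed the text. A hedged corrected sketch:

```python
import os
import tempfile
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_png_with_parameters(img: Image.Image, path: str, metadata: str) -> None:
    """Write A1111-style 'parameters' text into a PNG tEXt chunk via PngInfo."""
    info = PngInfo()
    info.add_text("parameters", metadata)
    img.save(path, format="PNG", pnginfo=info)

# Round trip on a 1x1 image
path = os.path.join(tempfile.mkdtemp(), "t.png")
save_png_with_parameters(Image.new("RGB", (1, 1)), path, "Steps: 20, Sampler: Euler")
with Image.open(path) as im:
    print(im.info["parameters"])  # → Steps: 20, Sampler: Euler
```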
@staticmethod
def append_recipe_metadata(image_path, recipe_data) -> str:
"""Append recipe metadata to an image's EXIF data"""
try:
# First, extract existing user comment
user_comment = ExifUtils.extract_user_comment(image_path)
# First, extract existing metadata
metadata = ExifUtils.extract_image_metadata(image_path)
# Check if there's already recipe metadata in the user comment
if user_comment:
# Check if there's already recipe metadata
if metadata:
# Remove any existing recipe metadata
user_comment = ExifUtils.remove_recipe_metadata(user_comment)
metadata = ExifUtils.remove_recipe_metadata(metadata)
# Prepare simplified loras data
simplified_loras = []
@@ -133,11 +250,11 @@ class ExifUtils:
# Create the recipe metadata marker
recipe_metadata_marker = f"Recipe metadata: {recipe_metadata_json}"
# Append to existing user comment or create new one
new_user_comment = f"{user_comment} \n {recipe_metadata_marker}" if user_comment else recipe_metadata_marker
# Append to existing metadata or create new one
new_metadata = f"{metadata} \n {recipe_metadata_marker}" if metadata else recipe_metadata_marker
# Write back to the image
return ExifUtils.update_user_comment(image_path, new_user_comment)
return ExifUtils.update_image_metadata(image_path, new_metadata)
except Exception as e:
logger.error(f"Error appending recipe metadata: {e}", exc_info=True)
return image_path
@@ -184,11 +301,11 @@ class ExifUtils:
"""
try:
# Extract metadata if needed
user_comment = None
metadata = None
if preserve_metadata:
if isinstance(image_data, str) and os.path.exists(image_data):
# It's a file path
user_comment = ExifUtils.extract_user_comment(image_data)
metadata = ExifUtils.extract_image_metadata(image_data)
img = Image.open(image_data)
else:
# It's binary data
@@ -199,7 +316,7 @@ class ExifUtils:
with tempfile.NamedTemporaryFile(suffix='.jpg', delete=False) as temp_file:
temp_path = temp_file.name
temp_file.write(image_data)
user_comment = ExifUtils.extract_user_comment(temp_path)
metadata = ExifUtils.extract_image_metadata(temp_path)
os.unlink(temp_path)
else:
# Just open the image without extracting metadata
@@ -239,14 +356,14 @@ class ExifUtils:
optimized_data = output.getvalue()
# If we need to preserve metadata, write it to a temporary file
if preserve_metadata and user_comment:
if preserve_metadata and metadata:
# For WebP format, we'll directly save with metadata
if format.lower() == 'webp':
# Create a new BytesIO with metadata
output_with_metadata = BytesIO()
# Create EXIF data with user comment
exif_dict = {'Exif': {piexif.ExifIFD.UserComment: b'UNICODE\0' + user_comment.encode('utf-16be')}}
exif_dict = {'Exif': {piexif.ExifIFD.UserComment: b'UNICODE\0' + metadata.encode('utf-16be')}}
exif_bytes = piexif.dump(exif_dict)
# Save with metadata
@@ -260,7 +377,7 @@ class ExifUtils:
temp_file.write(optimized_data)
# Add the metadata back
ExifUtils.update_user_comment(temp_path, user_comment)
ExifUtils.update_image_metadata(temp_path, metadata)
# Read the file with metadata
with open(temp_path, 'rb') as f:
@@ -466,14 +583,14 @@ class ExifUtils:
workflow_data = img.info[key]
break
# If no workflow data found in PNG chunks, try EXIF as fallback
# If no workflow data found in PNG chunks, try extract_image_metadata as fallback
if not workflow_data:
user_comment = ExifUtils.extract_user_comment(image_path)
if user_comment and '{' in user_comment and '}' in user_comment:
metadata = ExifUtils.extract_image_metadata(image_path)
if metadata and '{' in metadata and '}' in metadata:
# Try to extract JSON part
json_start = user_comment.find('{')
json_end = user_comment.rfind('}') + 1
workflow_data = user_comment[json_start:json_end]
json_start = metadata.find('{')
json_end = metadata.rfind('}') + 1
workflow_data = metadata[json_start:json_end]
# Parse workflow data if found
if workflow_data:

View File

@@ -44,6 +44,138 @@ class RecipeMetadataParser(ABC):
Dict containing parsed recipe data with standardized format
"""
pass
async def populate_lora_from_civitai(self, lora_entry: Dict[str, Any], civitai_info: Dict[str, Any],
recipe_scanner=None, base_model_counts=None, hash_value=None) -> Dict[str, Any]:
"""
Populate a lora entry with information from Civitai API response
Args:
lora_entry: The lora entry to populate
civitai_info: The response from Civitai API
recipe_scanner: Optional recipe scanner for local file lookup
base_model_counts: Optional dict to track base model counts
hash_value: Optional hash value to use if not available in civitai_info
Returns:
The populated lora_entry dict
"""
try:
if civitai_info and civitai_info.get("error") != "Model not found":
# Check if this is an early access lora
if civitai_info.get('earlyAccessEndsAt'):
# Convert earlyAccessEndsAt to a human-readable date
early_access_date = civitai_info.get('earlyAccessEndsAt', '')
lora_entry['isEarlyAccess'] = True
lora_entry['earlyAccessEndsAt'] = early_access_date
# Update model name if available
if 'model' in civitai_info and 'name' in civitai_info['model']:
lora_entry['name'] = civitai_info['model']['name']
# Update version if available
if 'name' in civitai_info:
lora_entry['version'] = civitai_info.get('name', '')
# Get thumbnail URL from first image
if 'images' in civitai_info and civitai_info['images']:
lora_entry['thumbnailUrl'] = civitai_info['images'][0].get('url', '')
# Get base model
current_base_model = civitai_info.get('baseModel', '')
lora_entry['baseModel'] = current_base_model
# Update base model counts if tracking them
if base_model_counts is not None and current_base_model:
base_model_counts[current_base_model] = base_model_counts.get(current_base_model, 0) + 1
# Get download URL
lora_entry['downloadUrl'] = civitai_info.get('downloadUrl', '')
# Process file information if available
if 'files' in civitai_info:
model_file = next((file for file in civitai_info.get('files', [])
if file.get('type') == 'Model'), None)
if model_file:
# Get size
lora_entry['size'] = model_file.get('sizeKB', 0) * 1024
# Get SHA256 hash
sha256 = model_file.get('hashes', {}).get('SHA256', hash_value)
if sha256:
lora_entry['hash'] = sha256.lower()
# Check if exists locally
if recipe_scanner and lora_entry['hash']:
lora_scanner = recipe_scanner._lora_scanner
exists_locally = lora_scanner.has_lora_hash(lora_entry['hash'])
if exists_locally:
try:
local_path = lora_scanner.get_lora_path_by_hash(lora_entry['hash'])
lora_entry['existsLocally'] = True
lora_entry['localPath'] = local_path
lora_entry['file_name'] = os.path.splitext(os.path.basename(local_path))[0]
# Get thumbnail from local preview if available
lora_cache = await lora_scanner.get_cached_data()
lora_item = next((item for item in lora_cache.raw_data
if item['sha256'].lower() == lora_entry['hash'].lower()), None)
if lora_item and 'preview_url' in lora_item:
lora_entry['thumbnailUrl'] = config.get_preview_static_url(lora_item['preview_url'])
except Exception as e:
logger.error(f"Error getting local lora path: {e}")
else:
# For missing LoRAs, get file_name from model_file.name
file_name = model_file.get('name', '')
lora_entry['file_name'] = os.path.splitext(file_name)[0] if file_name else ''
else:
# Model not found or deleted
lora_entry['isDeleted'] = True
lora_entry['thumbnailUrl'] = '/loras_static/images/no-preview.png'
except Exception as e:
logger.error(f"Error populating lora from Civitai info: {e}")
return lora_entry
async def populate_checkpoint_from_civitai(self, checkpoint: Dict[str, Any], civitai_info: Dict[str, Any]) -> Dict[str, Any]:
"""
Populate checkpoint information from Civitai API response
Args:
checkpoint: The checkpoint entry to populate
civitai_info: The response from Civitai API
Returns:
The populated checkpoint dict
"""
try:
if civitai_info and civitai_info.get("error") != "Model not found":
# Update model name if available
if 'model' in civitai_info and 'name' in civitai_info['model']:
checkpoint['name'] = civitai_info['model']['name']
# Update version if available
if 'name' in civitai_info:
checkpoint['version'] = civitai_info.get('name', '')
# Get thumbnail URL from first image
if 'images' in civitai_info and civitai_info['images']:
checkpoint['thumbnailUrl'] = civitai_info['images'][0].get('url', '')
# Get base model
checkpoint['baseModel'] = civitai_info.get('baseModel', '')
# Get download URL
checkpoint['downloadUrl'] = civitai_info.get('downloadUrl', '')
else:
# Model not found or deleted
checkpoint['isDeleted'] = True
except Exception as e:
logger.error(f"Error populating checkpoint from Civitai info: {e}")
return checkpoint
class RecipeFormatParser(RecipeMetadataParser):
@@ -110,33 +242,14 @@ class RecipeFormatParser(RecipeMetadataParser):
if lora.get('modelVersionId') and civitai_client:
try:
civitai_info = await civitai_client.get_model_version_info(lora['modelVersionId'])
if civitai_info and civitai_info.get("error") != "Model not found":
# Check if this is an early access lora
if civitai_info.get('earlyAccessEndsAt'):
# Convert earlyAccessEndsAt to a human-readable date
early_access_date = civitai_info.get('earlyAccessEndsAt', '')
lora_entry['isEarlyAccess'] = True
lora_entry['earlyAccessEndsAt'] = early_access_date
# Get thumbnail URL from first image
if 'images' in civitai_info and civitai_info['images']:
lora_entry['thumbnailUrl'] = civitai_info['images'][0].get('url', '')
# Get base model
lora_entry['baseModel'] = civitai_info.get('baseModel', '')
# Get download URL
lora_entry['downloadUrl'] = civitai_info.get('downloadUrl', '')
# Get size from files if available
if 'files' in civitai_info:
model_file = next((file for file in civitai_info.get('files', [])
if file.get('type') == 'Model'), None)
if model_file:
lora_entry['size'] = model_file.get('sizeKB', 0) * 1024
else:
lora_entry['isDeleted'] = True
lora_entry['thumbnailUrl'] = '/loras_static/images/no-preview.png'
# Populate lora entry with Civitai info
lora_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
None, # No need to track base model counts
lora['hash']
)
except Exception as e:
logger.error(f"Error fetching Civitai info for LoRA: {e}")
lora_entry['thumbnailUrl'] = '/loras_static/images/no-preview.png'
@@ -222,57 +335,16 @@ class StandardMetadataParser(RecipeMetadataParser):
# Get additional info from Civitai if client is available
if civitai_client:
civitai_info = await civitai_client.get_model_version_info(model_version_id)
# Check if this LoRA exists locally by SHA256 hash
if civitai_info and civitai_info.get("error") != "Model not found":
# Check if this is an early access lora
if civitai_info.get('earlyAccessEndsAt'):
# Convert earlyAccessEndsAt to a human-readable date
early_access_date = civitai_info.get('earlyAccessEndsAt', '')
lora_entry['isEarlyAccess'] = True
lora_entry['earlyAccessEndsAt'] = early_access_date
# LoRA exists on Civitai, process its information
if 'files' in civitai_info:
# Find the model file (type="Model") in the files list
model_file = next((file for file in civitai_info.get('files', [])
if file.get('type') == 'Model'), None)
if model_file and recipe_scanner:
sha256 = model_file.get('hashes', {}).get('SHA256', '')
if sha256:
lora_scanner = recipe_scanner._lora_scanner
exists_locally = lora_scanner.has_lora_hash(sha256)
if exists_locally:
local_path = lora_scanner.get_lora_path_by_hash(sha256)
lora_entry['existsLocally'] = True
lora_entry['localPath'] = local_path
lora_entry['file_name'] = os.path.splitext(os.path.basename(local_path))[0]
else:
# For missing LoRAs, get file_name from model_file.name
file_name = model_file.get('name', '')
lora_entry['file_name'] = os.path.splitext(file_name)[0] if file_name else ''
lora_entry['hash'] = sha256
lora_entry['size'] = model_file.get('sizeKB', 0) * 1024
# Get thumbnail URL from first image
if 'images' in civitai_info and civitai_info['images']:
lora_entry['thumbnailUrl'] = civitai_info['images'][0].get('url', '')
# Get base model and update counts
current_base_model = civitai_info.get('baseModel', '')
lora_entry['baseModel'] = current_base_model
if current_base_model:
base_model_counts[current_base_model] = base_model_counts.get(current_base_model, 0) + 1
# Get download URL
lora_entry['downloadUrl'] = civitai_info.get('downloadUrl', '')
else:
# LoRA is deleted from Civitai or not found
lora_entry['isDeleted'] = True
lora_entry['thumbnailUrl'] = '/loras_static/images/no-preview.png'
try:
civitai_info = await civitai_client.get_model_version_info(model_version_id)
# Populate lora entry with Civitai info
lora_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner
)
except Exception as e:
logger.error(f"Error fetching Civitai info for LoRA: {e}")
loras.append(lora_entry)
@@ -394,7 +466,9 @@ class A1111MetadataParser(RecipeMetadataParser):
# Extract LoRA information from prompt
lora_weights = {}
lora_matches = re.findall(self.LORA_PATTERN, prompt)
for lora_name, weights in lora_matches:
# Take only the first strength value (before the colon)
weight = weights.split(':')[0]
lora_weights[lora_name.strip()] = float(weight.strip())
# Remove LoRA patterns from prompt
@@ -436,61 +510,18 @@ class A1111MetadataParser(RecipeMetadataParser):
'isDeleted': False
}
# Get info from Civitai by hash
# Get info from Civitai by hash if available
if civitai_client and hash_value:
try:
civitai_info = await civitai_client.get_model_by_hash(hash_value)
# Populate lora entry with Civitai info
lora_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
base_model_counts,
hash_value
)
except Exception as e:
logger.error(f"Error fetching Civitai info for LoRA hash {hash_value}: {e}")
@@ -523,6 +554,499 @@ class A1111MetadataParser(RecipeMetadataParser):
return {"error": str(e), "loras": []}
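The first-strength handling above can be sketched standalone. This is a minimal illustration, not the parser itself; the function name `extract_lora_weights` is invented for the example. It assumes A1111-style `<lora:name:weight>` tags, where the weight part may carry an extra clip-strength value (`<lora:name:1.0:0.7>`) that the parser discards:

```python
import re

# Same pattern as the parser: name up to the first colon, weights up to '>'
LORA_PATTERN = r'<lora:([^:]+):([^>]+)>'

def extract_lora_weights(prompt: str) -> dict:
    weights = {}
    for name, raw in re.findall(LORA_PATTERN, prompt):
        # Keep only the model strength, i.e. the value before any ':clip' part
        weights[name.strip()] = float(raw.split(':')[0].strip())
    return weights

prompt = "a castle <lora:styleA:0.8> at dusk <lora:styleB:1.0:0.7>"
print(extract_lora_weights(prompt))  # {'styleA': 0.8, 'styleB': 1.0}
```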
class ComfyMetadataParser(RecipeMetadataParser):
"""Parser for Civitai ComfyUI metadata JSON format"""
METADATA_MARKER = r"class_type"
def is_metadata_matching(self, user_comment: str) -> bool:
"""Check if the user comment matches the ComfyUI metadata format"""
try:
data = json.loads(user_comment)
# Check if it contains class_type nodes typical of ComfyUI workflow
return isinstance(data, dict) and any(isinstance(v, dict) and 'class_type' in v for v in data.values())
except (json.JSONDecodeError, TypeError):
return False
async def parse_metadata(self, user_comment: str, recipe_scanner=None, civitai_client=None) -> Dict[str, Any]:
"""Parse metadata from Civitai ComfyUI metadata format"""
try:
data = json.loads(user_comment)
loras = []
# Find all LoraLoader nodes
lora_nodes = {k: v for k, v in data.items() if isinstance(v, dict) and v.get('class_type') == 'LoraLoader'}
if not lora_nodes:
return {"error": "No LoRA information found in this ComfyUI workflow", "loras": []}
# Process each LoraLoader node
for node_id, node in lora_nodes.items():
if 'inputs' not in node or 'lora_name' not in node['inputs']:
continue
lora_name = node['inputs'].get('lora_name', '')
# Parse the URN to extract model ID and version ID
# Format: "urn:air:sdxl:lora:civitai:1107767@1253442"
lora_id_match = re.search(r'civitai:(\d+)@(\d+)', lora_name)
if not lora_id_match:
continue
model_id = lora_id_match.group(1)
model_version_id = lora_id_match.group(2)
# Get strength from node inputs
weight = node['inputs'].get('strength_model', 1.0)
# Initialize lora entry with default values
lora_entry = {
'id': model_version_id,
'modelId': model_id,
'name': f"Lora {model_id}", # Default name
'version': '',
'type': 'lora',
'weight': weight,
'existsLocally': False,
'localPath': None,
'file_name': '',
'hash': '',
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
}
# Get additional info from Civitai if client is available
if civitai_client:
try:
civitai_info = await civitai_client.get_model_version_info(model_version_id)
# Populate lora entry with Civitai info
lora_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner
)
except Exception as e:
logger.error(f"Error fetching Civitai info for LoRA: {e}")
loras.append(lora_entry)
# Find checkpoint info
checkpoint_nodes = {k: v for k, v in data.items() if isinstance(v, dict) and v.get('class_type') == 'CheckpointLoaderSimple'}
checkpoint = None
checkpoint_id = None
checkpoint_version_id = None
if checkpoint_nodes:
# Get the first checkpoint node
checkpoint_node = next(iter(checkpoint_nodes.values()))
if 'inputs' in checkpoint_node and 'ckpt_name' in checkpoint_node['inputs']:
checkpoint_name = checkpoint_node['inputs']['ckpt_name']
# Parse checkpoint URN
checkpoint_match = re.search(r'civitai:(\d+)@(\d+)', checkpoint_name)
if checkpoint_match:
checkpoint_id = checkpoint_match.group(1)
checkpoint_version_id = checkpoint_match.group(2)
checkpoint = {
'id': checkpoint_version_id,
'modelId': checkpoint_id,
'name': f"Checkpoint {checkpoint_id}",
'version': '',
'type': 'checkpoint'
}
# Get additional checkpoint info from Civitai
if civitai_client:
try:
civitai_info = await civitai_client.get_model_version_info(checkpoint_version_id)
# Populate checkpoint with Civitai info
checkpoint = await self.populate_checkpoint_from_civitai(checkpoint, civitai_info)
except Exception as e:
logger.error(f"Error fetching Civitai info for checkpoint: {e}")
# Extract generation parameters
gen_params = {}
# First try to get from extraMetadata
if 'extraMetadata' in data:
try:
# extraMetadata is a JSON string that needs to be parsed
extra_metadata = json.loads(data['extraMetadata'])
# Map fields from extraMetadata to our standard format
mapping = {
'prompt': 'prompt',
'negativePrompt': 'negative_prompt',
'steps': 'steps',
'sampler': 'sampler',
'cfgScale': 'cfg_scale',
'seed': 'seed'
}
for src_key, dest_key in mapping.items():
if src_key in extra_metadata:
gen_params[dest_key] = extra_metadata[src_key]
# If size info is available, format as "width x height"
if 'width' in extra_metadata and 'height' in extra_metadata:
gen_params['size'] = f"{extra_metadata['width']}x{extra_metadata['height']}"
except Exception as e:
logger.error(f"Error parsing extraMetadata: {e}")
# If extraMetadata doesn't have all the info, try to get from nodes
if not gen_params or len(gen_params) < 3: # At least we want prompt, negative_prompt, and steps
# Find positive prompt node
positive_nodes = {k: v for k, v in data.items() if isinstance(v, dict) and
v.get('class_type', '').endswith('CLIPTextEncode') and
v.get('_meta', {}).get('title') == 'Positive'}
if positive_nodes:
positive_node = next(iter(positive_nodes.values()))
if 'inputs' in positive_node and 'text' in positive_node['inputs']:
gen_params['prompt'] = positive_node['inputs']['text']
# Find negative prompt node
negative_nodes = {k: v for k, v in data.items() if isinstance(v, dict) and
v.get('class_type', '').endswith('CLIPTextEncode') and
v.get('_meta', {}).get('title') == 'Negative'}
if negative_nodes:
negative_node = next(iter(negative_nodes.values()))
if 'inputs' in negative_node and 'text' in negative_node['inputs']:
gen_params['negative_prompt'] = negative_node['inputs']['text']
# Find KSampler node for other parameters
ksampler_nodes = {k: v for k, v in data.items() if isinstance(v, dict) and v.get('class_type') == 'KSampler'}
if ksampler_nodes:
ksampler_node = next(iter(ksampler_nodes.values()))
if 'inputs' in ksampler_node:
inputs = ksampler_node['inputs']
if 'sampler_name' in inputs:
gen_params['sampler'] = inputs['sampler_name']
if 'steps' in inputs:
gen_params['steps'] = inputs['steps']
if 'cfg' in inputs:
gen_params['cfg_scale'] = inputs['cfg']
if 'seed' in inputs:
gen_params['seed'] = inputs['seed']
# Determine base model from loras info
base_model = None
if loras:
# Use the most common base model from loras
base_models = [lora['baseModel'] for lora in loras if lora.get('baseModel')]
if base_models:
from collections import Counter
base_model_counts = Counter(base_models)
base_model = base_model_counts.most_common(1)[0][0]
return {
'base_model': base_model,
'loras': loras,
'checkpoint': checkpoint,
'gen_params': gen_params,
'from_comfy_metadata': True
}
except Exception as e:
logger.error(f"Error parsing ComfyUI metadata: {e}", exc_info=True)
return {"error": str(e), "loras": []}
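The URN parsing used by `ComfyMetadataParser` can be shown in isolation. A minimal sketch (helper name `parse_civitai_urn` is illustrative): Civitai AIR identifiers embed `modelId@modelVersionId` after the `civitai:` segment, which is all the parser needs to call the version API:

```python
import re

def parse_civitai_urn(lora_name: str):
    # e.g. "urn:air:sdxl:lora:civitai:1107767@1253442" -> ('1107767', '1253442')
    m = re.search(r'civitai:(\d+)@(\d+)', lora_name)
    return (m.group(1), m.group(2)) if m else None

print(parse_civitai_urn("urn:air:sdxl:lora:civitai:1107767@1253442"))
# ('1107767', '1253442')
```

Plain filenames (local LoRAs that are not Civitai URNs) yield `None`, which is why the node loop above skips entries without a match.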
class MetaFormatParser(RecipeMetadataParser):
"""Parser for images with meta format metadata (Lora_N Model hash format)"""
METADATA_MARKER = r'Lora_\d+ Model hash:'
def is_metadata_matching(self, user_comment: str) -> bool:
"""Check if the user comment matches the metadata format"""
return re.search(self.METADATA_MARKER, user_comment, re.IGNORECASE | re.DOTALL) is not None
async def parse_metadata(self, user_comment: str, recipe_scanner=None, civitai_client=None) -> Dict[str, Any]:
"""Parse metadata from images with meta format metadata"""
try:
# Extract prompt and negative prompt
parts = user_comment.split('Negative prompt:', 1)
prompt = parts[0].strip()
# Initialize metadata
metadata = {"prompt": prompt, "loras": []}
# Extract negative prompt and parameters if available
if len(parts) > 1:
negative_and_params = parts[1]
# Extract negative prompt - everything until the first parameter (usually "Steps:")
param_start = re.search(r'([A-Za-z]+): ', negative_and_params)
if param_start:
neg_prompt = negative_and_params[:param_start.start()].strip()
metadata["negative_prompt"] = neg_prompt
params_section = negative_and_params[param_start.start():]
else:
params_section = negative_and_params
# Extract key-value parameters (Steps, Sampler, Seed, etc.)
param_pattern = r'([A-Za-z_0-9 ]+): ([^,]+)'
params = re.findall(param_pattern, params_section)
for key, value in params:
clean_key = key.strip().lower().replace(' ', '_')
metadata[clean_key] = value.strip()
# Extract LoRA information
# Pattern to match lora entries: Lora_0 Model name: ArtVador I.safetensors, Lora_0 Model hash: 08f7133a58, etc.
lora_pattern = r'Lora_(\d+) Model name: ([^,]+), Lora_\1 Model hash: ([^,]+), Lora_\1 Strength model: ([^,]+), Lora_\1 Strength clip: ([^,]+)'
lora_matches = re.findall(lora_pattern, user_comment)
# If the regular pattern doesn't match, try a more flexible approach
if not lora_matches:
# First find all Lora indices
lora_indices = set(re.findall(r'Lora_(\d+)', user_comment))
# For each index, extract the information
for idx in lora_indices:
lora_info = {}
# Extract model name
name_match = re.search(f'Lora_{idx} Model name: ([^,]+)', user_comment)
if name_match:
lora_info['name'] = name_match.group(1).strip()
# Extract model hash
hash_match = re.search(f'Lora_{idx} Model hash: ([^,]+)', user_comment)
if hash_match:
lora_info['hash'] = hash_match.group(1).strip()
# Extract strength model
strength_model_match = re.search(f'Lora_{idx} Strength model: ([^,]+)', user_comment)
if strength_model_match:
lora_info['strength_model'] = float(strength_model_match.group(1).strip())
# Extract strength clip
strength_clip_match = re.search(f'Lora_{idx} Strength clip: ([^,]+)', user_comment)
if strength_clip_match:
lora_info['strength_clip'] = float(strength_clip_match.group(1).strip())
# Only add if we have at least name and hash
if 'name' in lora_info and 'hash' in lora_info:
lora_matches.append((idx, lora_info['name'], lora_info['hash'],
str(lora_info.get('strength_model', 1.0)),
str(lora_info.get('strength_clip', 1.0))))
# Process LoRAs
base_model_counts = {}
loras = []
for match in lora_matches:
if len(match) == 5: # Regular pattern match
idx, name, hash_value, strength_model, strength_clip = match
else: # Flexible-approach matches are normalized to 5-tuples above
continue # defensive; should be unreachable
# Clean up the values
name = name.strip()
if name.endswith('.safetensors'):
name = name[:-12] # Remove .safetensors extension
hash_value = hash_value.strip()
weight = float(strength_model) # Use model strength as weight
# Initialize lora entry with default values
lora_entry = {
'name': name,
'type': 'lora',
'weight': weight,
'existsLocally': False,
'localPath': None,
'file_name': name,
'hash': hash_value,
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
}
# Get info from Civitai by hash if available
if civitai_client and hash_value:
try:
civitai_info = await civitai_client.get_model_by_hash(hash_value)
# Populate lora entry with Civitai info
lora_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
base_model_counts,
hash_value
)
except Exception as e:
logger.error(f"Error fetching Civitai info for LoRA hash {hash_value}: {e}")
loras.append(lora_entry)
# Extract model information
model = None
if 'model' in metadata:
model = metadata['model']
# Set base_model to the most common one from civitai_info
base_model = None
if base_model_counts:
base_model = max(base_model_counts.items(), key=lambda x: x[1])[0]
# Extract generation parameters for recipe metadata
gen_params = {}
for key in GEN_PARAM_KEYS:
if key in metadata:
gen_params[key] = metadata.get(key, '')
# Try to extract size information if available
if 'width' in metadata and 'height' in metadata:
gen_params['size'] = f"{metadata['width']}x{metadata['height']}"
return {
'base_model': base_model,
'loras': loras,
'gen_params': gen_params,
'raw_metadata': metadata,
'from_meta_format': True
}
except Exception as e:
logger.error(f"Error parsing meta format metadata: {e}", exc_info=True)
return {"error": str(e), "loras": []}
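The field-by-field fallback in `MetaFormatParser` can be sketched on its own. This is an illustration under simplifying assumptions (`extract_meta_loras` is an invented name, and it drops the `strength_clip` field the real parser also records): each `Lora_N` index is collected first, then each field is matched independently so partially-written entries still parse:

```python
import re

def extract_meta_loras(comment: str):
    """Collect Lora_N entries field by field from 'meta format' metadata."""
    loras = []
    for idx in sorted(set(re.findall(r'Lora_(\d+)', comment))):
        name = re.search(f'Lora_{idx} Model name: ([^,]+)', comment)
        hash_ = re.search(f'Lora_{idx} Model hash: ([^,]+)', comment)
        strength = re.search(f'Lora_{idx} Strength model: ([^,]+)', comment)
        if name and hash_:  # require at least a name and a hash, as the parser does
            loras.append({
                'name': name.group(1).strip().removesuffix('.safetensors'),
                'hash': hash_.group(1).strip(),
                'weight': float(strength.group(1)) if strength else 1.0,
            })
    return loras

sample = ("Lora_0 Model name: ArtVador I.safetensors, Lora_0 Model hash: 08f7133a58, "
          "Lora_0 Strength model: 0.65, Lora_0 Strength clip: 0.65")
print(extract_meta_loras(sample))
```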
class ImageSaverMetadataParser(RecipeMetadataParser):
"""Parser for ComfyUI Image Saver plugin metadata format"""
METADATA_MARKER = r'Hashes: \{"LORA:'
LORA_PATTERN = r'<lora:([^:]+):([^>]+)>'
HASH_PATTERN = r'Hashes: (\{.*?\})'
def is_metadata_matching(self, user_comment: str) -> bool:
"""Check if the user comment matches the Image Saver metadata format"""
return re.search(self.METADATA_MARKER, user_comment, re.IGNORECASE | re.DOTALL) is not None
async def parse_metadata(self, user_comment: str, recipe_scanner=None, civitai_client=None) -> Dict[str, Any]:
"""Parse metadata from Image Saver plugin format"""
try:
# Extract prompt and negative prompt
parts = user_comment.split('Negative prompt:', 1)
prompt = parts[0].strip()
# Initialize metadata
metadata = {"prompt": prompt, "loras": []}
# Extract negative prompt and parameters
if len(parts) > 1:
negative_and_params = parts[1]
# Extract negative prompt
if "Steps:" in negative_and_params:
neg_prompt = negative_and_params.split("Steps:", 1)[0].strip()
metadata["negative_prompt"] = neg_prompt
# Extract key-value parameters (Steps, Sampler, CFG scale, etc.)
param_pattern = r'([A-Za-z ]+): ([^,]+)'
params = re.findall(param_pattern, negative_and_params)
for key, value in params:
clean_key = key.strip().lower().replace(' ', '_')
metadata[clean_key] = value.strip()
# Extract LoRA information from prompt
lora_weights = {}
lora_matches = re.findall(self.LORA_PATTERN, prompt)
for lora_name, weight in lora_matches:
lora_weights[lora_name.strip()] = float(weight.split(':')[0].strip())
# Remove LoRA patterns from prompt
metadata["prompt"] = re.sub(self.LORA_PATTERN, '', prompt).strip()
# Extract LoRA hashes from Hashes section
lora_hashes = {}
hash_match = re.search(self.HASH_PATTERN, user_comment)
if hash_match:
try:
hashes = json.loads(hash_match.group(1))
for key, hash_value in hashes.items():
if key.startswith('LORA:'):
lora_name = key[5:] # Remove 'LORA:' prefix
lora_hashes[lora_name] = hash_value.strip()
except json.JSONDecodeError:
pass
# Process LoRAs and collect base models
base_model_counts = {}
loras = []
# Process each LoRA with hash and weight
for lora_name, hash_value in lora_hashes.items():
weight = lora_weights.get(lora_name, 1.0)
# Initialize lora entry with default values
lora_entry = {
'name': lora_name,
'type': 'lora',
'weight': weight,
'existsLocally': False,
'localPath': None,
'file_name': lora_name,
'hash': hash_value,
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
}
# Get info from Civitai by hash if available
if civitai_client and hash_value:
try:
civitai_info = await civitai_client.get_model_by_hash(hash_value)
# Populate lora entry with Civitai info
lora_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
base_model_counts,
hash_value
)
except Exception as e:
logger.error(f"Error fetching Civitai info for LoRA hash {hash_value}: {e}")
loras.append(lora_entry)
# Set base_model to the most common one from civitai_info
base_model = None
if base_model_counts:
base_model = max(base_model_counts.items(), key=lambda x: x[1])[0]
# Extract generation parameters for recipe metadata
gen_params = {}
for key in GEN_PARAM_KEYS:
if key in metadata:
gen_params[key] = metadata.get(key, '')
# Add model information if available
if 'model' in metadata:
gen_params['checkpoint'] = metadata['model']
return {
'base_model': base_model,
'loras': loras,
'gen_params': gen_params,
'raw_metadata': metadata
}
except Exception as e:
logger.error(f"Error parsing Image Saver metadata: {e}", exc_info=True)
return {"error": str(e), "loras": []}
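The `Hashes` extraction above can be sketched as a small standalone helper (the name `extract_lora_hashes` is illustrative). The Image Saver plugin writes a JSON object whose LoRA keys are prefixed with `LORA:`, alongside `model`/`vae` entries that the parser ignores:

```python
import json
import re

HASH_PATTERN = r'Hashes: (\{.*?\})'  # non-greedy: stop at the first closing brace

def extract_lora_hashes(comment: str) -> dict:
    m = re.search(HASH_PATTERN, comment)
    if not m:
        return {}
    try:
        hashes = json.loads(m.group(1))
    except json.JSONDecodeError:
        return {}
    # Keys look like "LORA:my_lora"; strip the prefix, skip "model"/"vae" entries
    return {k[len('LORA:'):]: v.strip()
            for k, v in hashes.items() if k.startswith('LORA:')}

comment = 'Steps: 20, Hashes: {"LORA:ck-charcoal": "34d36c17c1", "model": "4610115bb0"}'
print(extract_lora_hashes(comment))  # {'ck-charcoal': '34d36c17c1'}
```

The non-greedy match is safe here because the `Hashes` object is flat (no nested braces) in this format.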
class RecipeParserFactory:
"""Factory for creating recipe metadata parsers"""
@@ -537,11 +1061,23 @@ class RecipeParserFactory:
Returns:
Appropriate RecipeMetadataParser implementation
"""
# Try ComfyMetadataParser first since it requires valid JSON
try:
if ComfyMetadataParser().is_metadata_matching(user_comment):
return ComfyMetadataParser()
except Exception:
# If JSON parsing fails, move on to other parsers
pass
if RecipeFormatParser().is_metadata_matching(user_comment):
return RecipeFormatParser()
elif StandardMetadataParser().is_metadata_matching(user_comment):
return StandardMetadataParser()
elif A1111MetadataParser().is_metadata_matching(user_comment):
return A1111MetadataParser()
elif MetaFormatParser().is_metadata_matching(user_comment):
return MetaFormatParser()
elif ImageSaverMetadataParser().is_metadata_matching(user_comment):
return ImageSaverMetadataParser()
else:
return None
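The ordering matters: ComfyUI workflow metadata is raw JSON, so probing it first (inside a try/except) avoids the other parsers' regex markers accidentally matching inside a JSON string. A minimal sketch of that JSON probe, mirroring `is_metadata_matching` (the helper name is invented for the example):

```python
import json

def looks_like_comfy_workflow(user_comment: str) -> bool:
    """True if the comment is a JSON object with 'class_type' node values."""
    try:
        data = json.loads(user_comment)
    except (json.JSONDecodeError, TypeError):
        return False
    return isinstance(data, dict) and any(
        isinstance(v, dict) and 'class_type' in v for v in data.values())

print(looks_like_comfy_workflow('{"1": {"class_type": "KSampler", "inputs": {}}}'))
# True
```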

View File

@@ -40,7 +40,45 @@ def download_twitter_image(url):
except Exception as e:
print(f"Error downloading twitter image: {e}")
return None
def download_civitai_image(url):
"""Download the avatar/preview image from a Civitai page by scraping its HTML
Args:
url (str): The Civitai page URL to scrape
Returns:
str: Path to the downloaded temporary image file, or None if no matching image was found
"""
try:
# Download page content
response = requests.get(url)
response.raise_for_status()
# Parse HTML
soup = BeautifulSoup(response.text, 'html.parser')
# Find image with specific class and style attributes
image = soup.select_one('img.EdgeImage_image__iH4_q.max-h-full.w-auto.max-w-full')
if not image or 'src' not in image.attrs:
return None
image_url = image['src']
# Download image
image_response = requests.get(image_url)
image_response.raise_for_status()
# Save to temp file
with tempfile.NamedTemporaryFile(delete=False, suffix='.jpg') as temp_file:
temp_file.write(image_response.content)
return temp_file.name
except Exception as e:
print(f"Error downloading civitai avatar: {e}")
return None
def fuzzy_match(text: str, pattern: str, threshold: float = 0.7) -> bool:
"""
Check if text matches pattern using fuzzy matching.

View File

@@ -1,7 +1,7 @@
[project]
name = "comfyui-lora-manager"
description = "LoRA Manager for ComfyUI - Access it at http://localhost:8188/loras for managing LoRA models with previews and metadata integration."
version = "0.8.0"
version = "0.8.1"
license = {file = "LICENSE"}
dependencies = [
"aiohttp",

View File

@@ -95,7 +95,6 @@
"onSite": false,
"remixOfId": null
}
// more images here
],
"downloadUrl": "https://civitai.com/api/download/models/1387174"
}

View File

@@ -0,0 +1,153 @@
{
"resource-stack": {
"class_type": "CheckpointLoaderSimple",
"inputs": { "ckpt_name": "urn:air:sdxl:checkpoint:civitai:827184@1410435" }
},
"resource-stack-1": {
"class_type": "LoraLoader",
"inputs": {
"lora_name": "urn:air:sdxl:lora:civitai:1107767@1253442",
"strength_model": 1,
"strength_clip": 1,
"model": ["resource-stack", 0],
"clip": ["resource-stack", 1]
}
},
"resource-stack-2": {
"class_type": "LoraLoader",
"inputs": {
"lora_name": "urn:air:sdxl:lora:civitai:1342708@1516344",
"strength_model": 1,
"strength_clip": 1,
"model": ["resource-stack-1", 0],
"clip": ["resource-stack-1", 1]
}
},
"resource-stack-3": {
"class_type": "LoraLoader",
"inputs": {
"lora_name": "urn:air:sdxl:lora:civitai:122359@135867",
"strength_model": 1.55,
"strength_clip": 1,
"model": ["resource-stack-2", 0],
"clip": ["resource-stack-2", 1]
}
},
"6": {
"class_type": "smZ CLIPTextEncode",
"inputs": {
"text": "masterpiece, best quality, amazing quality, detailed setting, detailed background, 1girl, yunyun (konosuba), nude, red eyes, hair ornament, braid, hair between eyes,low twintails, pink ribbon, bow, hair bow, pussy, frilled skirt, layered skirt, belt, pink thighhighs, (pussy juice), large insertion, vaginal tugging, pussy grip, detailed skin, detailed soles, stretched pussy, feet in stockings, ass, nipples, medium breasts, french kiss, anus, shocked, nervous, penis awe, BREAK Professor\u0027s office, college student, pornographic, 1boy, close eyes, (musscular male, detailed large cock), vaginal sex, college office setting, ass grab, fucking, riding, cowgirl, erotic, side view, deep fucking",
"parser": "comfy",
"text_g": "",
"text_l": "",
"ascore": 2.5,
"width": 0,
"height": 0,
"crop_w": 0,
"crop_h": 0,
"target_width": 0,
"target_height": 0,
"smZ_steps": 1,
"mean_normalization": true,
"multi_conditioning": true,
"use_old_emphasis_implementation": false,
"with_SDXL": false,
"clip": ["resource-stack-3", 1]
},
"_meta": { "title": "Positive" }
},
"7": {
"class_type": "smZ CLIPTextEncode",
"inputs": {
"text": "bad quality,worst quality,worst detail,sketch,censor",
"parser": "comfy",
"text_g": "",
"text_l": "",
"ascore": 2.5,
"width": 0,
"height": 0,
"crop_w": 0,
"crop_h": 0,
"target_width": 0,
"target_height": 0,
"smZ_steps": 1,
"mean_normalization": true,
"multi_conditioning": true,
"use_old_emphasis_implementation": false,
"with_SDXL": false,
"clip": ["resource-stack-3", 1]
},
"_meta": { "title": "Negative" }
},
"20": {
"class_type": "UpscaleModelLoader",
"inputs": { "model_name": "urn:air:other:upscaler:civitai:147759@164821" },
"_meta": { "title": "Load Upscale Model" }
},
"17": {
"class_type": "LoadImage",
"inputs": {
"image": "https://orchestration.civitai.com/v2/consumer/blobs/5KZ6358TW8CNEGPZKD08NVDB30",
"upload": "image"
},
"_meta": { "title": "Image Load" }
},
"19": {
"class_type": "ImageUpscaleWithModel",
"inputs": { "upscale_model": ["20", 0], "image": ["17", 0] },
"_meta": { "title": "Upscale Image (using Model)" }
},
"23": {
"class_type": "ImageScale",
"inputs": {
"upscale_method": "nearest-exact",
"crop": "disabled",
"width": 1280,
"height": 1856,
"image": ["19", 0]
},
"_meta": { "title": "Upscale Image" }
},
"21": {
"class_type": "VAEEncode",
"inputs": { "pixels": ["23", 0], "vae": ["resource-stack", 2] },
"_meta": { "title": "VAE Encode" }
},
"11": {
"class_type": "KSampler",
"inputs": {
"sampler_name": "euler_ancestral",
"scheduler": "normal",
"seed": 2088370631,
"steps": 47,
"cfg": 6.5,
"denoise": 0.3,
"model": ["resource-stack-3", 0],
"positive": ["6", 0],
"negative": ["7", 0],
"latent_image": ["21", 0]
},
"_meta": { "title": "KSampler" }
},
"13": {
"class_type": "VAEDecode",
"inputs": { "samples": ["11", 0], "vae": ["resource-stack", 2] },
"_meta": { "title": "VAE Decode" }
},
"12": {
"class_type": "SaveImage",
"inputs": { "filename_prefix": "ComfyUI", "images": ["13", 0] },
"_meta": { "title": "Save Image" }
},
"extra": {
"airs": [
"urn:air:other:upscaler:civitai:147759@164821",
"urn:air:sdxl:checkpoint:civitai:827184@1410435",
"urn:air:sdxl:lora:civitai:1107767@1253442",
"urn:air:sdxl:lora:civitai:1342708@1516344",
"urn:air:sdxl:lora:civitai:122359@135867"
]
},
"extraMetadata": "{\u0022prompt\u0022:\u0022masterpiece, best quality, amazing quality, detailed setting, detailed background, 1girl, yunyun (konosuba), nude, red eyes, hair ornament, braid, hair between eyes,low twintails, pink ribbon, bow, hair bow, pussy, frilled skirt, layered skirt, belt, pink thighhighs, (pussy juice), large insertion, vaginal tugging, pussy grip, detailed skin, detailed soles, stretched pussy, feet in stockings, ass, nipples, medium breasts, french kiss, anus, shocked, nervous, penis awe, BREAK Professor\u0027s office, college student, pornographic, 1boy, close eyes, (musscular male, detailed large cock), vaginal sex, college office setting, ass grab, fucking, riding, cowgirl, erotic, side view, deep fucking\u0022,\u0022negativePrompt\u0022:\u0022bad quality,worst quality,worst detail,sketch,censor\u0022,\u0022steps\u0022:47,\u0022cfgScale\u0022:6.5,\u0022sampler\u0022:\u0022euler_ancestral\u0022,\u0022workflowId\u0022:\u0022img2img-hires\u0022,\u0022resources\u0022:[{\u0022modelVersionId\u0022:1410435,\u0022strength\u0022:1},{\u0022modelVersionId\u0022:1410435,\u0022strength\u0022:1},{\u0022modelVersionId\u0022:1253442,\u0022strength\u0022:1},{\u0022modelVersionId\u0022:1516344,\u0022strength\u0022:1},{\u0022modelVersionId\u0022:135867,\u0022strength\u0022:1.55}],\u0022remixOfId\u0022:32140259}"
}

View File

@@ -12,18 +12,10 @@ holographic skin, holofoil glitter, faint, glowing, ethereal, neon hair, glowing
Negative prompt: score_6, score_5, score_4, bad quality, worst quality, worst detail, sketch, censorship, furry, window, headphones,
Steps: 30, Sampler: Euler a, Schedule type: Simple, CFG scale: 7, Seed: 1405717592, Size: 832x1216, Model hash: 1ad6ca7f70, Model: waiNSFWIllustrious_v100, Denoising strength: 0.35, Hires CFG Scale: 5, Hires upscale: 1.3, Hires steps: 20, Hires upscaler: 4x-AnimeSharp, Lora hashes: "ck-shadow-circuit-IL: 88e247aa8c3d, ck-nc-cyberpunk-IL-000011: 935e6755554c, ck-neon-retrowave-IL: edafb9df7da1, ck-yoneyama-mai-IL-000014: 1b9305692a2e", Version: f2.0.1v1.10.1-1.10.1, Diffusion in Low Bits: Automatic (fp16 LoRA)
masterpiece, best quality,high quality, newest, highres,8K,HDR,absurdres, 1girl, solo, steampunk aesthetic, mechanical monocle, long trench coat, leather gloves, brass accessories, intricate clockwork rifle, aiming at viewer, wind-blown scarf, high boots, fingerless gloves, pocket watch, corset, brown and gold color scheme, industrial cityscape, smoke and gears, atmospheric lighting, depth of field, dynamic pose, dramatic composition, detailed background, foreshortening, detailed background, dynamic pose, dynamic composition,dutch angle, detailed backgroud,foreshortening,blurry edges <lora:iLLMythAn1m3Style:1> MythAn1m3
Negative prompt: worst quality, normal quality, anatomical nonsense, bad anatomy,interlocked fingers, extra fingers,watermark,simple background, loli,
Steps: 35, Sampler: DPM++ 2M SDE, Schedule type: Karras, CFG scale: 4, Seed: 3537159932, Size: 1072x1376, Model hash: c364bbdae9, Model: waiNSFWIllustrious_v110, Clip skip: 2, ADetailer model: face_yolov8n.pt, ADetailer confidence: 0.3, ADetailer dilate erode: 4, ADetailer mask blur: 4, ADetailer denoising strength: 0.4, ADetailer inpaint only masked: True, ADetailer inpaint padding: 32, ADetailer version: 24.8.0, Lora hashes: "iLLMythAn1m3Style: d3480076057b", Version: f2.0.1v1.10.1-previous-519-g44eb4ea8, Module 1: sdxl.vae
Masterpiece, best quality, high quality, newest, highres, 8K, HDR, absurdres, 1girl, solo, futuristic warrior, sleek exosuit with glowing energy cores, long braided hair flowing behind, gripping a high-tech bow with an energy arrow drawn, standing on a floating platform overlooking a massive space station, planets and nebulae in the distance, soft glow from distant stars, cinematic depth, foreshortening, dynamic pose, dramatic sci-fi lighting.
Negative prompt: worst quality, normal quality, anatomical nonsense, bad anatomy,interlocked fingers, extra fingers,watermark,simple background, loli,
Steps: 20, Sampler: euler_ancestral_karras, CFG scale: 8.0, Seed: 691121152183439, Model: il\waiNSFWIllustrious_v110.safetensors, Model hash: c3688ee04c, Lora_0 Model name: iLLMythAn1m3Style.safetensors, Lora_0 Model hash: ba7a040786, Lora_0 Strength model: 1.0, Lora_0 Strength clip: 1.0, Hashes: {"model": "c3688ee04c", "lora:iLLMythAn1m3Style": "ba7a040786"}
Masterpiece, best quality, high quality, newest, highres, 8K, HDR, absurdres, 1boy, solo, gothic horror, pale vampire lord in regal, intricately detailed robes, crimson eyes glowing under the dim candlelight of a grand but decayed castle hall, holding a silver goblet filled with an unknown substance, a massive stained-glass window shattered behind him, cold mist rolling in, dramatic lighting, dark yet elegant aesthetic, foreshortening, cinematic perspective.
Negative prompt: worst quality, normal quality, anatomical nonsense, bad anatomy,interlocked fingers, extra fingers,watermark,simple background, loli,
Steps: 20, Sampler: euler_ancestral_karras, CFG scale: 8.0, Seed: 290117945770094, Model: il\waiNSFWIllustrious_v110.safetensors, Model hash: c3688ee04c, Lora_0 Model name: iLLMythAn1m3Style.safetensors, Lora_0 Model hash: ba7a040786, Lora_0 Strength model: 0.6, Lora_0 Strength clip: 0.7000000000000001, Hashes: {"model": "c3688ee04c", "lora:iLLMythAn1m3Style": "ba7a040786"}
bo-exposure, An impressionistic oil painting in the style of J.M.W. Turner, depicting a ghostly ship sailing through a sea of swirling golden mist. The waves crash and dissolve into abstract, fiery strokes of orange and deep indigo, blurring the line between ocean and sky. The ship appears almost ethereal, as if drifting between worlds, lost in the ever-changing tides of memory and myth. The dynamic brushstrokes capture the relentless power of nature and the fleeting essence of time.
Immerse yourself in the enchanting journey, where harmonious transmutation of Bauhaus art unites photographic precision and contemporary illustration, capturing an enthralling blend between vivid abstract nature and urban landscapes. Let your eyes be captivated by a kaleidoscope of rich, deep reds and yellows, entwined with intriguing shades that beckon a somber atmosphere. As your spirit ventures along this haunting path, witness the mysterious, high-angle perspective dominated by scattered clouds granting you a mesmerizing glimpse into the ever-transforming realm of metamorphosing environments. ,<lora:flux/fav/ck-charcoal-drawing-000014.safetensors:1.0:1.0>
Negative prompt:
Steps: 25, Sampler: DPM++ 2M, CFG scale: 3.5, Seed: 1024252061321625, Size: 832x1216, Clip skip: 1, Model hash: , Model: flux_dev, Hashes: {"model": ""}, Version: ComfyUI
Steps: 20, Sampler: Euler, CFG scale: 3.5, Seed: 885491426361006, Size: 832x1216, Model hash: 4610115bb0, Model: flux_dev, Hashes: {"LORA:flux/fav/ck-charcoal-drawing-000014.safetensors": "34d36c17c1", "model": "4610115bb0"}, Version: ComfyUI
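The prompt in the second fixture embeds LoRAs inline as `<lora:path:model_strength:clip_strength>` tags. A minimal sketch of pulling those tags out of a prompt string (`extractLoraTags` is an illustrative name, not part of this codebase; the regex assumes paths contain no `:` or `>`):

```javascript
// Extract <lora:path:strength[:clip_strength]> tags from a prompt string.
function extractLoraTags(prompt) {
  const re = /<lora:([^:>]+):([\d.]+)(?::([\d.]+))?>/g;
  const tags = [];
  let m;
  while ((m = re.exec(prompt)) !== null) {
    tags.push({
      path: m[1],
      strength: parseFloat(m[2]),
      // The clip strength is optional; fall back to null when absent.
      clipStrength: m[3] !== undefined ? parseFloat(m[3]) : null,
    });
  }
  return tags;
}
```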

refs/meta_format.txt Normal file

@@ -0,0 +1,3 @@
In this ethereal masterpiece, metallic sculptures juxtapose effortlessly against a subtle backdrop of misty neutral hues. Exquisite curvatures and geometric shapes converge harmoniously, creating an illuminating realm of polished metallic surfaces. Shimmering copper, gleaming silver, and lustrous gold hues dance in perfect balance, highlighting the intricate play of light and shadow cast upon these celestial forms. A halo of diffused radiance envelops each piece, enhancing their textured depths and metallic brilliance while allowing delicate details to emerge from obscurity. The composition conveys a serene yet mesmerizing atmosphere, as if suspended in a dreamlike limbo between reality and fantasy. The tantalizing interplay of colors within this transcendent realm creates a profound sense of depth and grandeur that invites the viewer into an enchanting voyage through abstract metallic beauty. This captivating artwork evokes emotions of boundless curiosity and reverence reminiscent of the timeless works by artists such as Giorgio de Chirico or Paul Klee, while asserting a unique, modern artistic sensibility. With every observation, a new nuance unfolds, as if a never-ending story waiting to be discovered through the lens of metallic artistry.
Negative prompt:
Steps: 25, Sampler: dpmpp_2m_sgm_uniform, Seed: 471889513588087, Model: Fluxmania V5P.safetensors, Model hash: 8ae0583b06, VAE: ae.sft, VAE hash: afc8e28272, Lora_0 Model name: ArtVador I.safetensors, Lora_0 Model hash: 08f7133a58, Lora_0 Strength model: 0.65, Lora_0 Strength clip: 0.65, Lora_1 Model name: Kaoru Yamada.safetensors, Lora_1 Model hash: d4893f7202, Lora_1 Strength model: 0.75, Lora_1 Strength clip: 0.75, Hashes: {"model": "8ae0583b06", "vae": "afc8e28272", "lora:ArtVador I": "08f7133a58", "lora:Kaoru Yamada": "d4893f7202"}
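The parameters line above packs per-LoRA fields into `Lora_<n>`-prefixed keys plus a trailing `Hashes` JSON object. A minimal sketch of splitting such a line apart (`parseImageSaverParams` is an illustrative name, not the plugin's API; the regexes assume the single-line, comma-separated format shown above):

```javascript
// Extract the Hashes JSON and per-LoRA fields from an Image Saver
// style parameters line.
function parseImageSaverParams(line) {
  const result = { loras: {}, hashes: {} };
  // The Hashes value is a flat JSON object; grab it before splitting on commas.
  const hashesMatch = line.match(/Hashes:\s*(\{.*?\})/);
  if (hashesMatch) {
    try { result.hashes = JSON.parse(hashesMatch[1]); } catch { /* ignore malformed JSON */ }
  }
  // Lora_<n> keys carry the model name, hash, and strengths.
  const loraRe = /Lora_(\d+) (Model name|Model hash|Strength model|Strength clip):\s*([^,]+)/g;
  let m;
  while ((m = loraRe.exec(line)) !== null) {
    const idx = m[1];
    result.loras[idx] = result.loras[idx] || {};
    result.loras[idx][m[2]] = m[3].trim();
  }
  return result;
}
```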


@@ -99,6 +99,7 @@
width: 100%;
background: var(--lora-surface);
margin-bottom: var(--space-2);
overflow: hidden; /* Ensure metadata panel is contained */
}
.media-wrapper:last-child {
@@ -573,6 +574,59 @@
flex: 2; /* Allocate more space for the base model */
}
/* Base model display and editing styles */
.base-model-display {
display: flex;
align-items: center;
position: relative;
}
.base-model-content {
padding: 2px 4px;
border-radius: var(--border-radius-xs);
border: 1px solid transparent;
color: var(--text-color);
flex: 1;
}
.edit-base-model-btn {
background: transparent;
border: none;
color: var(--text-color);
opacity: 0;
cursor: pointer;
padding: 2px 5px;
border-radius: var(--border-radius-xs);
transition: all 0.2s ease;
margin-left: var(--space-1);
}
.edit-base-model-btn.visible,
.base-model-display:hover .edit-base-model-btn {
opacity: 0.5;
}
.edit-base-model-btn:hover {
opacity: 0.8 !important;
background: rgba(0, 0, 0, 0.05);
}
[data-theme="dark"] .edit-base-model-btn:hover {
background: rgba(255, 255, 255, 0.05);
}
.base-model-selector {
width: 100%;
padding: 3px 5px;
background: var(--bg-color);
border: 1px solid var(--lora-accent);
border-radius: var(--border-radius-xs);
color: var(--text-color);
font-size: 0.9em;
outline: none;
margin-right: var(--space-1);
}
.size-wrapper {
flex: 1;
border-left: 1px solid var(--lora-border);
@@ -1030,4 +1084,172 @@
/* Make sure media wrapper maintains position: relative for absolute positioning of children */
.carousel .media-wrapper {
position: relative;
}
/* Image Metadata Panel Styles */
.image-metadata-panel {
position: absolute;
bottom: 0;
left: 0;
right: 0;
background: var(--bg-color);
border-top: 1px solid var(--border-color);
padding: var(--space-2);
transform: translateY(100%);
transition: transform 0.3s cubic-bezier(0.175, 0.885, 0.32, 1.275), opacity 0.25s ease;
z-index: 5;
max-height: 50%; /* Reduced to take less space */
overflow-y: auto;
box-shadow: 0 -2px 8px rgba(0, 0, 0, 0.1);
opacity: 0;
pointer-events: none;
}
/* Show metadata panel only on hover */
.media-wrapper:hover .image-metadata-panel {
transform: translateY(0);
opacity: 0.98;
pointer-events: auto;
}
/* Adjust to dark theme */
[data-theme="dark"] .image-metadata-panel {
background: var(--card-bg);
box-shadow: 0 -2px 8px rgba(0, 0, 0, 0.3);
}
.metadata-content {
display: flex;
flex-direction: column;
gap: 10px;
}
/* Styling for parameters tags */
.params-tags {
display: flex;
flex-wrap: wrap;
gap: 6px;
margin-bottom: var(--space-1);
padding-bottom: var(--space-1);
border-bottom: 1px solid var(--lora-border);
}
.param-tag {
display: inline-flex;
align-items: center;
background: var(--lora-surface);
border: 1px solid var(--lora-border);
border-radius: var(--border-radius-xs);
padding: 2px 6px;
font-size: 0.8em;
line-height: 1.2;
white-space: nowrap;
}
.param-tag .param-name {
font-weight: 600;
color: var(--text-color);
margin-right: 4px;
opacity: 0.8;
}
.param-tag .param-value {
color: var(--lora-accent);
}
/* Special styling for prompt row */
.metadata-row.prompt-row {
flex-direction: column;
padding-top: 0;
}
.metadata-row.prompt-row + .metadata-row.prompt-row {
margin-top: var(--space-2);
}
.metadata-label {
font-weight: 600;
color: var(--text-color);
opacity: 0.8;
font-size: 0.85em;
display: block;
margin-bottom: 4px;
}
.metadata-prompt-wrapper {
position: relative;
background: var(--lora-surface);
border: 1px solid var(--lora-border);
border-radius: var(--border-radius-xs);
padding: 6px 30px 6px 8px;
margin-top: 2px;
max-height: 80px; /* Reduced from 120px */
overflow-y: auto;
word-break: break-word;
width: 100%;
box-sizing: border-box;
}
.metadata-prompt {
color: var(--text-color);
font-family: monospace;
font-size: 0.85em;
white-space: pre-wrap;
}
.copy-prompt-btn {
position: absolute;
top: 6px;
right: 6px;
background: transparent;
border: none;
color: var(--text-color);
opacity: 0.6;
cursor: pointer;
padding: 3px;
transition: all 0.2s ease;
}
.copy-prompt-btn:hover {
opacity: 1;
color: var(--lora-accent);
}
/* Scrollbar styling for metadata panel */
.image-metadata-panel::-webkit-scrollbar {
width: 6px;
}
.image-metadata-panel::-webkit-scrollbar-track {
background: transparent;
}
.image-metadata-panel::-webkit-scrollbar-thumb {
background-color: var(--border-color);
border-radius: 3px;
}
/* For Firefox */
.image-metadata-panel {
scrollbar-width: thin;
scrollbar-color: var(--border-color) transparent;
}
/* No metadata message styling */
.no-metadata-message {
display: flex;
align-items: center;
justify-content: center;
padding: var(--space-2);
color: var(--text-color);
opacity: 0.7;
text-align: center;
font-style: italic;
gap: 8px;
}
.no-metadata-message i {
font-size: 1.1em;
color: var(--lora-accent);
opacity: 0.8;
}


@@ -129,7 +129,14 @@
justify-content: center;
}
.recipe-preview-container img {
.recipe-preview-container img,
.recipe-preview-container video {
max-width: 100%;
max-height: 100%;
object-fit: contain;
}
.recipe-preview-media {
max-width: 100%;
max-height: 100%;
object-fit: contain;
@@ -340,9 +347,19 @@
border-radius: var(--border-radius-xs);
overflow: hidden;
background: var(--bg-color);
display: flex;
align-items: center;
justify-content: center;
}
.recipe-lora-thumbnail img {
.recipe-lora-thumbnail img,
.recipe-lora-thumbnail video {
width: 100%;
height: 100%;
object-fit: cover;
}
.thumbnail-video {
width: 100%;
height: 100%;
object-fit: cover;


@@ -1,7 +1,7 @@
import { showToast } from '../utils/uiHelpers.js';
import { state } from '../state/index.js';
import { modalManager } from '../managers/ModalManager.js';
import { NSFW_LEVELS } from '../utils/constants.js';
import { NSFW_LEVELS, BASE_MODELS } from '../utils/constants.js';
export function showLoraModal(lora) {
const escapedWords = lora.civitai?.trainedWords?.length ?
@@ -43,7 +43,12 @@ export function showLoraModal(lora) {
<div class="info-item base-size">
<div class="base-wrapper">
<label>Base Model</label>
<span>${lora.base_model || 'N/A'}</span>
<div class="base-model-display">
<span class="base-model-content">${lora.base_model || 'N/A'}</span>
<button class="edit-base-model-btn" title="Edit base model">
<i class="fas fa-pencil-alt"></i>
</button>
</div>
</div>
<div class="size-wrapper">
<label>Size</label>
@@ -124,6 +129,7 @@ export function showLoraModal(lora) {
setupTagTooltip();
setupTriggerWordsEditMode();
setupModelNameEditing();
setupBaseModelEditing();
// If we have a model ID but no description, fetch it
if (lora.civitai?.modelId && !lora.modelDescription) {
@@ -202,61 +208,165 @@ function renderShowcaseContent(images) {
nsfwText = "R-rated Content";
}
if (img.type === 'video') {
return `
<div class="media-wrapper ${shouldBlur ? 'nsfw-media-wrapper' : ''}" style="padding-bottom: ${heightPercent}%">
${shouldBlur ? `
<button class="toggle-blur-btn showcase-toggle-btn" title="Toggle blur">
<i class="fas fa-eye"></i>
</button>
` : ''}
<video controls autoplay muted loop crossorigin="anonymous"
referrerpolicy="no-referrer" data-src="${img.url}"
class="lazy ${shouldBlur ? 'blurred' : ''}">
<source data-src="${img.url}" type="video/mp4">
Your browser does not support video playback
</video>
${shouldBlur ? `
<div class="nsfw-overlay">
<div class="nsfw-warning">
<p>${nsfwText}</p>
<button class="show-content-btn">Show</button>
</div>
</div>
` : ''}
</div>
`;
}
return `
<div class="media-wrapper ${shouldBlur ? 'nsfw-media-wrapper' : ''}" style="padding-bottom: ${heightPercent}%">
${shouldBlur ? `
<button class="toggle-blur-btn showcase-toggle-btn" title="Toggle blur">
<i class="fas fa-eye"></i>
</button>
` : ''}
<img data-src="${img.url}"
alt="Preview"
crossorigin="anonymous"
referrerpolicy="no-referrer"
width="${img.width}"
height="${img.height}"
class="lazy ${shouldBlur ? 'blurred' : ''}">
${shouldBlur ? `
<div class="nsfw-overlay">
<div class="nsfw-warning">
<p>${nsfwText}</p>
<button class="show-content-btn">Show</button>
// Extract metadata from the image
const meta = img.meta || {};
const prompt = meta.prompt || '';
const negativePrompt = meta.negative_prompt || meta.negativePrompt || '';
const size = meta.Size || `${img.width}x${img.height}`;
const seed = meta.seed || '';
const model = meta.Model || '';
const steps = meta.steps || '';
const sampler = meta.sampler || '';
const cfgScale = meta.cfgScale || '';
const clipSkip = meta.clipSkip || '';
// Check if we have any meaningful generation parameters
const hasParams = seed || model || steps || sampler || cfgScale || clipSkip;
const hasPrompts = prompt || negativePrompt;
// If no metadata available, show a message
if (!hasParams && !hasPrompts) {
const metadataPanel = `
<div class="image-metadata-panel">
<div class="metadata-content">
<div class="no-metadata-message">
<i class="fas fa-info-circle"></i>
<span>No generation parameters available</span>
</div>
</div>
</div>
`;
if (img.type === 'video') {
return generateVideoWrapper(img, heightPercent, shouldBlur, nsfwText, metadataPanel);
}
return generateImageWrapper(img, heightPercent, shouldBlur, nsfwText, metadataPanel);
}
// Create a data attribute with the prompt for copying instead of trying to handle it in the onclick
// This avoids issues with quotes and special characters
const promptIndex = Math.random().toString(36).substring(2, 15);
const negPromptIndex = Math.random().toString(36).substring(2, 15);
// Create parameter tags HTML
const paramTags = `
<div class="params-tags">
${size ? `<div class="param-tag"><span class="param-name">Size:</span><span class="param-value">${size}</span></div>` : ''}
${seed ? `<div class="param-tag"><span class="param-name">Seed:</span><span class="param-value">${seed}</span></div>` : ''}
${model ? `<div class="param-tag"><span class="param-name">Model:</span><span class="param-value">${model}</span></div>` : ''}
${steps ? `<div class="param-tag"><span class="param-name">Steps:</span><span class="param-value">${steps}</span></div>` : ''}
${sampler ? `<div class="param-tag"><span class="param-name">Sampler:</span><span class="param-value">${sampler}</span></div>` : ''}
${cfgScale ? `<div class="param-tag"><span class="param-name">CFG:</span><span class="param-value">${cfgScale}</span></div>` : ''}
${clipSkip ? `<div class="param-tag"><span class="param-name">Clip Skip:</span><span class="param-value">${clipSkip}</span></div>` : ''}
</div>
`;
// Metadata panel HTML
const metadataPanel = `
<div class="image-metadata-panel">
<div class="metadata-content">
${hasParams ? paramTags : ''}
${!hasParams && !hasPrompts ? `
<div class="no-metadata-message">
<i class="fas fa-info-circle"></i>
<span>No generation parameters available</span>
</div>
` : ''}
${prompt ? `
<div class="metadata-row prompt-row">
<span class="metadata-label">Prompt:</span>
<div class="metadata-prompt-wrapper">
<div class="metadata-prompt">${prompt}</div>
<button class="copy-prompt-btn" data-prompt-index="${promptIndex}">
<i class="fas fa-copy"></i>
</button>
</div>
</div>
<div class="hidden-prompt" id="prompt-${promptIndex}" style="display:none;">${prompt}</div>
` : ''}
${negativePrompt ? `
<div class="metadata-row prompt-row">
<span class="metadata-label">Negative Prompt:</span>
<div class="metadata-prompt-wrapper">
<div class="metadata-prompt">${negativePrompt}</div>
<button class="copy-prompt-btn" data-prompt-index="${negPromptIndex}">
<i class="fas fa-copy"></i>
</button>
</div>
</div>
<div class="hidden-prompt" id="prompt-${negPromptIndex}" style="display:none;">${negativePrompt}</div>
` : ''}
</div>
</div>
`;
if (img.type === 'video') {
return generateVideoWrapper(img, heightPercent, shouldBlur, nsfwText, metadataPanel);
}
return generateImageWrapper(img, heightPercent, shouldBlur, nsfwText, metadataPanel);
}).join('')}
</div>
</div>
`;
}
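The prompt indexes above come from `Math.random().toString(36)`, which can in principle collide across showcase images. A module-level counter is a simpler collision-free alternative; a minimal sketch (`nextPromptId` is an illustrative name, not part of this codebase):

```javascript
// Collision-free DOM ids for prompt copy targets: a monotonically
// increasing counter rendered in base 36.
let promptIdCounter = 0;
function nextPromptId() {
  promptIdCounter += 1;
  return `p${promptIdCounter.toString(36)}`;
}
```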
// Helper function to generate video wrapper HTML
function generateVideoWrapper(img, heightPercent, shouldBlur, nsfwText, metadataPanel) {
return `
<div class="media-wrapper ${shouldBlur ? 'nsfw-media-wrapper' : ''}" style="padding-bottom: ${heightPercent}%">
${shouldBlur ? `
<button class="toggle-blur-btn showcase-toggle-btn" title="Toggle blur">
<i class="fas fa-eye"></i>
</button>
` : ''}
<video controls autoplay muted loop crossorigin="anonymous"
referrerpolicy="no-referrer" data-src="${img.url}"
class="lazy ${shouldBlur ? 'blurred' : ''}">
<source data-src="${img.url}" type="video/mp4">
Your browser does not support video playback
</video>
${shouldBlur ? `
<div class="nsfw-overlay">
<div class="nsfw-warning">
<p>${nsfwText}</p>
<button class="show-content-btn">Show</button>
</div>
</div>
` : ''}
${metadataPanel}
</div>
`;
}
// Helper function to generate image wrapper HTML
function generateImageWrapper(img, heightPercent, shouldBlur, nsfwText, metadataPanel) {
return `
<div class="media-wrapper ${shouldBlur ? 'nsfw-media-wrapper' : ''}" style="padding-bottom: ${heightPercent}%">
${shouldBlur ? `
<button class="toggle-blur-btn showcase-toggle-btn" title="Toggle blur">
<i class="fas fa-eye"></i>
</button>
` : ''}
<img data-src="${img.url}"
alt="Preview"
crossorigin="anonymous"
referrerpolicy="no-referrer"
width="${img.width}"
height="${img.height}"
class="lazy ${shouldBlur ? 'blurred' : ''}">
${shouldBlur ? `
<div class="nsfw-overlay">
<div class="nsfw-warning">
<p>${nsfwText}</p>
<button class="show-content-btn">Show</button>
</div>
</div>
` : ''}
${metadataPanel}
</div>
`;
}
// New function to handle tab switching
function setupTabSwitching() {
const tabButtons = document.querySelectorAll('.showcase-tabs .tab-btn');
@@ -620,13 +730,74 @@ export function toggleShowcase(element) {
// Initialize NSFW content blur toggle handlers
initNsfwBlurHandlers(carousel);
// Initialize metadata panel interaction handlers
initMetadataPanelHandlers(carousel);
} else {
const count = carousel.querySelectorAll('.media-wrapper').length;
indicator.textContent = `Scroll or click to show ${count} examples`;
icon.classList.replace('fa-chevron-up', 'fa-chevron-down');
// Make sure any open metadata panels get closed
const carouselContainer = carousel.querySelector('.carousel-container');
if (carouselContainer) {
carouselContainer.style.height = '0';
setTimeout(() => {
carouselContainer.style.height = '';
}, 300);
}
}
}
// Function to initialize metadata panel interactions
function initMetadataPanelHandlers(container) {
// Find all media wrappers
const mediaWrappers = container.querySelectorAll('.media-wrapper');
mediaWrappers.forEach(wrapper => {
// Get the metadata panel
const metadataPanel = wrapper.querySelector('.image-metadata-panel');
if (!metadataPanel) return;
// Prevent events from the metadata panel from bubbling
metadataPanel.addEventListener('click', (e) => {
e.stopPropagation();
});
// Handle copy prompt button clicks
const copyBtns = metadataPanel.querySelectorAll('.copy-prompt-btn');
copyBtns.forEach(copyBtn => {
const promptIndex = copyBtn.dataset.promptIndex;
const promptElement = wrapper.querySelector(`#prompt-${promptIndex}`);
copyBtn.addEventListener('click', async (e) => {
e.stopPropagation(); // Prevent bubbling
if (!promptElement) return;
try {
await navigator.clipboard.writeText(promptElement.textContent);
showToast('Prompt copied to clipboard', 'success');
} catch (err) {
console.error('Copy failed:', err);
showToast('Copy failed', 'error');
}
});
});
// Prevent scrolling in the metadata panel from scrolling the whole modal
metadataPanel.addEventListener('wheel', (e) => {
const isAtTop = metadataPanel.scrollTop === 0;
const isAtBottom = metadataPanel.scrollHeight - metadataPanel.scrollTop <= metadataPanel.clientHeight + 1; // tolerate subpixel rounding so the bottom is detected reliably
// Only prevent default if scrolling would cause the panel to scroll
if ((e.deltaY < 0 && !isAtTop) || (e.deltaY > 0 && !isAtBottom)) {
e.stopPropagation();
}
}, { passive: true });
});
}
// New function to initialize blur toggle handlers for showcase images/videos
function initNsfwBlurHandlers(container) {
// Handle toggle blur buttons
@@ -1225,3 +1396,169 @@ function setupModelNameEditing() {
}
});
}
// Add save model base model function
window.saveBaseModel = async function(filePath, originalValue) {
const baseModelElement = document.querySelector('.base-model-content');
const newBaseModel = baseModelElement.textContent.trim();
// Only save if the value has actually changed
if (newBaseModel === originalValue) {
return; // No change, no need to save
}
try {
await saveModelMetadata(filePath, { base_model: newBaseModel });
// Update the corresponding lora card's dataset
const loraCard = document.querySelector(`.lora-card[data-filepath="${filePath}"]`);
if (loraCard) {
loraCard.dataset.base_model = newBaseModel;
}
showToast('Base model updated successfully', 'success');
} catch (error) {
showToast('Failed to update base model', 'error');
}
};
// New function to handle base model editing
function setupBaseModelEditing() {
const baseModelContent = document.querySelector('.base-model-content');
const editBtn = document.querySelector('.edit-base-model-btn');
if (!baseModelContent || !editBtn) return;
// Show edit button on hover
const baseModelDisplay = document.querySelector('.base-model-display');
baseModelDisplay.addEventListener('mouseenter', () => {
editBtn.classList.add('visible');
});
baseModelDisplay.addEventListener('mouseleave', () => {
if (!baseModelDisplay.classList.contains('editing')) {
editBtn.classList.remove('visible');
}
});
// Handle edit button click
editBtn.addEventListener('click', () => {
baseModelDisplay.classList.add('editing');
// Store the original value to check for changes later
const originalValue = baseModelContent.textContent.trim();
// Create dropdown selector to replace the base model content
const currentValue = originalValue;
const dropdown = document.createElement('select');
dropdown.className = 'base-model-selector';
// Flag to track if a change was made
let valueChanged = false;
// Add options from BASE_MODELS constants
const baseModelCategories = {
'Stable Diffusion 1.x': [BASE_MODELS.SD_1_4, BASE_MODELS.SD_1_5, BASE_MODELS.SD_1_5_LCM, BASE_MODELS.SD_1_5_HYPER],
'Stable Diffusion 2.x': [BASE_MODELS.SD_2_0, BASE_MODELS.SD_2_1],
'Stable Diffusion 3.x': [BASE_MODELS.SD_3, BASE_MODELS.SD_3_5, BASE_MODELS.SD_3_5_MEDIUM, BASE_MODELS.SD_3_5_LARGE, BASE_MODELS.SD_3_5_LARGE_TURBO],
'SDXL': [BASE_MODELS.SDXL, BASE_MODELS.SDXL_LIGHTNING, BASE_MODELS.SDXL_HYPER],
'Video Models': [BASE_MODELS.SVD, BASE_MODELS.WAN_VIDEO, BASE_MODELS.HUNYUAN_VIDEO],
'Other Models': [
BASE_MODELS.FLUX_1_D, BASE_MODELS.FLUX_1_S, BASE_MODELS.AURAFLOW,
BASE_MODELS.PIXART_A, BASE_MODELS.PIXART_E, BASE_MODELS.HUNYUAN_1,
BASE_MODELS.LUMINA, BASE_MODELS.KOLORS, BASE_MODELS.NOOBAI,
BASE_MODELS.ILLUSTRIOUS, BASE_MODELS.PONY, BASE_MODELS.UNKNOWN
]
};
// Create option groups for better organization
Object.entries(baseModelCategories).forEach(([category, models]) => {
const group = document.createElement('optgroup');
group.label = category;
models.forEach(model => {
const option = document.createElement('option');
option.value = model;
option.textContent = model;
option.selected = model === currentValue;
group.appendChild(option);
});
dropdown.appendChild(group);
});
// Replace content with dropdown
baseModelContent.style.display = 'none';
baseModelDisplay.insertBefore(dropdown, editBtn);
// Hide edit button during editing
editBtn.style.display = 'none';
// Focus the dropdown
dropdown.focus();
// Handle dropdown change
dropdown.addEventListener('change', function() {
const selectedModel = this.value;
baseModelContent.textContent = selectedModel;
// Mark that a change was made if the value differs from original
if (selectedModel !== originalValue) {
valueChanged = true;
} else {
valueChanged = false;
}
});
// Function to save changes and exit edit mode
const saveAndExit = function() {
// Check if dropdown still exists and remove it
if (dropdown && dropdown.parentNode === baseModelDisplay) {
baseModelDisplay.removeChild(dropdown);
}
// Show the content and edit button
baseModelContent.style.display = '';
editBtn.style.display = '';
// Remove editing class
baseModelDisplay.classList.remove('editing');
// Only save if the value has actually changed
if (valueChanged || baseModelContent.textContent.trim() !== originalValue) {
// Get file path for saving
const filePath = document.querySelector('#loraModal .modal-content')
.querySelector('.file-path').textContent +
document.querySelector('#loraModal .modal-content')
.querySelector('#file-name').textContent + '.safetensors';
// Save the changes, passing the original value for comparison
saveBaseModel(filePath, originalValue);
}
// Remove this event listener
document.removeEventListener('click', outsideClickHandler);
};
// Handle outside clicks to save and exit
const outsideClickHandler = function(e) {
// If click is outside the dropdown and base model display
if (!baseModelDisplay.contains(e.target)) {
saveAndExit();
}
};
// Add delayed event listener for outside clicks
setTimeout(() => {
document.addEventListener('click', outsideClickHandler);
}, 0);
// Also handle dropdown blur event
dropdown.addEventListener('blur', function(e) {
// Only save if the related target is not the edit button or inside the baseModelDisplay
if (!baseModelDisplay.contains(e.relatedTarget)) {
saveAndExit();
}
});
});
}


@@ -107,8 +107,33 @@ class RecipeModal {
const imageUrl = recipe.file_url ||
(recipe.file_path ? `/loras_static/root1/preview/${recipe.file_path.split('/').pop()}` :
'/loras_static/images/no-preview.png');
modalImage.src = imageUrl;
modalImage.alt = recipe.title || 'Recipe Preview';
// Check if the file is a video (mp4)
const isVideo = imageUrl.toLowerCase().endsWith('.mp4');
// Replace the image element with appropriate media element
const mediaContainer = modalImage.parentElement;
mediaContainer.innerHTML = '';
if (isVideo) {
const videoElement = document.createElement('video');
videoElement.id = 'recipeModalVideo';
videoElement.src = imageUrl;
videoElement.controls = true;
videoElement.autoplay = false;
videoElement.loop = true;
videoElement.muted = true;
videoElement.className = 'recipe-preview-media';
videoElement.setAttribute('aria-label', recipe.title || 'Recipe Preview'); // <video> has no alt attribute
mediaContainer.appendChild(videoElement);
} else {
const imgElement = document.createElement('img');
imgElement.id = 'recipeModalImage';
imgElement.src = imageUrl;
imgElement.className = 'recipe-preview-media';
imgElement.alt = recipe.title || 'Recipe Preview';
mediaContainer.appendChild(imgElement);
}
}
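The `.endsWith('.mp4')` check above fails when the preview URL carries a query string. A hedged alternative that tests the URL pathname instead (`isVideoUrl` is an illustrative name; `.webm` is included as an assumption, since the diff itself only handles `.mp4`):

```javascript
// Decide whether a preview URL points at a video by inspecting the
// pathname, so query strings like "clip.mp4?width=450" still match.
function isVideoUrl(url) {
  let pathname;
  try {
    // A fixed base lets relative paths like "/preview/a.mp4" parse too.
    pathname = new URL(url, 'http://localhost').pathname;
  } catch {
    pathname = url; // fall back to the raw string on unparsable input
  }
  return /\.(mp4|webm)$/i.test(pathname);
}
```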
// Set generation parameters
@@ -212,10 +237,18 @@ class RecipeModal {
<i class="fas fa-exclamation-triangle"></i> Not in Library
</div>`;
// Check if preview is a video
const isPreviewVideo = lora.preview_url && lora.preview_url.toLowerCase().endsWith('.mp4');
const previewMedia = isPreviewVideo ?
`<video class="thumbnail-video" autoplay loop muted playsinline>
<source src="${lora.preview_url}" type="video/mp4">
</video>` :
`<img src="${lora.preview_url || '/loras_static/images/no-preview.png'}" alt="LoRA preview">`;
return `
<div class="recipe-lora-item ${existsLocally ? 'exists-locally' : 'missing-locally'}">
<div class="recipe-lora-thumbnail">
<img src="${lora.preview_url || '/loras_static/images/no-preview.png'}" alt="LoRA preview">
${previewMedia}
</div>
<div class="recipe-lora-content">
<div class="recipe-lora-header">


@@ -116,6 +116,7 @@ export class ImportManager {
this.recipeName = '';
this.recipeTags = [];
this.missingLoras = [];
this.downloadableLoRAs = [];
// Reset import mode to upload
this.importMode = 'upload';
@@ -1213,4 +1214,4 @@ export class ImportManager {
return true;
}
}
}


@@ -16,8 +16,8 @@
<div class="modal-body">
<!-- Top Section: Preview and Generation Parameters -->
<div class="recipe-top-section">
<div class="recipe-preview-container">
<img id="recipeModalImage" src="" alt="Recipe Preview">
<div class="recipe-preview-container" id="recipePreviewContainer">
<img id="recipeModalImage" src="" alt="Recipe Preview" class="recipe-preview-media">
</div>
<div class="info-section recipe-gen-params">


@@ -35,6 +35,10 @@ app.registerExtension({
// Enable widget serialization
node.serialize_widgets = true;
node.addInput('clip', 'CLIP', {
"shape": 7
});
node.addInput("lora_stack", 'LORA_STACK', {
"shape": 7 // shape 7 marks this as an optional input
});