rephrasing

justumen
2024-09-13 15:41:03 +02:00
parent ac04969679
commit e0abed117e
4 changed files with 174 additions and 13 deletions


@@ -16,9 +16,9 @@ Top-up your Runpod account with minimum 10$ to start.
⚠️ Warning: you pay by the minute, so this is not recommended for testing or learning ComfyUI. Do that locally !!!
Run a cloud GPU only when your workflow is already ready to run.
Advice : take a cheap GPU for testing, downloading models, or setting things up.
To download a checkpoint or anything else, you need to use the terminal.
To download from Huggingface, get a token here: <https://huggingface.co/settings/tokens>.
Here is an example with everything you need for flux dev:
```
huggingface-cli login --token hf_YOUR_TOKEN_HERE
huggingface-cli download black-forest-labs/FLUX.1-dev flux1-dev.safetensors --local-dir /workspace/ComfyUI/models/unet
@@ -26,6 +26,7 @@ huggingface-cli download comfyanonymous/flux_text_encoders clip_l.safetensors --
huggingface-cli download comfyanonymous/flux_text_encoders t5xxl_fp16.safetensors --local-dir /workspace/ComfyUI/models/clip
huggingface-cli download black-forest-labs/FLUX.1-dev ae.safetensors --local-dir /workspace/ComfyUI/models/vae
```
To use Flux you can just drag and drop into your browser the .json from my github repo: `workflows/FLUX_dev_troll.json`, direct link: <https://github.com/justUmen/ComfyUI-BjornulfNodes/blob/main/workflows/FLUX_dev_troll.json>.
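Alternatively, an API-format workflow json like this one can be queued over HTTP. This is a sketch under assumptions: a ComfyUI server running on its default address `127.0.0.1:8188`, and ComfyUI's standard `/prompt` queueing endpoint; the function names here are my own illustration.

```python
import json
import urllib.request

def build_payload(graph):
    # ComfyUI's /prompt endpoint expects the graph wrapped as {"prompt": ...}.
    return json.dumps({"prompt": graph}).encode("utf-8")

def queue_workflow(path, server="http://127.0.0.1:8188"):
    # Load an API-format workflow json and POST it to a running ComfyUI server.
    with open(path) as f:
        graph = json.load(f)
    request = urllib.request.Request(
        server + "/prompt",
        data=build_payload(graph),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)  # server's reply, including the queued prompt id

# queue_workflow("workflows/FLUX_dev_troll.json")  # needs a running server
```

The call is commented out because it needs a live server; `build_payload` alone shows the wire format.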
To download from civitai (get a token here: <https://civitai.com/user/account>), just copy/paste the link of the checkpoint you want to download and use something like this, with your token in the URL:
@@ -65,8 +66,8 @@ wget --content-disposition -P /workspace/ComfyUI/models/checkpoints "https://civ
- **v0.20**: Changes for lobechat save image: include the code of the free VRAM hack + ignore missing image filenames
- **v0.21**: Add a new write text node that also displays the text in the comfyui console (good for debugging)
- **v0.22**: Allow the write text node to use random selection: {hood|helmet} will randomly choose between hood and helmet.
- **v0.23**: Add a new node: Pause, resume or stop workflow.
- **v0.24**: Add a new node: Pause, select input, pick one.
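The v0.22 random-selection syntax can be sketched like this (my own illustration of the `{a|b}` behavior, not the node's actual code):

```python
import random
import re

def expand_choices(text, rng=random):
    # Replace each {a|b|c} group with one randomly chosen option.
    pattern = re.compile(r"\{([^{}|]*(?:\|[^{}|]*)+)\}")
    while True:
        match = pattern.search(text)
        if match is None:
            return text
        choice = rng.choice(match.group(1).split("|"))
        text = text[:match.start()] + choice + text[match.end():]

print(expand_choices("portrait of a troll wearing a {hood|helmet}"))
```

Each call picks independently, so `{hood|helmet}` gives a different garment from run to run.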
# 📝 Nodes descriptions
@@ -103,7 +104,7 @@ General-purpose loop node.
![Loop Texts](screenshots/loop_texts.png)
**Description:**
Cycle through a list of text inputs.
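The cycling behavior amounts to round-robin over the list (a minimal sketch; the example texts are made up):

```python
from itertools import cycle, islice

texts = ["red hat", "blue hat", "green hat"]  # example text inputs
# Each run takes the next text, wrapping around at the end of the list.
runs = list(islice(cycle(texts), 5))
print(runs)
# → ['red hat', 'blue hat', 'green hat', 'red hat', 'blue hat']
```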
## 7 - ♻ Loop Integer
![Loop Integer](screenshots/loop_integer.png)
@@ -185,9 +186,6 @@ Will generate detailed text based on what you give it.
I recommend using `mistral-nemo` if you can run it, but it's up to you. (You might have to tweak the system prompt a bit)
⚠️ Warning : an ollama node that runs for each generation can be heavy on your VRAM. Consider whether you really need it or not.
## 20 - 📹 Video Ping Pong
![Video Ping Pong](screenshots/video_pingpong.png)
@@ -296,7 +294,7 @@ Cut an image from a mask.
Use my TTS server to generate speech from text.
❗ Of course you need to use my TTS server : <https://github.com/justUmen/Bjornulf_XTTS>
After installing it, you NEED to create a link called `speakers` in my ComfyUI custom node folder: `ComfyUI/custom_nodes/Bjornulf_custom_nodes/speakers`
That link must be a link to the folder where you store the voice samples you use for my TTS, like `default.wav`.
If my TTS server is running on port 8020 (you can test it in your browser with the link <http://localhost:8020/tts_stream?language=en&speaker_wav=default&text=Hello>) and the voice samples are good, you can use this node to generate speech from text.
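That test link is just base URL + query parameters; a small sketch building it (port and parameter names taken from the example link above, the function name is mine):

```python
from urllib.parse import urlencode

def tts_stream_url(text, speaker_wav="default", language="en",
                   base="http://localhost:8020"):
    # Same parameters as the browser test link above.
    query = urlencode({"language": language, "speaker_wav": speaker_wav, "text": text})
    return f"{base}/tts_stream?{query}"

print(tts_stream_url("Hello"))
# → http://localhost:8020/tts_stream?language=en&speaker_wav=default&text=Hello
```

`urlencode` also takes care of escaping, so text with spaces or punctuation stays a valid URL.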
### 32 - 🧑📝 Character Description Generator
@@ -308,6 +306,7 @@ Generate a character description based on a json file in the folder `characters`
Make your own json file with your own characters, and use this node to generate a description.
❗ For now it's a very basic node; a lot of things are going to be added and changed !!!
Some details are unusable with some checkpoints; it's very much a work in progress, and the json structure isn't set in stone either.
Some characters are included.
### 33 - ♻ Loop (All Lines from input 🔗 combine by lines)
@@ -336,7 +335,7 @@ Just connect this node with your workflow, it takes an image as input and return
![pause resume stop](screenshots/pause2.png)
![pause resume stop](screenshots/pause3.png)
**Description:**
Automatically pauses the workflow, and rings a bell when it does (plays the provided `bell.m4a` audio file).
You can then manually resume or stop the workflow by clicking the node's buttons.
I use it, for example, with a very long upscaling process: I can check that the input is good before continuing. Sometimes I stop the workflow and restart it with another seed.
@@ -346,7 +345,7 @@ You can connect any type of node to the pause node, above is an example with tex
![pick input](screenshots/pick.png)
**Description:**
Automatically pauses the workflow, and rings a bell when it does (plays the provided `bell.m4a` audio file).
You can then manually select the input you want to use, and resume the workflow with it.
You can connect this node to anything you want; above is an example with IMAGE. But you can pick whatever you want: in this node, `input = output`.


@@ -1,6 +1,6 @@
[project]
name = "bjornulf_custom_nodes"
description = "Nodes: Ollama, Text to Speech, Combine Texts, Random Texts, Save image for Bjornulf LobeChat, Text with random Seed, Random line from input, Combine images, Image to grayscale (black & white), Remove image Transparency (alpha), Resize Image, ..."
version = "0.24"
license = {file = "LICENSE"}


@@ -0,0 +1,162 @@
{
"5": {
"inputs": {
"width": 1024,
"height": 1024,
"batch_size": 1
},
"class_type": "EmptyLatentImage",
"_meta": {
"title": "Empty Latent Image"
}
},
"6": {
"inputs": {
"text": "photography of a troll in a swamp,\nred witch hat,\nblue pants,\nyellow shirt,\nblack hair,\ngreen skin,\nwearing a watch,\nsnake behind him,\nskull on belt buckle",
"clip": [
"26:0",
0
]
},
"class_type": "CLIPTextEncode",
"_meta": {
"title": "CLIP Text Encode (Prompt)"
}
},
"8": {
"inputs": {
"samples": [
"27:4",
0
],
"vae": [
"26:2",
0
]
},
"class_type": "VAEDecode",
"_meta": {
"title": "VAE Decode"
}
},
"9": {
"inputs": {
"filename_prefix": "ComfyUI",
"images": [
"8",
0
]
},
"class_type": "SaveImage",
"_meta": {
"title": "Save Image"
}
},
"26:0": {
"inputs": {
"clip_name1": "t5xxl_fp16.safetensors",
"clip_name2": "clip_l.safetensors",
"type": "flux"
},
"class_type": "DualCLIPLoader",
"_meta": {
"title": "DualCLIPLoader"
}
},
"26:1": {
"inputs": {
"unet_name": "flux1-dev.safetensors",
"weight_dtype": "default"
},
"class_type": "UNETLoader",
"_meta": {
"title": "Load Diffusion Model"
}
},
"26:2": {
"inputs": {
"vae_name": "ae.safetensors"
},
"class_type": "VAELoader",
"_meta": {
"title": "Load VAE"
}
},
"27:0": {
"inputs": {
"sampler_name": "euler"
},
"class_type": "KSamplerSelect",
"_meta": {
"title": "KSamplerSelect"
}
},
"27:1": {
"inputs": {
"noise_seed": 605276574941494
},
"class_type": "RandomNoise",
"_meta": {
"title": "RandomNoise"
}
},
"27:2": {
"inputs": {
"scheduler": "simple",
"steps": 20,
"denoise": 1,
"model": [
"26:1",
0
]
},
"class_type": "BasicScheduler",
"_meta": {
"title": "BasicScheduler"
}
},
"27:3": {
"inputs": {
"model": [
"26:1",
0
],
"conditioning": [
"6",
0
]
},
"class_type": "BasicGuider",
"_meta": {
"title": "BasicGuider"
}
},
"27:4": {
"inputs": {
"noise": [
"27:1",
0
],
"guider": [
"27:3",
0
],
"sampler": [
"27:0",
0
],
"sigmas": [
"27:2",
0
],
"latent_image": [
"5",
0
]
},
"class_type": "SamplerCustomAdvanced",
"_meta": {
"title": "SamplerCustomAdvanced"
}
}
}
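The file above is in ComfyUI's API format: each key is a node id, and an input given as a two-element list like `["26:2", 0]` links to output 0 of node `"26:2"`. A small sketch walking such a graph (the inline graph is a trimmed stand-in for the real file; `upstream_links` is my own helper name):

```python
import json

# Trimmed stand-in for the workflow above (same structure, fewer nodes).
graph = json.loads("""
{
  "8":    {"class_type": "VAEDecode",
           "inputs": {"samples": ["27:4", 0], "vae": ["26:2", 0]}},
  "26:2": {"class_type": "VAELoader",
           "inputs": {"vae_name": "ae.safetensors"}},
  "27:4": {"class_type": "SamplerCustomAdvanced",
           "inputs": {"noise": ["27:1", 0]}}
}
""")

def upstream_links(graph, node_id):
    # A link is encoded as [source_node_id, output_index];
    # plain values (strings, numbers) are widget settings, not links.
    return [(name, value[0], value[1])
            for name, value in graph[node_id]["inputs"].items()
            if isinstance(value, list) and len(value) == 2]

for node_id, node in graph.items():
    print(node_id, node["class_type"], upstream_links(graph, node_id))
```

Walking the links this way is a quick sanity check that every referenced node id actually exists before queueing the workflow.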