From e6190f90612c6e26e578fd5ff64c690bbfbfeb71 Mon Sep 17 00:00:00 2001
From: justumen
Date: Thu, 12 Sep 2024 14:52:07 +0200
Subject: [PATCH] add cloud service

---
 README.md | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 2dc3358..a9190bb 100644
--- a/README.md
+++ b/README.md
@@ -4,17 +4,34 @@
 If you want to use my nodes and comfyui in the cloud, I'm managing an optimized template on runpod :
 
 Template name : `bjornulf-comfyui-allin-workspace`, can be operational in ~3 minutes. (Depending on your pod)
 
-⚠️ You need to open a terminal in browser (After clicking on `connect` from your pod) and use this to launch ComfyUI : `cd /workspace/ComfyUI && python main.py --listen 0.0.0.0 --port 3000` (Much better to control it with a terminal, check logs, etc...)
+⚠️ You need to open a terminal in browser (After clicking on `connect` from your pod) and use this to launch ComfyUI manually : `cd /workspace/ComfyUI && python main.py --listen 0.0.0.0 --port 3000` (Much better to control it with a terminal, check logs, etc...)
 
 After that you can just click on the `Connect to port 3000` button.
 
+For a file manager, you can use the included `JupyterLab` on port 8888. If you have any issues with it, please let me know.
 You need to create and select a network volume, size is up to you, i have 50Gb Storage because i use cloud only for Flux or lora training on a 4090. (~0.7$/hour)
 
+For me : I have a 4070 super with 12Gb, and a simple flux fp8 workflow takes about ~40 seconds. With a 4090 in the cloud I can even run fp16 in ~12 seconds.
 It will manage everything in Runpod network storage (`/workspace/ComfyUI`), so you can stop and start the cloud GPU without losing anything, change GPU or whatever.
 Zone : I recommend `EU-RO-1`, but up to you.
 Top-up your Runpod account with minimum 10$ to start.
 ⚠️ Warning, you will pay by the minute, so not recommended for testing or learning comfyui. Do that locally !!!
 Run cloud GPU only when you already have your workflow ready to run. Advice : take a cheap GPU for testing, downloading models or so one.
+To download a checkpoint or anything else, you need to use the terminal.
+For downloading from Huggingface (get token here), here is an example with everything you need for flux :
+```
+huggingface-cli login --token hf_YOUR_TOKEN
+huggingface-cli download black-forest-labs/FLUX.1-dev flux1-dev.safetensors --local-dir /workspace/ComfyUI/models/unet
+huggingface-cli download comfyanonymous/flux_text_encoders clip_l.safetensors --local-dir /workspace/ComfyUI/models/clip
+huggingface-cli download comfyanonymous/flux_text_encoders t5xxl_fp16.safetensors --local-dir /workspace/ComfyUI/models/clip
+huggingface-cli download black-forest-labs/FLUX.1-dev ae.safetensors --local-dir /workspace/ComfyUI/models/vae
+```
+
+For downloading from civitai (get token here), just copy/paste the link of the checkpoint you want to download and use something like this, with your token in the URL :
+```
+CIVITAI="your_civitai_api_token"
+wget --content-disposition -P /workspace/ComfyUI/models/checkpoints "https://civitai.com/api/download/models/272376?type=Model&format=SafeTensor&size=pruned&fp=fp16&token=$CIVITAI"
+```
 
 # Dependencies
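The `wget` call in the patch above pastes the civitai token straight into the URL on the command line. As a minimal sketch of an alternative (assumptions: you export a `CIVITAI_TOKEN` environment variable yourself, and the `civitai_url` helper name is hypothetical, not part of the template), the download URL can be built from the model-version id so the token lives in one place:

```shell
#!/bin/sh
# Hypothetical helper: build a civitai download URL for a given
# model-version id. The API token is read from the CIVITAI_TOKEN
# environment variable, so it is set once and never hard-coded into
# a README or a wget command line.
civitai_url() {
  printf 'https://civitai.com/api/download/models/%s?type=Model&format=SafeTensor&size=pruned&fp=fp16&token=%s' \
    "$1" "$CIVITAI_TOKEN"
}

# Example usage (placeholder token, not a real credential):
CIVITAI_TOKEN="xxxxxxxx"
civitai_url 272376
```

`272376` is the model-version id from the example above; `wget --content-disposition -P /workspace/ComfyUI/models/checkpoints "$(civitai_url 272376)"` would then download it exactly as before.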