ComfyUI arguments (Reddit)

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

Invoke just released 3.0, which adds ControlNet and a node-based backend that you can use for plugins, so it seems a big team is finally taking node-based expansion seriously. I love Comfy, but a bigger team and a really nice UI with node plugin support gives them serious potential. I wonder whether Comfy and Invoke will somehow work together, or whether things will stay fragmented between all the various front ends.

Welcome to the unofficial ComfyUI subreddit.

What worked for me was to add a simple command line argument to the file: `--listen 0.0.0.0`.

This is a plugin that allows users to run their favorite features from ComfyUI while being able to work on a canvas at the same time.

This narrows the problem down to the GPU/PyTorch packages.

Workflows are much more easily reproducible and versionable.

Can you let me know how to fix this issue? I have the following arguments: --windows-standalone-build --disable-cuda-malloc --lowvram --fp16-vae --disable-smart-memory

Using ComfyUI was a better experience: the images took around 1:50 to 2:25 minutes at 1024x1024 / 1024x768, all with the refiner. I've ensured both CUDA 11.8 and PyTorch 2.1 are updated and used by ComfyUI. That did not help; I am using ComfyUI with its default settings.
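Once the server is started with `--listen 0.0.0.0`, you can sanity-check reachability from another machine on the LAN. A sketch (the IP address is a placeholder for the host machine's address; 8188 is ComfyUI's default port):

```shell
# From another device on the same network; replace the IP with the host's.
curl -s http://192.168.1.50:8188/ >/dev/null && echo reachable
```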
I tested with different SDXL models and tested without the LoRA, but the result is always the same.

Hey, I'm new to using ComfyUI and was wondering if there are command line arguments to add to the launch file, like there are in Automatic1111.

Wherever you are running main.py (in a .bat file, a .sh file, or on the command line), you can just add the --lowvram option straight after main.py.

Finally I gave up on ComfyUI nodes and wanted my extensions back in A1111.

Update ComfyUI and all your custom nodes, and make sure you are using the correct models.

The aim of this page is to get you up and running with ComfyUI, running your first gen, and providing some suggestions for the next steps to explore.

I have read the section of the GitHub page on installing ComfyUI. For the portable build, dependencies install with `python_embeded\python.exe -m pip install [dependency]`.

I originally created it to learn more about the underlying code base powering ComfyUI, but after building it I think it could be useful for anyone in the community who is more comfortable coding than using GUIs.

I get 1.53 it/s for SDXL and approximately 4.55 it/s for SD1.5 while creating an 896x1152 image via the Euler-A sampler.

Has anyone tried, or is still trying? I somehow got it to magically run with AMD despite the lack of clarity and explanation on the GitHub page and literally no video tutorial on it.

Find your ComfyUI main directory (usually something like C:\ComfyUI_windows_portable) and just put your arguments in the run_nvidia_gpu.bat file, along with any other arguments you want to add.

ComfyUI is also trivial to extend with custom nodes. Please share your tips, tricks, and workflows for using this software to create your AI art.

I haven't managed to reproduce this process in ComfyUI yet.
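As a sketch of that run_nvidia_gpu.bat edit: the stock portable launcher has roughly this shape, and extra arguments are appended to the one launch line (--lowvram here is just an example of an argument you might add; your file may differ):

```bat
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --lowvram
pause
```

Saving the file is enough; the arguments are picked up the next time the .bat is run.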
And with ComfyUI my command-line arguments are: "--directml --use-split-cross-attention --lowvram". The most important thing is to use tiled VAE for decoding; that ensures no out-of-memory failure at that step.

Wherever you launch ComfyUI from is where you need to set the launch options, like so: `python main.py --normalvram`.

I am not sure what kind of settings ComfyUI used to achieve such optimization, but if you are using Auto1111 you could disable live preview and enable xformers (which is what I did before switching to ComfyUI). I keep hearing that A1111 uses the GPU to feed the noise-creation part, and ComfyUI uses the CPU.

I did a clean install just now and it works perfectly. But these arguments did not work for me; --xformers gave me a minor bump in performance (8 s/it vs 11 s/it), but it still takes about 10 minutes per image.

From the paper, training the entire Würstchen model (the predecessor to Stable Cascade) cost about 1/10th of Stable Diffusion 2.1's 200,000 GPU hours.

I don't find ComfyUI faster; I can make an SDXL image in Automatic 1111 in 4.2 seconds with TensorRT.

On Linux with the latest ComfyUI I am getting 3.6 s/it with Comfy, as opposed to 4.9 s/it with 1111. Thanks in advance for any information.

I tried installing the dependencies by running pip install in a terminal window in the ComfyUI folder.

I used to do it manually, with symlinks and command arguments and the like.

VFX artists are also typically very familiar with node-based UIs, as they are very common in that space.

Please keep posted images SFW.

Using ComfyUI with my GTX 1650 is simply way better than using Automatic1111.

Welcome to the ComfyUI Community Docs! This is the community-maintained repository of documentation related to ComfyUI, a powerful and modular Stable Diffusion GUI and backend.
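A minimal sketch of that AMD launch, using exactly the flags quoted in the comment (all three are real ComfyUI options; whether you also need --lowvram depends on your card):

```shell
# AMD GPU via DirectML, with split cross-attention and low-VRAM mode
python main.py --directml --use-split-cross-attention --lowvram
```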
It's very early stage, but I am curious what folks think, and I'm excited to update it over time! The goal is to (a) make it easy for semi-casual users (e.g. Discord bot users lightly familiar with the models) to supply prompts that involve custom numeric arguments (like number of diffusion steps, LoRA strength, etc.).

Every time you run the .bat file, it will load the arguments.

Anyway, whenever you define a function, never forget the self argument! I have barely scratched the surface, but through personal experience, you will go much further!

However, I kept getting a black image.

I didn't quite understand the part where you can use the venv folder from another webui like A1111 to launch it instead and bypass all the requirements to launch ComfyUI.

For a portable install, launch a terminal in the ComfyUI folder and use `.\python_embeded\python.exe -m pip install [dependency]`.

Command line arguments can be put in the .bat files used to run ComfyUI, separated by a space after each command.

Aug 2, 2024: You can use t5xxl_fp8_e4m3fn.safetensors instead for lower memory usage, but the fp16 one is recommended if you have more than 32GB of RAM. Only if you want it early.

While I primarily utilize PyTorch cross attention (SDP), I also tested xformers, to no avail.

Hi all: how to run ComfyUI with ZLUDA. All credit goes to the people who did the work (lshqqytiger, LeagueRaINi, Next Tech and AI on YouTube); I just pieced…

These images might not be enough (in numbers) for my argument, so I invite you to try it out yourselves and see if it's any different in your case.

But where do I begin? Anyone know any good tutorials for a LoRA-training beginner?

And I am trying out using SDXL in ComfyUI. After playing around with it for a while, here are 3 basic workflows that work with older models (here, AbsoluteReality).

==[Update]== Launching in CPU mode is successful (python main.py --cpu), but of course not ideal.
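The venv trick mentioned above, sketched for a Linux shell (the paths are illustrative assumptions; the point is that activating A1111's existing virtual environment makes its already-installed packages available when launching ComfyUI):

```shell
# Reuse A1111's venv instead of installing ComfyUI's requirements separately
source ~/stable-diffusion-webui/venv/bin/activate
cd ~/ComfyUI
python main.py
```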
ComfyUI, an extremely powerful Stable Diffusion GUI with a graph/nodes interface for advanced users that gives you precise control over the diffusion process without coding anything, now supports ControlNets.

Even though I keep hearing people focus the discussion on the time it takes to generate the image (and yes, ComfyUI is faster; I have a 3060), I would like people to be discussing whether the image quality is better, and in which one.

Installation

I downloaded the Windows 7-Zip file and ended up, once unzipped, with a large folder of files.

The final line in the run_nvidia_gpu.bat looks like this: `.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --normalvram --listen 0.0.0.0`

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

Scoured the internet and came across multiple posts saying to add the arguments --xformers --medvram.

SD1.5, SD2.1, and SDXL are all trained on different resolutions, and so models for one will not work with the others.

Options: --install-completion: install completion for the current shell.

Doing it in ComfyUI or any other SD UI doesn't matter to me, only that it's done locally.

But there's an even easier way now: StabilityMatrix. It's the same thing, but they've done most of the work for you.

For the latest daily release, launch ComfyUI with this command line argument: --front-end-version Comfy-Org/ComfyUI_frontend@latest. For a specific version, replace latest with the desired version number.

I stand corrected.

The same image takes 5.6 seconds in ComfyUI, and I cannot get TensorRT to work in ComfyUI, as the installation is pretty complicated and I don't have 3 hours to burn doing it.
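The frontend pinning described above, as concrete launch lines (the @latest form is quoted in the comment; the specific version number below is a made-up placeholder, not a real release):

```shell
# Latest daily build of the new frontend
python main.py --front-end-version Comfy-Org/ComfyUI_frontend@latest
# Pin a specific release instead (version number illustrative)
python main.py --front-end-version Comfy-Org/ComfyUI_frontend@1.2.3
```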
Some main features are: automatically install ComfyUI dependencies.

Hi r/comfyui, we worked with Dr.Lt.Data to create a command line tool to improve the ergonomics of using ComfyUI.

Additionally, I've added some firewall rules for TCP/UDP for port 8188.

"The training requirements of our approach consist of 24,602 A100-GPU hours, compared to Stable Diffusion 2.1's 200,000 GPU hours."

For some reason, it broke my ComfyUI when I did it earlier. That helped the speed a bit.

FETCH DATA from: H:\Stable Diffusion Apps\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json

Hey all, is there a way to set a command line argument on startup for ComfyUI to use the second GPU in the system? With Auto1111 you add the following to the webui-user.bat file: set CUDA_VISIBLE_DEVICES=1.

I've seen people say ComfyUI is better than A1111 and gave better results, so I wanted to give it a try, but I can't find a good guide or info on how to install it on an AMD GPU, and the resources conflict: the original ComfyUI GitHub page says you need to install DirectML and then somehow run it if you already have A1111, while other places say you need Miniconda/Anaconda to run it.

I think a function must always have "self" as its first argument. I'm not sure why, and I don't know if it's specific to Comfy or if it's a general rule for Python.

I only have 4GB of VRAM, so I'm just trying to get my settings optimized.

--show-completion: show completion for the current shell, to copy it or customize the installation.

Any idea why the quality is much better in Comfy? I like InvokeAI; it's more user-friendly, and although I aspire to master Comfy, it is disheartening to see a much easier UI give sub-par results.

Hello, community! I'm happy to announce I have finally finished my ComfyUI SD Krita plugin.

It appears some other AMD GPU users have similar unsolved issues.

I just released an open source ComfyUI extension that can translate any native ComfyUI workflow into executable Python code.
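GPU selection works the same way for ComfyUI as the A1111 trick quoted above: set the CUDA_VISIBLE_DEVICES environment variable before launching. A minimal POSIX-shell demonstration (the actual launch line is commented out; only the environment mechanics are shown):

```shell
#!/bin/sh
# CUDA processes enumerate only the devices listed here, so "1" exposes
# just the second GPU (which then appears as index 0) to whatever runs next.
export CUDA_VISIBLE_DEVICES=1
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
# python main.py   # ComfyUI launched now would see only that GPU
```

Running the script prints `CUDA_VISIBLE_DEVICES=1`; any process started from that shell inherits the restriction.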
ComfyUI was written with experimentation in mind, and so it's easier to do different things in it.

Has anyone managed to implement Krea.ai or Magnific AI in ComfyUI? I've seen the web source code for Krea AI, and I've seen that they use SD 1.5 (+ ControlNet, PatchModel). I don't know what Magnific AI uses.

ComfyUI is much better suited for studio use than other GUIs available now.

Here are some examples I generated using ComfyUI + SDXL 1.0.

But with ComfyUI this doesn't seem to work! Thanks!

Find your ComfyUI main directory (usually something like C:\ComfyUI_windows_portable) and just put your arguments in the run_nvidia_gpu.bat file: open the .bat file with Notepad, make your changes, then save it.

The VAE can be found here and should go in your ComfyUI/models/vae/ folder.

Updating to the correct latest version of PyTorch is what is needed.

It said to follow the instructions for manually installing for Windows.

Supports: Basic txt2img. Basic img2img. Inpainting (with auto-generated transparency masks).

Anything that works well gets adopted by the larger community and finds its way into other Stable Diffusion software eventually.

Launch arguments that I don't know about for ComfyUI, or some config stuff I've missed with ComfyUI?

But inpainting in Comfy is still terrible. I'm not sure if I simply didn't manage to configure it right, but when inpainting faces in 1111 I can make it inpaint at a specific resolution, so I do faces at 512x512.

For example: `python3 ./main.py --lowvram --auto-launch`.

Launch and run workflows from the command line. Install and manage custom nodes via cm-cli (ComfyUI-Manager as a CLI).

Hello! I've been playing around with ComfyUI for months now and reached a level where I want to make my own LoRAs.

A lot of people are just discovering this technology and want to show off what they created.
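The file placement mentioned above (`ComfyUI/models/vae/`) is just a copy/move into that folder. A sketch using a throwaway file in a temp directory so the commands are safe to run anywhere (the VAE filename is a stand-in for whatever you downloaded):

```shell
#!/bin/sh
# Demonstrate dropping a VAE where ComfyUI looks for it, inside a temp tree.
root=$(mktemp -d)
mkdir -p "$root/ComfyUI/models/vae"
touch "$root/sdxl_vae.safetensors"              # stand-in for the download
mv "$root/sdxl_vae.safetensors" "$root/ComfyUI/models/vae/"
ls "$root/ComfyUI/models/vae"
```

On a real install you would move the file into your actual ComfyUI directory instead of a temp tree, then restart or refresh ComfyUI so the VAE loader node sees it.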
I use an 8GB GTX 1070 without ComfyUI launch options, and I can see from the console output that it chooses NORMAL_VRAM by default for me.

I think for me, at least for now with my current laptop, using ComfyUI is the way to go.

Also, if this is new and exciting to you, feel free to post.

On vacation for a few days, I installed ComfyUI portable on a USB key and plugged it into a laptop that wasn't too powerful (just the minimum 4 gigabytes of VRAM).

The example pictures do load a workflow, but they don't have a label or text that indicates which version it is.

With this combo it now rarely gives out-of-memory errors (unless you try crazy things). Before, I couldn't even generate with SDXL on ComfyUI at all.

Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.
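The VRAM modes that come up throughout this thread (--lowvram, --normalvram, plus --novram and --highvram) are mutually exclusive launch flags; when none is given, ComfyUI picks one itself, as the console output above shows. A small sketch that picks a flag from a hard-coded VRAM figure (the 8 GB value and the thresholds are my own illustration, not ComfyUI's internal logic):

```shell
#!/bin/sh
# Choose a ComfyUI VRAM flag from an assumed VRAM size in GB.
# On a real system you might query nvidia-smi instead of hard-coding it.
VRAM_GB=8
if [ "$VRAM_GB" -le 4 ]; then
  FLAG="--lowvram"
elif [ "$VRAM_GB" -le 8 ]; then
  FLAG="--normalvram"
else
  FLAG="--highvram"
fi
echo "python main.py $FLAG"
```

With the stand-in value of 8, the script prints `python main.py --normalvram`, matching what the GTX 1070 comment above sees chosen by default.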