ComfyUI reddit
I use a cheap, expendable Chromebook (to access Google Colab) for my travelling ComfyUI needs (with a mouse). From what I've generated so far, the model upscaler handles edges slightly better than the Ultimate Upscale. While I primarily use PyTorch cross-attention (SDP), I also tested xformers, to no avail. Different artists can do different things, so pick an artist that suits the image you want.

Welcome to the unofficial ComfyUI subreddit. Using ComfyUI was a better experience: the images took around 1:50 to 2:25 minutes at 1024x1024 / 1024x768, all with the refiner. Assuming you had a KSampler named KSampler, you would do this: %KSampler.denoise%. I've been wondering the same since I saw a tutorial on using just the model upscaler vs the Ultimate Upscaler. If you have multiple KSamplers in your workflow, you need to find the S&R name and use that for the node_name (see the link; it's in the right-click menu when you right-click a node).

Hello! Looking to dive into AnimateDiff and am looking to learn from the mistakes of those that walked the path before me. Are people using…

Wherever you launch ComfyUI from is where you need to set the launch options, like so: python main.py --normalvram. Thanks!

Again, would really appreciate any of your Comfy 101 materials, resources, and creators, as well as your advice. Started with A1111, but now solely ComfyUI. The new version uses two ControlNet inputs: a 9x9 grid of openpose faces, and a single openpose face. Please share your tips, tricks, and… ComfyUI is much better suited for studio use than other GUIs available now. This pack includes a node called "power prompt". Hi Reddit!
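The %NodeName.widget% substitution described above can be sketched in a few lines. This is a toy resolver illustrating the syntax only, not ComfyUI's actual implementation; the function name and data layout here are my own.

```python
import re

def resolve_placeholders(text, widgets):
    """Replace %NodeName.widget% tokens using a {node: {widget: value}} mapping.
    Unknown tokens are left untouched, mirroring how unresolved S&R names behave."""
    def repl(match):
        node, widget = match.group(1), match.group(2)
        try:
            return str(widgets[node][widget])
        except KeyError:
            return match.group(0)  # leave unresolved tokens as-is
    return re.sub(r"%([^.%]+)\.([^%]+)%", repl, text)

# One KSampler named "KSampler" with a denoise widget:
widgets = {"KSampler": {"denoise": 0.75, "steps": 20}}
print(resolve_placeholders("denoise_%KSampler.denoise%_steps_%KSampler.steps%", widgets))
# -> denoise_0.75_steps_20
```

With several KSamplers you would key the mapping by each node's S&R name instead of its default title, which is exactly why the snippet above leaves unknown names unresolved rather than guessing.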
I just shipped some new custom nodes that let you easily use the new MagicAnimate model inside ComfyUI!…

Has anyone managed to implement Krea.AI or Magnific AI in ComfyUI? I've seen the web source code for Krea AI, and I've seen that they use SD 1.5 (+ ControlNet, PatchModel…). Magnific is a really clever workflow, to be honest; it is not that trivial to add detail and not change the image too much, as OP said. My questions weren't so much that you should or shouldn't include it, BUT I am confused by the support/lack thereof for it via any method (core or custom node) and what seems like a format that is widely used (HF and Civitai).

Workflows are much more easily reproducible and versionable. Although ComfyUI and A1111 ultimately do the same thing, they are not targeting the same audience. Those detail LoRAs are 100% compatible with ComfyUI, and yes, that's the first, second, and third recommendation I would give. ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own. It JUST WORKS! I love that. ComfyUI Manager issue. You can construct an image generation workflow by chaining different blocks (called nodes) together. So, as long as you don't expect ComfyUI not to break occasionally, sure, give it a go. Sure, my paintbrush never crashed after an update, but then ComfyUI doesn't get crimped in my bag, my LoRAs don't need cleaning, and a PNG is quite a bit cheaper than canvas. I improved on my previous expressions workflow for ComfyUI by replacing the attention-couple nodes with area-composition ones. Then comes the higher resolution by upscaling.
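The node-chaining idea is easiest to see as data: in ComfyUI's API format, a workflow is a JSON object keyed by node id, where each node names its class_type and wires inputs to [source_node_id, output_index] pairs. A minimal hand-written sketch (the node class names follow ComfyUI's stock nodes, but treat the specific ids and field values as illustrative):

```python
import json

# Checkpoint -> CLIP text encode (pos/neg) -> KSampler -> VAE decode -> save
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "a scenic mountain lake"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "blurry, low quality"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 20,
                     "cfg": 7.0, "sampler_name": "euler_ancestral",
                     "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "demo"}},
}

def dangling_links(graph):
    """Return (node_id, input_name) pairs that reference a missing node."""
    bad = []
    for nid, node in graph.items():
        for name, value in node["inputs"].items():
            if isinstance(value, list) and value[0] not in graph:
                bad.append((nid, name))
    return bad

assert dangling_links(workflow) == []  # every wire points at a real node
payload = json.dumps({"prompt": workflow})  # shape expected by the HTTP API
```

Rearranging the workflow is then just rewiring these [id, slot] pairs, which is why reproducing and versioning a Comfy workflow is so much easier than replaying a GUI's hidden state.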
Please share your tips, tricks, and workflows for using this software to create your AI art. It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery. Thanks for explaining that! Totally makes sense.

You can build an interactive, real-time dialogue game in ComfyUI with the theme of the Chinese mythological story "Journey to the West." GPT is responsible for scriptwriting, SDXL and DALL-E 3 for creating the illustrations, and MS-TTS for delivering the spoken dialogues in various voices.

A1111 is REALLY unstable compared to ComfyUI. And above all, BE NICE. If ever you find some way of using ComfyUI on your phone, please come back here and let me (us) know :-))) I've tried, and the interface just doesn't move, and the Queue Prompt widget is fixed. YouTube playback is very choppy if I use SD locally for anything serious.

A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make. It took me hours to get one I'm more or less happy with, where I feather the mask (feather nodes usually don't work how I want them to, so I use mask2image, blur the image, then image2mask) and use 'only masked area' where it also applies to the ControlNet (applying it to the ControlNet was probably the worst part).

Download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. If you don't have t5xxl_fp16.safetensors or clip_l.safetensors already in your ComfyUI/models/clip/ directory, you can find them on: this link. The most powerful and modular diffusion model GUI, API and backend with a graph/nodes interface. - comfyanonymous/ComfyUI

It's possible that MoonDream is competitive if the user spends a lot of time crafting the perfect prompt, but if the prompt simply is "Caption the image" or "Describe the image", Florence2 wins. On my 12GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM for VRAM at some point near the end of generation, even with --medvram set. I created this subreddit to separate discussions from Automatic1111 and Stable Diffusion discussions in general.
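The drag-and-drop trick works because ComfyUI saves the workflow JSON inside the PNG's text metadata. A rough stdlib-only sketch of reading such metadata (the chunk-walking code is mine, and the demo bytes are a minimal hand-built PNG fragment rather than a real viewable image):

```python
import struct, zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def read_text_chunks(png: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in a PNG byte string."""
    assert png[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    out, pos = {}, 8
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, text = data.partition(b"\x00")
            out[key.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return out

# A minimal byte string carrying workflow metadata the way a generated PNG would:
demo = (b"\x89PNG\r\n\x1a\n"
        + png_chunk(b"tEXt", b"workflow\x00{\"nodes\": []}")
        + png_chunk(b"IEND", b""))
print(read_text_chunks(demo))  # {'workflow': '{"nodes": []}'}
```

This also explains the complaint later in the thread about an image that "doesn't load anything": if a site re-encodes the PNG and strips its text chunks, the embedded workflow is gone.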
It all starts with the "Load Checkpoint" node. Install ComfyUI Manager. The big current advantage of ComfyUI over Automatic1111 is that it appears to handle VRAM much better. You can also easily upload & share your own ComfyUI workflows, so that others can build on top of them! :) Why I built this: I just started learning ComfyUI, and really like how it saves the workflow info within each image it generates.

The biggest tip for Comfy: you can turn most node settings into an input via RMB → Convert to Input, then connect a primitive node to that input. But one of the really cool things it has is a separate tab for a "Control Surface". ComfyUI is also trivial to extend with custom nodes. I have an NVIDIA GeForce GTX Titan with 12GB VRAM and 128GB normal RAM. Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc. For those of you familiar with FL Studio, and specifically with Patcher, you might know what I'm about to describe.

I've ensured both CUDA 11.8 and PyTorch 2.1 are updated and used by ComfyUI. It seems that the path always looks to the root of ComfyUI, not relative to the custom_node folder "comfyui-popup_preview".

Installation is complicated and annoying to set up; most people would have to watch YT tutorials just to get A1111 installed properly. Here are some examples I did generate using ComfyUI + SDXL 1.0 with refiner. I ran some tests this morning. ComfyUI is meant for people who like node-based editors (and are rigorous enough not to get lost in their own architecture). With ComfyUI you just download the portable zip file, unzip it, and get ComfyUI running instantly; even a kid can get ComfyUI installed. Next, install RGThree's custom node pack from the manager.
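The "convert to input, then connect a primitive" tip amounts to replacing a literal widget value with a link to one shared node. A toy data-model sketch of that rewiring (the dict layout and function are hypothetical, purely to show why editing the primitive updates every consumer):

```python
def convert_to_input(graph, node_id, widget, primitive_id):
    """Replace a node's literal widget value with a link to a shared primitive."""
    node = graph[node_id]
    node["widgets"].pop(widget)          # the setting is no longer stored locally
    node["inputs"][widget] = primitive_id  # ...it is read from the primitive instead

# Two samplers, each carrying its own steps value:
graph = {
    "prim": {"kind": "PrimitiveNode", "value": 30},
    "ks1":  {"kind": "KSampler", "widgets": {"steps": 20}, "inputs": {}},
    "ks2":  {"kind": "KSampler", "widgets": {"steps": 25}, "inputs": {}},
}
for nid in ("ks1", "ks2"):
    convert_to_input(graph, nid, "steps", "prim")

graph["prim"]["value"] = 40  # edit one place...
steps = {nid: graph[graph[nid]["inputs"]["steps"]]["value"] for nid in ("ks1", "ks2")}
print(steps)  # {'ks1': 40, 'ks2': 40} -- both samplers now read the shared value
```

In the real UI this is exactly the primitive-node trick mentioned again below: connect the same primitive to several nodes and change them all in one place.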
/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

I've been "detailing" my images for months. I don't mind changing my images too much, because I think of the detailer as a step in the workflow. I am now just setting up ComfyUI, and I have issues (already, LOL) with opening the ComfyUI Manager from CivitAI. And then connect the same primitive node to 5 other nodes, to change them in one place instead of in each node. ComfyUI runs SDXL (and all other generations of model) the most efficiently. I'm into it. A1111 is probably easier to start with: everything is siloed, easy to get results. You play as the newly born Monkey King, Sun Wukong.

I run ComfyUI locally via Stability Matrix on my workstation in my home/office. Please keep posted images SFW. So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. I use an 8GB GTX 1070 without ComfyUI launch options, and I can see from the console output that it chooses NORMAL_VRAM by default for me.

Welcome to the Reddit home for ComfyUI, a graph/node-style UI for Stable Diffusion. Simply add LoRAs into your workflow: https://civitai.com/search/models?baseModel=SDXL%201.0&modelType=LORA&sortBy=models_v8&query=details. I think, for me at least for now, with my current laptop, using ComfyUI is the way to go.
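The model-file advice in this thread boils down to "check that the files a workflow expects actually sit in ComfyUI/models/clip/". A small sketch of that check (the helper is mine; the two filenames are the ones named in the thread, and the default list is an assumption, not an exhaustive requirement):

```python
from pathlib import Path
import tempfile

def missing_clip_models(comfy_root,
                        required=("t5xxl_fp16.safetensors", "clip_l.safetensors")):
    """List required text-encoder files absent from <root>/models/clip/."""
    clip_dir = Path(comfy_root) / "models" / "clip"
    return [name for name in required if not (clip_dir / name).is_file()]

# Simulate an install that only has clip_l:
with tempfile.TemporaryDirectory() as root:
    clip_dir = Path(root) / "models" / "clip"
    clip_dir.mkdir(parents=True)
    (clip_dir / "clip_l.safetensors").touch()
    print(missing_clip_models(root))  # ['t5xxl_fp16.safetensors']
```

Running something like this before queueing a prompt gives a clearer error than a node failing mid-generation.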
Room for improvement (or, inquiring about…

Flux is a family of diffusion models by Black Forest Labs. Invoke just released 3.0, which adds ControlNet and a node-based backend that you can use for plugins, etc., so it seems a big team is finally taking node-based expansion seriously. I love Comfy, but a bigger team and a really nice UI with node plugin support give them serious potential… wonder if Comfy and Invoke will somehow work together, or if things will stay fragmented between all the various…

With the extension "ComfyUI Manager" you can install the missing nodes almost automatically with the "Install Missing Custom Nodes" button. Now you can manage custom nodes within the app.

Go to your FizzNodes folder ("D:\Comfy\ComfyUI\custom_nodes\ComfyUI_FizzNodes" for me) and run this, making sure to also adapt the beginning to match where you put your ComfyUI folder: "D:\Comfy\python_embeded\python.exe -s -m pip install -r requirements.txt". It is actually written on the FizzNodes GitHub.

A lot of people are just discovering this technology, and want to show off what they created. Belittling their efforts will get you banned. For seven months now. Basically, in Patcher, you can string plugins together in much the same way as ComfyUI. Florence2 (large, not FT, in more_detailed_captioning mode) beats MoonDream v1 and v2 in out-of-the-box captioning. On Linux with the latest ComfyUI I am getting 3.53 it/s for SDXL and approximately 4.55 it/s for SD1.5 while creating a 896x1152 image via the Euler-A sampler. I've used those loaders but did not know that's what it is doing under the hood. The graphic style…

The example pictures do load a workflow, but they don't have a label or text that indicates if it's version 3.1 or not.
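The FizzNodes tip generalizes: on the portable build, each custom node's requirements.txt must be installed with the bundled python_embeded interpreter, not the system Python. A sketch that scans custom_nodes and builds (but deliberately does not run) one such pip command per node; the folder layout mirrors the portable install, while the helper itself is my own:

```python
from pathlib import Path
import tempfile

def pip_install_commands(comfy_root):
    """One python_embeded pip command per custom node with a requirements.txt.
    Commands are returned as argv lists, not executed."""
    root = Path(comfy_root)
    python = root / "python_embeded" / "python.exe"
    return [
        [str(python), "-s", "-m", "pip", "install", "-r", str(req)]
        for req in sorted((root / "ComfyUI" / "custom_nodes").glob("*/requirements.txt"))
    ]

# Simulate a portable install with one custom node pack:
with tempfile.TemporaryDirectory() as root:
    fizz = Path(root) / "ComfyUI" / "custom_nodes" / "ComfyUI_FizzNodes"
    fizz.mkdir(parents=True)
    (fizz / "requirements.txt").write_text("numexpr\n")
    cmds = pip_install_commands(root)
    print(len(cmds), cmds[0][-1].endswith("requirements.txt"))  # 1 True
```

The `-s` flag keeps the embedded interpreter from picking up the user's site-packages, which is why the quoted command uses it.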
We learned that downloading other workflows and trying to run them often doesn't work because of missing custom nodes, unknown model files, etc. I haven't managed to reproduce this process. Install ComfyUI. Also, I don't know when it was changed, but ComfyUI is not a Conda package environment anymore; it depends on a python_embeded package, and generating a venv from it results in no tkinter. We ask that you please take a minute to read through the rules and check out the resources provided before creating a post, especially if you are new here.

%KSampler.denoise% — where denoise is the name of the widget value as shown on the node itself. Basically it doesn't open after downloading (v.22, the latest one available). To try to replicate Magnific, it's a good starting point using stuff that was available 5-6 months ago.

I am so sorry, but my video is outdated now because ComfyUI has officially implemented SVD natively: update ComfyUI, copy the previously downloaded models from the ComfyUI-SVD checkpoints to your Comfy models SVD folder, and just delete the custom nodes ComfyUI-SVD.

Hi Reddit! In October, we launched https://comfyworkflows.com to make it easier for people to share and discover ComfyUI workflows. I'm starting to make my way towards ComfyUI from A1111. VFX artists are also typically very familiar with node-based UIs, as they are very common in that space. It's ComfyUI: with the latest version you just need to drop the picture from the linked website into ComfyUI and you'll get the setup.
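The "missing custom nodes" failure described above can be caught before queueing: diff the class_types a shared workflow uses against the node types this install provides. A sketch under the API-format assumption that each node records its class_type; the "installed" set here is a tiny illustrative subset, not the real stock-node list:

```python
def unknown_node_types(workflow, installed):
    """class_types a shared workflow uses that this install doesn't provide."""
    used = {node["class_type"] for node in workflow.values()}
    return sorted(used - set(installed))

installed = {"CheckpointLoaderSimple", "CLIPTextEncode", "KSampler",
             "VAEDecode", "SaveImage"}  # illustrative subset of stock nodes
shared = {
    "1": {"class_type": "CheckpointLoaderSimple", "inputs": {}},
    "2": {"class_type": "KSampler", "inputs": {}},
    "3": {"class_type": "UltimateSDUpscale", "inputs": {}},  # third-party node
}
print(unknown_node_types(shared, installed))  # ['UltimateSDUpscale']
```

This is essentially what ComfyUI Manager's "Install Missing Custom Nodes" button automates: compare the workflow's node types against what is installed, then fetch the packs that provide the rest.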