ComfyUI ADetailer - Reddit

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art.

This is the first time I've seen a Face + Hand ADetailer in a ComfyUI workflow. Maybe I will fork the ADetailer code and add it as an option.

Hi guys, ADetailer can easily fix and generate beautiful faces, but when I tried it on hands, it only made them even worse. How exactly do you use it to fix hands? When I use default inpainting to fix hands, the result is also not good, no matter the checkpoint and the denoise value.

The video was pretty interesting, beyond the A1111 vs. Comfy speed comparison. Noticed that speed was almost the same with A1111 compared to my 3080. Tweaked a bit and reduced basic SDXL generation to 6-14 seconds. The thing that is insane is testing face fixing (used SD 1.5 just to compare times): the initial image took 127.5 ms to generate, and 9 seconds total to refine it.

I'm actually using ADetailer recognition models in A1111, but they are limited and cannot be combined in the same pass.

I have to push denoising to around 0.3 in order to get rid of jaggies; unfortunately it will diminish the likeness during the Ultimate Upscale. I guess with ADetailer denoising at 0.3 it is not that important.

BTW, that pixelated image looks like it could be because the wrong VAE is being used. I've also seen a similar look when ADetailer is used with Turbo models and certain samplers; the easiest solution to that is to specify a different sampler for ADetailer.

I am using AnimateDiff + ADetailer + Highres, but when using AnimateDiff + ADetailer in the webui, the face comes out unnatural.

Thanks for the reply - I'm familiar with ADetailer, but I'm deliberately looking for something that does less. Giving me the mask and letting me handle the inpaint myself would give me more flexibility, e.g. doing one face at a time with more control over the prompts.

ComfyUI only has ReActor, so I was hoping the dev would add it too. I even tried ADetailer, but Roop always runs after ADetailer, so it didn't help either. Before switching to ComfyUI I used the FaceSwapLab extension in A1111; that was the reason why I preferred it over the ReActor extension. Here's the repo with the install instructions (you'll have to uninstall the wildcards you already have): sd-webui-wildcards-ad, and the adetailer repo: sd-webui-adetailer. Just make sure you update if it's already installed.

I just made the move from A1111 to ComfyUI a few days ago. I tried to upscale a low-res image in img2img with ADetailer on; it still doesn't do much. Try default settings. Anything wrong here with this workflow?

When I do two passes, the end result is better, although it still falls short of what I got in the webui with ADetailer, which is strange, as they work the same way from what I understand.
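Several comments in this thread lean on two-pass sampling (one commenter ends the first sampler at 8 of 24 steps). For readers who want to see the idea outside a node graph, here is a minimal two-pass sketch using diffusers; the SDXL model IDs, the prompt, and the 8/24 split are illustrative assumptions, not settings anyone in the thread posted:

```python
# Two-pass sampling sketch: stop the base model early and let a second pass
# finish denoising, roughly what a two-KSampler chain does in ComfyUI.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait photo, detailed face"          # placeholder prompt
latents = base(prompt=prompt, num_inference_steps=24,
               denoising_end=8 / 24,              # hand off after step 8 of 24
               output_type="latent").images
image = refiner(prompt=prompt, num_inference_steps=24,
                denoising_start=8 / 24,           # second sampler finishes the rest
                image=latents).images[0]
image.save("two_pass.png")
```

In node terms, the first KSampler ends early and passes its latent to a second sampler. That is also why two-pass and ADetailer results differ: the former re-samples the whole image, the latter only the detected region.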
Adetailer and the others are just more automated extensions for inpainting; you don't really need a separate model to place a mask on a face (you can do it yourself). That's all that Adetailer and the other detailer extensions do. For example, the Adetailer extension automatically detects faces, masks them, creates new faces, and scales them to fit the masks. ADetailer works OK for faces, but SD still doesn't know how to draw hands well, so don't expect any miracles. Any tips are greatly appreciated.

The creator has recently opted into posting YouTube examples which have zero audio, captions, or anything to explain to the user what exactly is happening in the workflows being generated.

Hopefully, some of the most important extensions such as Adetailer will be ported to ComfyUI. And the clever tricks discovered from using ComfyUI will be ported to the Automatic1111 WebUI. Most of them already are, if you are using the DEV branch, by the way.

It seems I may have made a mistake in my setup, as the results for the faces after Adetailer are not turning out well. Losing a great amount of detail and also de-aging faces in a creepy way.

I'm using ComfyUI portable and had to install it into the embedded Python install: going to python_embedded and using python -m pip install compel got the nodes working. I will have to play with it more to be sure it's working properly, but it looks like that may have been the issue.

Tried ComfyUI just to see. Then I bought a 4090 a couple of weeks ago (2, I think).

I just released version 4.0 of my AP Workflow for ComfyUI.

I call it 'The Ultimate ComfyUI Workflow': easily switch from Txt2Img to Img2Img, with a built-in Refiner, LoRA selector, Upscaler & Sharpener.

The best use case is to just let it img2img on top of a generated image with an appropriate detection model; you can use the Img2Img tab and check "Skip img2img" for testing on a preexisting image like this one. Or just throw the image into img2img and run adetailer alone (with skip img2img checked), then photoshop the results to get good hands and feet.

I also had issues with this workflow with unusually sized images.

Is stable diffusion's adetailer just better? Does it also upscale the mask? Sometimes in ComfyUI I even get worse results than the preview.

I am curious if I can use AnimateDiff and adetailer simultaneously in ComfyUI without any issues. (In the webui, adetailer runs after the AnimateDiff generation, making the final video look unnatural.)

That extension already had a tab with this feature, and it made a big difference in output. I didn't use any adetailer prompt.

I come from Forge UI, and the way it's done there is HiRes Fix -> ADetailer.

Ya, been reading and playing with it for a few days. This one took 35 seconds to generate in A1111 on a 3070 8GB with a pass of ADetailer.

I observed that using Adetailer with SDXL models (both Turbo and non-Turbo variants) leads to an overly smooth skin texture in upscaled faces, devoid of natural imperfections and pores.

Now a world of possibilities has opened; next, I will try to use a segment to separate the face, upscale it, add a LoRA or detailer to fine-tune the face details, rescale to the source image size, and paste it back. The amount of control you can have is frigging amazing with Comfy.

Regarding the integration of ADetailer with ComfyUI, there are known limitations that might affect this process. Specifically, "img2img inpainting with skip img2img is not supported" due to bugs, which could be a potential issue for ComfyUI integration.

Adetailer was the only real thing I was missing coming from SDNext, but thanks to mcmonkey and fiddling around a bit I got adetailer-like functionality running without too much trouble.

Under the "ADetailer model" menu select "hand_yolov8n.pt" and give it a prompt like "hand." It will attempt to automatically detect hands in the generated image and try to inpaint them with the given prompt.
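For anyone curious what that menu option actually runs: hand_yolov8n is a small YOLOv8 detector. Here is a minimal stand-alone sketch of just the detect-and-mask step, assuming the ultralytics package and a local copy of the hand_yolov8n.pt weights (both assumptions; ADetailer normally handles all of this for you):

```python
# Sketch of ADetailer's detection step: find hands, build an inpaint mask.
from ultralytics import YOLO
from PIL import Image, ImageDraw

model = YOLO("hand_yolov8n.pt")              # the detector picked in the menu
image = Image.open("generation.png")         # placeholder file name

results = model(image, conf=0.3)             # conf ~ the "detection threshold"
mask = Image.new("L", image.size, 0)         # black = keep, white = redo
draw = ImageDraw.Draw(mask)

for x1, y1, x2, y2 in results[0].boxes.xyxy.tolist():
    draw.rectangle([x1, y1, x2, y2], fill=255)   # one white box per detected hand

mask.save("hand_mask.png")                   # inpaint the white area with prompt "hand"
```

An inpainting pass over the white region, with a prompt like "hand", is essentially what the extension then automates.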
You can use a SEGS detailer in ComfyUI: if you create a mask around the eye, it will upscale the eye to a higher resolution of your choice, like 512x512, and downscale it back.

tl;dr: just check "enable Adetailer" and generate like usual; it'll work just fine with default settings.

Okay, so it's completely tested out, and the refiner is not used as img2img inside ComfyUI.

It's amazing the quality of images that you can get with simple prompts, even panoramic images.

Currently my "fix" for poor facial details at 1024x1024 resolution (SDXL) is two-cycle ksampling - ending the first sampler at 8/24 steps and… (see the two-pass sketch above).

Installation is complicated and annoying to set up; most people would have to watch YouTube tutorials just to get A1111 installed properly. With ComfyUI you just download the portable zip file, unzip it, and get ComfyUI running instantly; even a kid can get ComfyUI installed. Continued with extensions: got adetailer, controlnet, etc. with literally a click.

Hi, is there a tutorial on how to do a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know how to go from…

I was waiting for this. How to Install ComfyUI: https://youtu.be/ynfNJEtvUtQ / How to Install Manager: https://youtu.be/dyrhPVRsy9w / ComfyUI Impact Pack: https://github.com/ltdrdata/Comf…

For something similar, I generate images with a low number of steps and no adetailer/upscaler/etc., then when I get one I like, I drag it back into the UI to recreate the exact workflow, up the step count, and enable the extra quality features that were in groups set to bypass.

I just want to be able to select the model, the VAE if necessary, a LoRA, and that's it.

If there is only one face in the scene, there is no need for a node workflow.

So when I tried it that way in ComfyUI, it comes out ranging from a little weird (eyes too far apart or sharp lines, not consistent with the overall style) to really bad (extremely deformed; eyes and ears not where there are supposed to be any).

I always wanted to get into ComfyUI due to speed.

This is the setup for the eye detailer: used the Eyes adetailer from Civitai and sam_vit_l_0b3195.pth. Testing the same prompt keeps giving me the same result, except that this time the eye on the right is the one that came out good.

Hello, I have been trying to find a solution to fix multiple faces in a single photo, but I am unable to do so - a scene such as a bar full of people. With A1111 adetailer or ComfyUI FaceDetailer, every time there is more than one person in a photo, the face fixing just adds the same face to every single character. (See the left-to-right prompting tip and the region-filter sketch further down.)

Adetailer is actually doing something now, however minor.

Update: I went ahead and reinstalled SD.Next, since that one is apparently kept more up to date, and so far this has made a difference.

From ChatGPT: "Guide to Enhancing Illustration Details with Noise and Texture in StableDiffusion" - this guide, inspired by 御月望未's tutorial, explores a technique for significantly enhancing the detail and color in illustrations using noise and texture.

Most "ADetailer" model files I have found work when placed in the Ultralytics bbox folder.

See, this is another big problem with IP-Adapter (and me): it's totally unclear what all it's for and what it should be used for.

I tried with "detailed face, realistic eyes, etc." but the results were basically the same.

Change max_size in the FaceDetailer node to 1024 whenever using SDXL models, 512 for SD 1.5. It's getting oversaturated because FaceDetailer essentially just detects where the face is, crops that region along with a mask matching only the face, resizes that region to max_size, runs an img2img at low denoise, then resizes the regenerated face back to the original size and patches it into the image.
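That detect-crop-detail-paste loop is simple enough to sketch outside the node graph. Here is a rough Python version of the same idea using diffusers and ultralytics; the model IDs, the 0.35 strength, and the square resize are illustrative assumptions, not FaceDetailer's actual internals (the real node also feathers the mask so the seam blends):

```python
# Sketch of the FaceDetailer loop: detect face, crop, upscale, img2img at low
# denoise, then scale back down and patch the result into the original image.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from ultralytics import YOLO
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
detector = YOLO("face_yolov8n.pt")

image = Image.open("generation.png").convert("RGB")
max_size = 512                                   # 1024 for SDXL, per the tip above

for x1, y1, x2, y2 in detector(image)[0].boxes.xyxy.tolist():
    box = (int(x1), int(y1), int(x2), int(y2))
    crop = image.crop(box)
    crop_up = crop.resize((max_size, max_size))  # work on the face at high res
    detailed = pipe(prompt="detailed face", image=crop_up,
                    strength=0.35).images[0]     # low denoise keeps the identity
    image.paste(detailed.resize(crop.size), box[:2])

image.save("detailed.png")
```

Keeping the strength low is the whole trick: push it much past 0.4 and, as several comments here note, the likeness starts to drift.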
I wanted to set up a chain of two FaceDetailer instances in my workflow: one for faces, the other for hands.

Is there a way to have it only do the main (largest) face (or, better yet, an arbitrary number), like you can in Adetailer? Any time there's a crowd, it'll try to do them all, and it ends up giving them all the expression of the main subject. (A small filtering sketch for this appears near the end.)

If you're using ComfyUI, you can right-click on a Load Image node and select "Open in MaskEditor" to draw an inpainting mask.

I am using adetailer (max 0.4 denoise) after Roop and CodeFormer, and then SD Ultimate and a normal upscaler with UltraSharp.

I'm beginning to ask myself if that's even possible in ComfyUI.

Or is there a custom node that takes its place? ComfyUI doesn't have "ADetailer"; instead there is something called "FaceDetailer". See "(19) ADetailer for ComfyUI" on /r/StableDiffusion (reddit.com); you can get it from the extension manager.

I've never tried to generate a whole video with denoising at 1; maybe I will give it a try. Put ImageBatchToImageList > Face Detailer > ImageListToImageBatch > Video Combine.

Will add display of other image metadata, like models and seeds, soon; they're already loaded from the file, just not in the UI yet.

There are some distortions, and faces look more proportional but uncanny.

Hey, this is my first ComfyUI workflow - hope you enjoy it! I've never shared a flow before, so if it has problems, please let me know. Help me make it better!

The Adetailer model is for face/hand/person detection. The detection threshold is how sensitive the detection is (higher = stricter = fewer faces detected, so it will ignore a blurred face on a background character); it then masks that part.

Is it true that we will forever be limited by the smaller-size model from the original author? Can someone shed some light on this, please? Thanks a lot.

Following this, I use FaceDetailer to enhance faces (similar to Adetailer for A1111). And also bypass the AnimateDiff Loader model to the original model loader in the To Basic Pipe node, else it will give you noise on the face (the AnimateDiff loader doesn't work on a single image - you need at least 4, maybe - and FaceDetailer can handle only 1).

Hi there. Using face_yolov8n_v2, and that works fine. Use Ultralytics to get a bbox/SEGS and feed that into one of the many Detailer nodes, and you can automate a step to have it work on the face up close. The first pic is without ADetailer and the second is with it.

I tend to like the mediapipe detectors because they're a bit less blunt than the square box selectors on the yolov ones. In other words, their detection maps conform better to faces, especially the mesh one, so it often avoids making changes to hair and background (in that noticeable way you can sometimes see when not using an inpainting model).
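The difference is easy to see in code. Below is a hedged sketch of a mesh-style mask, using the mediapipe and opencv-python packages; treat it as an illustration of the idea, not any specific ComfyUI node's internals:

```python
# Build a face mask from face-mesh landmarks instead of a rectangular bbox:
# the convex hull hugs the face outline, so hair and background stay untouched.
import cv2
import numpy as np
import mediapipe as mp

image = cv2.imread("generation.png")         # placeholder file name
h, w = image.shape[:2]

with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as mesh:
    results = mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

mask = np.zeros((h, w), dtype=np.uint8)
for face in results.multi_face_landmarks or []:
    pts = np.array([(lm.x * w, lm.y * h) for lm in face.landmark], dtype=np.int32)
    cv2.fillConvexPoly(mask, cv2.convexHull(pts), 255)

mask = cv2.GaussianBlur(mask, (31, 31), 0)   # feather the edge so the fix blends in
cv2.imwrite("face_mask.png", mask)
```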
If adetailer is not capable of doing it, what's your suggestion? I'm looking for a way to inpaint everything except certain parts of the image.

I use ADetailer to find and enhance pre-defined features. However, the latest update has a "YOLO World" model, and I realised I don't know how to use the yolov8x and related models other than through the pre-defined models as above.

0.9 seconds. I forgot ComfyUI even exists.

A1111 is REALLY unstable compared to ComfyUI.

Uncharacteristically, it's not as tidy as I'd like, mainly due to a challenge I have with passing the checkpoint/model name through reroute nodes. But it's reasonably clean to be used as a learning…

However, I get subpar results compared to adetailer from the webui. My guess - and it's purely a guess - is that ComfyUI wasn't using the best cross-attention optimization. We know A1111 was using xformers, but we weren't told, as far as I noticed, what ComfyUI was using.

The original author of adetailer was kind enough to merge my changes.

To encode the image, you need to use the "VAE Encode (for inpainting)" node, which is under latent -> inpaint. It works just like the regular VAE encoder, but you need to connect the mask output from Load Image to it.

The default settings for ADetailer are making faces much worse. This wasn't the case before updating to the newest version of A1111.

Hello guys, sorry to ask, but I searched for hours - documentation, the internet, even the Impact Pack source code - and found no way to add a new bbox_detector, or to add something similar to the "adetailer" plugin from automatic1111 or a…

It can help you do similar things that the adetailer extension does in A1111. I do a lot of plain generations; ComfyUI is…

If you want good hands without precise control of the pose: add a LoRA, put "hands" in the negative prompt, and use adetailer for the fine retouch if needed. More flexible.

It is pretty amazing, but man, the documentation could use some TLC, especially on the example front.

Currently I don't think ComfyUI lets you output outside the output folder, but we could add options for choosing subfolders within that, and template-based file names.

Clicking and dragging to move around a large field of settings might make sense for large workflows or complicated setups, but the downside is, obviously, a loss of simple cohesion.

I set up a workflow for a first pass and a highres pass.

Also, take out all the "realistic eyes" stuff in your positive/negative prompts; that voodoo does nothing for better eyes. Good eyes come from good resolution; to increase face resolution during txt2img, you use adetailer.

OP, you can greatly improve your results by generating and then using aDetailer on your upscale, and instead of using a single aDetailer prompt, you can choose the option to prompt faces individually from left to right. That way you can address each one respectively.
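A sketch of that left-to-right trick: sort the detected boxes by their x position and pair each with its own prompt, then run each crop through the same detail pass as in the earlier sketch. The prompts and file names here are placeholders:

```python
# Sort face detections left-to-right and assign one prompt per face.
from ultralytics import YOLO
from PIL import Image

detector = YOLO("face_yolov8n.pt")
image = Image.open("generation.png").convert("RGB")

boxes = detector(image)[0].boxes.xyxy.tolist()
boxes.sort(key=lambda b: b[0])                   # leftmost face first

prompts = ["freckled woman, detailed skin", "bearded man, detailed skin"]
for (x1, y1, x2, y2), prompt in zip(boxes, prompts):
    crop = image.crop((int(x1), int(y1), int(x2), int(y2)))
    print(prompt, crop.size)                     # stand-in for the img2img detail pass
```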
That said, I'm looking for a front-end face swap - something that will inject the face into the mix at the point of the KSampler - so if I prompt for something like freckles, it won't get lost in the swap/upscale, but I've still got my likeness.

My main source is Civitai, because it's honestly…

Hello, I've been using Stable Diffusion for a while now, and recently I've been trying to migrate to ComfyUI, but I'm struggling to get good results in the adetailer step. I want to install 'adetailer' and 'dddetailer'; the installation instructions say they go into the 'extensions' folder, but there is none in ComfyUI. Now, unfortunately, I couldn't find anything helpful or even an answer via Google/YouTube, nor here with the sub's search function.

If you want the ComfyUI workflow, let me know. Both of my images have the flow embedded in the image, so you can simply drag and drop the image into ComfyUI and it should open up the flow, but I've also included the JSON in a zip file.

Man, you're damn right! I would never be able to do this in A1111; I would be stuck in A1111's predetermined flow order.

Installed it a few days ago; the speed is amazing, but I can barely do anything. I managed to find a simple SDXL workflow, but nothing else.

Adetailer doesn't require an inpainting checkpoint or a ControlNet etc.; simpler is better.

Just tried it again, and it worked with an image I generated in A1111 earlier today. It picked up the LoRAs, prompt, seed, etc. It did not pick up the ADetailer settings (expected, though there are nodes out there that can accomplish the same things).

To clarify, there is a script in Automatic1111 -> scripts -> x/y/z plot that promises to let you test each ADetailer model, the same as you would a regular checkpoint, CFG scale, or number of steps. While I can select that script and plug in the different ADetailer models, it does not seem to have any effect.

As the title suggests, I'm using ADetailer for Comfy (the Impact Pack), and it works well. The problem is I'm using a LoRA to style the face after a specific person, and the FaceDetailer node makes it clearly "better" but kind of destroys the similarity and facial traits. Despite a relatively low 0.2 noise value, it changed quite a bit of the face. Any way to preserve the "LoRA effect" and still fix imperfect faces?

One thing about human faces is that they are all unique. While that's true, this is a different approach.

Hello cool Comfy people! Happy new year.

I'm new to all of this, and I've been looking online for bbox or seg models that are not in the models list from the ComfyUI Manager.

Hell, it probably works better with mcmonkey's implementation now that I understand the ins and outs.

I wish there was some way to force adetailer to look for its subjects only within a specific region; that could help alleviate some of this.
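There is no such switch in the stock nodes as far as this thread establishes, but the filtering itself is a few lines of box arithmetic. Below is a sketch that keeps only detections whose center falls inside a chosen region, and optionally only the N largest faces (which also answers the "main face only" question earlier); the coordinates are made-up examples:

```python
# Filter detections: keep boxes centered inside a region, largest faces first.
def filter_detections(boxes, region, keep_largest=None):
    """boxes: [(x1, y1, x2, y2), ...]; region: (x1, y1, x2, y2)."""
    rx1, ry1, rx2, ry2 = region
    inside = [b for b in boxes
              if rx1 <= (b[0] + b[2]) / 2 <= rx2
              and ry1 <= (b[1] + b[3]) / 2 <= ry2]
    # sort by area so "the main face" survives the cut
    inside.sort(key=lambda b: (b[2] - b[0]) * (b[3] - b[1]), reverse=True)
    return inside[:keep_largest] if keep_largest else inside

boxes = [(50, 80, 200, 260), (600, 90, 700, 200), (820, 400, 870, 460)]
print(filter_detections(boxes, region=(0, 0, 768, 512), keep_largest=1))
# -> [(50, 80, 200, 260)]
```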
It is what "only masked" inpainting does automatically.

And the new interface is also an improvement, as it's cleaner and tighter.

It's no longer maintained - do you have any recommendation for a custom node for ComfyUI (with the same functionality as aDetailer in A1111) besides FaceDetailer? Someone pointed me toward the ComfyUI-Impact-Pack, but it's too much for me; I can't quite get it right, especially for SDXL.

But the problem I have with ComfyUI is unfortunately not how long it takes to figure out; I just find it clunky.

I got a nice tutorial from here; it seems to work.