ComfyUI Cut By Mask not working. Welcome to the unofficial ComfyUI subreddit.


Hello u/Ferniclestix, great tutorials; I've watched most of them, really helpful for learning the ComfyUI basics.

After the first pass, toss the image into a Preview Bridge, mask the hand, and adjust the CLIP prompt to emphasize the hand with negatives. Not sure how you'd do that with Ultralytics, but I think it could be done with segmenting. Same problem here. Thanks to u/Barbagiallo; this video explains the parameters of "MASK to SEGS".

It's like this: write, gen, switch, mask, gen, switch, mask, gen, switch, mask, upscale.

In ComfyUI, masks are essential, especially in the context of inpainting. It is useful for creating square crops when working with ControlNets, IP-Adapters, etc. that need 1:1-ratio input, like 1024x1024 for example.

So, a friend and I are running the exact same workflow, with the exact same input images, and their Mask To SEGS is working, while mine is doing this? A hundred different little pixel grabs.

Taking the output of a KSampler and running it through a latent upscaling node results in major artifacts (lots of horizontal and …). I'm learning how to do inpainting (ComfyUI) and I'm doing multiple passes (copy-paste layer on top). I am not sure if I should install a custom node. I am not a programmer, but I don't think that's hard to do.

Leveraging the powerful linking capabilities of NDI, you can access NDI video stream frames and send images generated by the model to NDI video streams.

You can use RGB attention masks, or you can just make a negative of your mask and use that as a second mask; that is probably the most straightforward way.

force_resize_width, force_resize_height: resize dimensions.

Tips: use with Mask To Region for precise inpainting targets. Then just paste this over your image A using the mask.
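Masquerade's Cut By Mask crops an image to the bounding box of its mask. A minimal NumPy sketch of that idea; the function name is illustrative and this is not the node's actual implementation:

```python
import numpy as np

def cut_by_mask(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Crop `image` (H, W, C) to the bounding box of `mask` (H, W),
    zeroing out any pixels inside the box that the mask excludes."""
    ys, xs = np.nonzero(mask > 0.5)
    if len(ys) == 0:  # empty mask: return a 1x1 black image
        return np.zeros((1, 1, image.shape[2]), dtype=image.dtype)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    return image[y0:y1, x0:x1] * mask[y0:y1, x0:x1, None]

img = np.ones((8, 8, 3), dtype=np.float32)
m = np.zeros((8, 8), dtype=np.float32)
m[2:5, 3:7] = 1.0
print(cut_by_mask(img, m).shape)  # (3, 4, 3)
```

The force_resize_width/force_resize_height parameters mentioned above would then rescale this crop to fixed dimensions, which is how you get the 1:1 crops for ControlNet or IP-Adapter inputs.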
Put ImageBatchToImageList > Face Detailer > ImageListToImageBatch > Video Combine.

This is a low-dependency node pack primarily dealing with masks. Thank you so much! I had been following this post here, which wasn't doing that.

White is the masked area, black is not masked, and grey is everything in between (just like typical masks in PS and co.). A mask is typically a black-and-white or grayscale image used to select specific areas within another image.

A series of tutorials about fundamental ComfyUI skills: this tutorial covers masking, inpainting, and image manipulation.

height: the height of the mask.
op: the operation to perform.
value: the value to fill the mask with.

Load your image to be inpainted into the mask node, then right-click on it and go to edit. But for some reason this project does not mask the way the others have.

Cheatsheet for ComfyUI Mask Editor: https://github.…

Use SetLatentNoiseMask instead of that node.

When I pipe the batch of images into a Paste By Mask node with randomized mask locations, it creates 3 different versions of the base image, with 1 sticker applied to each version. Instead, I would like it to create a single image with all 3 stickers stacked on top of it, by looping the output image back so the next batched sticker is overlaid onto it. And I'm not talking about the mouse not being able to 'mask' it there.

max_weights_gcuts: the maximum weight of G cuts; range from 1 to …

Cut By Mask.

Nodes used: Efficiency Nodes for ComfyUI 2.0+ - Efficient Loader (5), KSampler (Efficient) (5); Masquerade Nodes - Cut By Mask (3), Paste By …

mask_for_crop: mask of the image; it will automatically be cut according to the mask range.

For example: I have an animated 3-color logo.

Image load -> right click -> mask editor: draw some mask and save, and the mask shows in the preview window. This allows you to send the latent/image to "image receiver ID1".

Crop Mask Documentation.
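The "white = masked, black = not masked, grey = in between" convention above boils down to per-pixel multiplication. A hedged NumPy sketch (illustrative, not any node's actual code):

```python
import numpy as np

def apply_mask(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Keep pixels where the mask is 1.0 (white), drop them where it is
    0.0 (black), and attenuate proportionally for grey values."""
    return image * mask[..., None]  # broadcast (H, W) mask over channels

img = np.full((2, 2, 3), 200.0)
mask = np.array([[1.0, 0.0],
                 [0.5, 1.0]])
out = apply_mask(img, mask)
print(out[0, 0, 0], out[0, 1, 0], out[1, 0, 0])  # 200.0 0.0 100.0
```

The same math is why a grey (0.5) edge on a mask produces a soft, half-blended seam instead of a hard cut.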
While working on my inpainting skills with ComfyUI, I read up on the documentation for the node "VAE Encode (for inpainting)".

Nodes used: Efficiency Nodes for ComfyUI 2.0+ - Apply ControlNet Stack (1), Control Net Stacker (1), KSampler (Efficient) (1), HighRes-Fix Script (1); Masquerade Nodes - Paste By Mask (1), Cut By Mask (3), Mask To Region (1), Mask Morphology (1), Image To Mask (1); ReActor Node for ComfyUI - ReActorFaceSwap (1); WAS Node Suite - Images to RGB (1).

Made with 💚 by the CozyMantis squad. Runs on CPU and CUDA.

width: the width of the mask.

While I was kicking around in LtDrData's documentation today, I noticed the ComfyUI Workflow Component. When using the Impact Pack's detailer, you can mask the area to inpaint and use MaskToSEGS with DetailerForEach to crop only the masked area, plus the surrounding area specified by crop_factor, for inpainting.

I also tried moving them to a different folder in A1111 (models\embeddings), and changing the names of the textual inversions (and using that name in the prompt).

Add the 'Mask Bounding Box' plugin, attach a mask and image, and output the resulting bounding box.

Created by Ryan Dickinson. Features: depth map saving, Open Pose saving, animal pose saving, segmentation mask saving, depth mask saving (with or without segmentation mix). 101: starting from scratch with a better interface in mind.

This node is particularly useful for creating solid or semi-transparent masks that can be used in various image processing tasks.

Range of how many N cuts you may want; set both to 0 to disable it.

Easy to do in Photoshop. Nope, didn't work reinstalling it. …5 models, sdxlfacedetail workflow. Would you please show how I can do this?

Also, I have noticed, even though one might not choose the mask editor for very precise work, that my Wacom tablet does not work with the editor.
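The crop_factor idea from MaskToSEGS above is just a bounding box expanded around its center. A sketch under stated assumptions (the function name and clamping behavior are mine, not the Impact Pack's actual code):

```python
import numpy as np

def crop_region(mask: np.ndarray, crop_factor: float = 1.5):
    """Bounding box of the mask, expanded by `crop_factor` around its
    center and clamped to the image, similar in spirit to what
    crop_factor controls in MaskToSEGS."""
    h, w = mask.shape
    ys, xs = np.nonzero(mask > 0.5)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    cy, cx = (y0 + y1) / 2, (x0 + x1) / 2
    half_h = (y1 - y0) * crop_factor / 2
    half_w = (x1 - x0) * crop_factor / 2
    return (max(0, int(cy - half_h)), min(h, int(cy + half_h)),
            max(0, int(cx - half_w)), min(w, int(cx + half_w)))

m = np.zeros((100, 100))
m[40:60, 40:60] = 1
print(crop_region(m, 2.0))  # (30, 70, 30, 70)
```

Cropping to this expanded region before detailing gives the sampler surrounding context, which is why a larger crop_factor usually blends the inpainted patch better.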
intersection (min) - the minimum value between the two masks.

Solid Mask: the Solid Mask node can be used to create a solid mask containing a single value.

FaceDetailer is now working again with an update of ComfyUI and all custom nodes.

"Apply that mask to the ControlNet image with something like Cut/Paste By Mask, or whatever method you prefer, to blank out the parts you don't want."

I have a workflow that does this with 4 images and one background, so I can use 4 different LoRAs in one image.

Inputs: image: the image or mask to cut.

I follow the video guide to right-click on the Load Image node. Just take the cropped part from the mask and literally superimpose it. Just earlier today it was working fine.

Now it's possible to invert the mask via toggle. Could you please modify the script so that it would provide a secondary output as a mask (so the first image1 will be ignored)? Very helpful node.

class CutByMask: """Cuts the image to the bounding box of the mask."""

Nothing seems to work! 😔 To ComfyUI they don't seem to exist. While it may not be very intuitive, the simplest method is to use the ImageCompositeMasked node that comes as a default. Is it possible using the WAS pack? I still struggle to understand the application of all the nodes in there.

got prompt
Prompt executed in 0.29 seconds.

Prepping all the prompts before anything. I'm working on the future AP Workflow 6, and I'm trying something that is a bit challenging. Just use your mask as a new image and make an image from it (independently of image A).

fill_mask_holes: whether to fully fill any holes (small or large) in the mask, that is, mark fully enclosed areas as part of the mask.
invert_mask: whether to reverse the mask.

A lot of people are just discovering this technology and want to show off what they created. Can be combined with CLIPSeg to replace any aspect of an SDXL image with an SD1.5 output.

Paste By Mask. Closed: zhangfeifei0907 opened this issue May 11, 2024 · 2 comments. The new mask editor looks good.
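The mask-combine operations that keep appearing in these node docs (intersection, union, multiply, difference) are simple element-wise math. A NumPy sketch of the usual definitions:

```python
import numpy as np

a = np.array([[1.0, 1.0], [0.0, 0.5]])
b = np.array([[1.0, 0.0], [0.0, 1.0]])

intersection = np.minimum(a, b)      # min of the two masks
union        = np.maximum(a, b)      # max of the two masks
multiply     = a * b                 # product of the two masks
difference   = np.clip(a - b, 0, 1)  # white in a but black in b

print(intersection[0].tolist())  # [1.0, 0.0]
print(difference[0].tolist())    # [0.0, 1.0]
```

Making a "negative" of a mask to use as a second mask, as suggested earlier, is just `1.0 - mask` followed by one of these combines.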
There might be a better way, but it does work if you really need it.

value: the value to fill the mask with. The "frame" (the border margins of the image) has its size defined via the pixels …

That way you just have to generate, load the result in inpainting, load the mask, switch LoRA, and regenerate. Many parameters are commonly used in other nodes as well.

It allows users to define the region of interest by specifying coordinates and dimensions, effectively extracting a portion of the mask for further processing or analysis. (erosDiffusion / ComfyUI-enricos-nodes.) Outputs: the cut image or mask. Those elements are isolated as masks. Issue: MASK - MASK not work #593.

This would reduce the number of noodles outputting from a VAE Decode when you're going to send the image on to somewhere else, especially if you just need to do a quick mask.

With `ImageCompositeMasked`, it is possible to composite only the masked area of the source image onto the destination image. But currently the mask just sits on top of the footage and does …

You have a few options, but for this image I'd go with "Mask from Color," though you may need to change your colors depending on their values.

Closing the issue :) This is a node pack for ComfyUI, primarily dealing with masks. Additionally, Bitwise (mask …

Outline Mask: Unfortunately, it doesn't work well, because apparently you can't just inpaint a mask; by default, you also end up painting the area around it, so the subject still loses detail.

IPAdapter: If you have to regenerate the subject or the background from scratch, it …

A powerful set of mask-related nodes for ComfyUI. The `mask_mapping_optional` input can be provided from a 'Separate Mask Components' node to cut multiple pieces out of a single image in a batch.
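Compositing only the masked area of a source onto a destination, as `ImageCompositeMasked` is described above, is a per-pixel linear blend. A minimal sketch of that math (illustrative, not ComfyUI's actual implementation):

```python
import numpy as np

def composite_masked(destination: np.ndarray, source: np.ndarray,
                     mask: np.ndarray) -> np.ndarray:
    """Where the mask is white, take the source pixel; where it is
    black, keep the destination; grey values blend proportionally."""
    m = mask[..., None]  # (H, W) -> (H, W, 1) to broadcast over channels
    return destination * (1.0 - m) + source * m

dst = np.zeros((2, 2, 3))
src = np.ones((2, 2, 3))
mask = np.array([[1.0, 0.0],
                 [0.0, 1.0]])
out = composite_masked(dst, src, mask)
print(out[0, 0, 0], out[0, 1, 0])  # 1.0 0.0
```

If the mask "just sits on top of the footage," the usual culprit is feeding the mask in as an image layer rather than as the blend weight in this formula.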
Or a separate mask for the space around the mask. Either positive masking (mask out what we want) or negative masking (mask out what we don't want) would work for us, but we can't figure out the workflow for it. Anyway, I figured someone out there has already accomplished this, so we thought we'd ask.

Fast, VRAM-light ComfyUI nodes to generate masks for specific body parts and clothes or fashion items.

"VAE Encode (for inpainting)" should be used with a denoise of 100%; it's for true inpainting and is best used with inpaint models, but it will work with all models. Please keep posted images SFW.

detect: detection method. min_bounding_rect is the minimum bounding rectangle of the block shape, max_inscribed_rect is the maximum inscribed rectangle of the block shape, and mask-area is the effective area for masking pixels.
blur_mask_pixels: grows the mask and blurs it by the specified amount of pixels.

I'm noticing that with every pass, the image (outside the mask!) gets worse. If Convert Image to Mask is working correctly, then the mask should be correct for this. Below is a source image and …

Sounds like you're trying to render a masked area on top of the whole image while it also attempts to render the whole image.

difference - the pixels that are white in the first mask but black in the second.

The tools are hidden. It moves the image around. However, when opening the mask editor, the mask displays correctly inside the editor, and the images with the applied masks are correctly saved in the "input" folder.

Mask painted with Image Receiver; mask out from there to Set Latent Noise Mask. Unfortunately, I have not found any "save to node" or whatever button, so basically I got stuck in there and had no other choice than to shut down. ComfyUI front-end version: 1.…
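A grow-then-blur option like blur_mask_pixels above can be sketched in plain NumPy: dilate by taking the maximum over shifted copies, then soften the edge with a box blur. This is an illustrative approximation; real nodes typically use a proper Gaussian blur:

```python
import numpy as np

def grow_and_blur(mask: np.ndarray, pixels: int) -> np.ndarray:
    """Grow a binary mask by `pixels`, then average over a box window
    of the same radius to soften the edge. (np.roll wraps at the
    borders, which is fine for this small sketch.)"""
    grown = mask.copy()
    for dy in range(-pixels, pixels + 1):
        for dx in range(-pixels, pixels + 1):
            shifted = np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
            grown = np.maximum(grown, shifted)
    k = 2 * pixels + 1
    padded = np.pad(grown, pixels, mode="edge")
    out = np.zeros_like(grown, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + grown.shape[0], dx:dx + grown.shape[1]]
    return out / (k * k)

m = np.zeros((9, 9))
m[4, 4] = 1.0
soft = grow_and_blur(m, 1)
print(soft[4, 4])  # center stays fully masked: 1.0
```

The soft grey ring this produces is what lets the inpainted region fade into its surroundings instead of leaving a hard seam.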
I believe you need the bitwise operations, or use the mask invert + Cut By Mask and Paste By Mask.

multiply - the result of multiplying the two masks together.

The ComfyUI Mask Bounding Box Plugin provides functionalities for selecting a specific-size mask from an image. Also, if this is new and exciting to you, feel free to post, but don't spam all your work.

image1 - the first mask to use.

Use cv2.bitwise_and to mask an image with a binary mask.

I'm trying the conditioning mask node, but it doesn't appear to be working.

Expected Behavior. Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment. Then, I turn those …

A very, very basic demo of how to set up a minimal inpainting (masking) workflow in ComfyUI using one model (DreamShaperXL) and 9 standard nodes.

height: the height of the area in pixels.

Please share your tips, tricks, and workflows for using this software to create your AI art. If you use 0, it will corrupt the image and you will get mixed LoRAs.
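The cv2.bitwise_and tip above keeps pixels wherever the mask is nonzero; `cv2.bitwise_and(img, img, mask=mask)` is the usual call. Here is the equivalent expressed in plain NumPy so it runs without OpenCV:

```python
import numpy as np

def mask_image(image: np.ndarray, binary_mask: np.ndarray) -> np.ndarray:
    """NumPy equivalent of cv2.bitwise_and(image, image, mask=mask):
    keep pixels where the mask is nonzero, zero out the rest."""
    return np.where(binary_mask[..., None] > 0, image, 0)

img = np.full((2, 2, 3), 255, dtype=np.uint8)
mask = np.array([[255, 0],
                 [0, 255]], dtype=np.uint8)
out = mask_image(img, mask)
print(out[0, 0].tolist(), out[0, 1].tolist())  # [255, 255, 255] [0, 0, 0]
```

Note this is a hard binary cut; unlike the grayscale multiplication shown earlier, grey mask values are treated as fully "on."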
I have had my suspicions that some of the mask-generating nodes might not be generating valid masks, but the Convert Mask to Image node is liberal enough to accept masks that other nodes might not. And maybe have an option to choose the channel: black, white, alpha, and disable. This can easily be …

Build 433 2bfcbf1 x64: the KSampler's output (with BrushNet) is fried, as if the entire image got passed through a low denoise, and not just the mask.

Inputs: image_base: the base …

What I've managed so far was using Masquerade nodes, which contain a Mask To Region node to get the region crop around the mask, then a Crop By Region node; work on the region, then a Paste By Region node to paste it back. If you set guide_size to a low value and force_inpaint to true, inpainting is done at the original size. Annoying for ComfyUI.

Mask Floor Region: return the lowermost pixel values as white (255).
Mask Threshold Region: apply a thresholded image between a black value and a white value.

It would be extremely helpful if we had a checkbox that cuts off everything before the last slash, so that only realistic_vision_v5.…

I'm using Firefox, so perhaps this is a Firefox bug? Are there madlads out here working on a LoRA mask extension for ComfyUI? That sort of extension exists for Auto1111 (simply called LoRA Mask), and it is the one last thing I'm missing between the two UIs. Thanks in advance. Let's say with Realistic Vision 5, if I don't use the …

OK, so I tried all of these solutions, and they work great for images, but I need something that works well for video.
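A threshold-region operation like the one described above keeps only pixels between a black value and a white value. A rough NumPy sketch (the function name is mine, not the WAS node's code):

```python
import numpy as np

def threshold_region(mask: np.ndarray, black: int, white: int) -> np.ndarray:
    """Binarize: pixels inside [black, white] become 255, others 0."""
    keep = (mask >= black) & (mask <= white)
    return np.where(keep, 255, 0).astype(np.uint8)

m = np.array([[0, 100],
              [180, 255]], dtype=np.uint8)
print(threshold_region(m, 90, 200).tolist())  # [[0, 255], [255, 0]]
```

This is handy for salvaging a noisy generated mask: clamp away the near-black speckle and the blown-out highlights, keeping only the mid-range region you actually painted.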
I have no idea why, but normally everything around the mask would be cut out, only leaving the mask visible. Maybe a simple node with an image input and a mask output would do it. https://github.com/ltdrdata/ComfyUI-Impact-Pack

Mask Dilate Region: dilate the boundaries of a mask.
Mask Fill Region: fill holes within the mask's regions.
Mask Ceiling Region: return only white pixels within an offset range.

A beginner video to show the basics and some of the possibilities of masking in ComfyUI.

You can use 0.1, 0.1 for 2 LoRAs, like the author explains in the notes.

….exe -V. Download the prebuilt InsightFace package.

If I inpaint the mask and then invert it, it avoids that area, but the pesky VAE Decode wrecks the details of the masked area.

usage: the tresh input should be a gray image, possibly a mask in black and white, but not necessarily (read: thresholds). Adjust the precision and normalization settings.

More minor fixes. Beta 3: fixed a lot with the first masking and the proper background-removal skip; it really frustrated me, as nothing I …

Senders save their input in a temporary location, so you do not need to feed them new data every gen. y: the y coordinate of the area in pixels. It uses YoloV8 (Ultralytics) with the 80 COCO object classes for detection, plus a specific feature for face detection.

Nodes used: ComfyUI Impact Pack - UltralyticsDetectorProvider (1), PreviewBridge (10), FaceDetailer (1); Efficiency Nodes for ComfyUI Version 2.…
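Filling holes, as in a Mask Fill Region / fill_mask_holes style option, can be done by flood-filling the background from the image border: any black region the flood cannot reach is fully enclosed, so it gets marked as mask. A sketch of that approach (not any node's actual code):

```python
import numpy as np
from collections import deque

def fill_mask_holes(mask: np.ndarray) -> np.ndarray:
    """Mark fully enclosed black regions as part of the mask by
    flood-filling zeros from the border (4-connected)."""
    h, w = mask.shape
    outside = np.zeros((h, w), dtype=bool)
    queue = deque()
    for y in range(h):  # seed the flood with every black border pixel
        for x in range(w):
            if (y in (0, h - 1) or x in (0, w - 1)) and mask[y, x] == 0:
                outside[y, x] = True
                queue.append((y, x))
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w \
                    and not outside[ny, nx] and mask[ny, nx] == 0:
                outside[ny, nx] = True
                queue.append((ny, nx))
    return np.where(outside, 0, 1).astype(mask.dtype)

m = np.zeros((5, 5), dtype=np.uint8)
m[1:4, 1:4] = 1
m[2, 2] = 0  # a hole inside the blob
print(fill_mask_holes(m)[2, 2])  # 1
```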
You can still use the custom node manager to install whatever nodes you want from the JSON file of whatever image, but when you restart the app, delete the custom node manager files, and ComfyUI should work fine again; you can then reuse whatever JSON.

The img2img pipeline has an image-preprocess group that can add noise and gradient, and cut out a subject for various types of inpainting.

I went into Automatic1111; it crashed while I was switching models (safetensors). Thanks. I'm trying to use the FaceDetailer node from the ComfyUI Impact Pack. Mask editor already works on the Save Image node. My sample pipeline has three sample steps, with options to persist ControlNet and mask, regional prompting, and upscaling. I've rebooted. Discord: join the community, friendly …

Basically, I'd like to find a face, or an object, using CLIPSeg masking, then put a boundary around that mask and copy only that part of the image/latent to be pasted into another image/latent.

Masks can be used for a variety of tasks, including cutting out one image from another, selecting areas for specific processes, and more.

…and I'm trying something that is a bit challenging. I tested and found that VAE Encoding is adding artifacts. (Problem solved.) I am a beginner at learning ComfyUI. The implementation of the mask was not correct.

mask_mapping_optional: mapping for variable masks.

For example, if you do pure red at 255 0 0, then it may not like it if you go with 100 0 0; instead you'll need to mix in some other color a tad to make it work.

Create Simple Mask: the CreateSimpleMask node is designed to generate a basic mask with specified dimensions and intensity.

Don't use VAE Encode (for inpaint); that is used to apply denoise at 1.0. It turned out that the problem is in the version: the mask is not generated on Adobe Photoshop Version 24.0 20240312. If you also could get rid of the …
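A "simple mask with specified dimensions and intensity," as Create Simple Mask (or the core Solid Mask node) is described, is just a constant array. An illustrative sketch; ComfyUI actually stores masks as torch tensors, but the idea is identical:

```python
import numpy as np

def create_simple_mask(width: int, height: int,
                       value: float = 1.0) -> np.ndarray:
    """Generate a solid (H, W) mask filled with a single intensity."""
    return np.full((height, width), value, dtype=np.float32)

m = create_simple_mask(4, 2, 0.5)
print(m.shape, float(m[0, 0]))  # (2, 4) 0.5
```

A 0.5-valued solid mask like this is what gives you a uniformly semi-transparent blend when fed into a composite node.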
Is the bottom procedure right? The inpainted result seems unchanged.

Created by Ryan Dickinson: Beta 3 (extended). Replaced the color of the padding with black, to stop COCO from detecting nothing.

A ComfyUI node documentation plugin; enjoy!

invert_mask: whether to fully invert the mask, that is, only keep what was marked instead of removing what was marked.

I need to create an animated mask for each color.

Crop Mask Documentation. Class name: CropMask. Category: mask. Output node: False. The CropMask node is designed for cropping a specified area from a given mask. Hi, guys. The mask to be cropped.

ComfyUI does not use the step number to determine whether to apply conds. To mask out just the green apple, use [CUT:green apple, red apple:green_apple], which will result in a masked prompt of + +. Note that the optimization only works if the text input to the lazy nodes is a constant.

Now ComfyUI doesn't work. Deleted the folder and reinstalled via git clone, but it's still not working.

For anyone wondering, as I do not see this issue: either the image doesn't show up in the mask editor (it's all a black canvas the size of the image), or it does, but the save-to-node button does nothing upon closing the mask editor.

Hello, can anyone help me with an alternative to the Cut By Mask node that cuts images according to a mask without changing the resolution of the image?
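The CropMask parameters documented above (x, y, width, height) map directly onto array slicing. A minimal sketch of what those inputs mean (illustrative, not ComfyUI's actual implementation):

```python
import numpy as np

def crop_mask(mask: np.ndarray, x: int, y: int,
              width: int, height: int) -> np.ndarray:
    """Crop a (H, W) mask to the rectangle at (x, y) of the given size."""
    return mask[y:y + height, x:x + width]

m = np.zeros((64, 64), dtype=np.float32)
m[8:24, 8:24] = 1.0
cropped = crop_mask(m, x=8, y=8, width=16, height=16)
print(cropped.shape, float(cropped.min()))  # (16, 16) 1.0
```

This also answers the "cut without changing resolution" question conceptually: cropping changes the canvas size, whereas multiplying by the mask (as in the earlier apply-mask sketch) keeps the original resolution and just blacks out the rest.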
After drawing the mask and saving it to the node, the mask does "not" display correctly within the node.

A real-time input/output node for ComfyUI via NDI.

Yeah, PS will work fine: just cut out the image to transparent where you want to inpaint, and load it as a separate image as the mask.

I use the Object Swapper function to detect certain elements of a source image. In this scenario I'd use 100 25 0 or 100 0 25. But standard A1111 inpaint works mostly the same as this ComfyUI example you provided. Any other ideas? I figured this should be easy.

Right-clicking the Load Image node and choosing Open MaskEditor should open a draw window where I can draw anything. It includes an option called "grow_mask_by", which is described as follows in the ComfyUI documentation. Are you uploading the mask like that? The mask needs to be black and white.

x: the x coordinate of the area in pixels.

There's a node called GroundingDINO Segment Anything (or something very similar), and it lets you do masking by prompt: you could prompt for "female face" and it will mask all female faces detected.

New feature: angle output. The angle of rotation is now accessible in the output (and soon the bounding box x, y, width, and height). The mask filled with a single value.

However, I found that there is no Open in MaskEditor button in my node. So far, Bitwise (mask + mask) has only 2 masks, and I use auto-detect, so the mask can run from 5 to 10 masks.

width: the width of the area in pixels.

(ComfyUI Portable) From the root folder, check the version of Python: run CMD and type python_embeded\python.…
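Several snippets here mention inverting a mask (the invert toggle, negative masking). For a 0..1 grayscale mask that is a one-liner, shown as a sketch:

```python
import numpy as np

def invert_mask(mask: np.ndarray) -> np.ndarray:
    """Invert a 0..1 mask, so masking-out-what-you-want becomes
    masking-out-what-you-don't (negative masking)."""
    return 1.0 - mask

m = np.array([[1.0, 0.25],
              [0.0, 1.0]])
print(invert_mask(m).tolist())  # [[0.0, 0.75], [1.0, 0.0]]
```

Grey values invert proportionally, so a soft-edged mask stays soft-edged after inversion.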
Build 500 3fc8e4c x64: Send to Layers also does not work on this version. Currently I am using Adobe Photoshop Version 25.…

Remember that the model will try to blur everything together (styles and colors), but if you use a generic checkpoint you'll be able to merge any style together (e.g. photorealistic and cartoonish) with incredibly low effort.

SEGS -> SEGS to MASK (Combined) -> Crop Mask (to the right size) -> Apply … I need to combine 4-5 masks into 1 big mask for inpainting. (Enable --disable-all-custom-nodes.)

If you want to add more mask area: image load -> right click -> mask editor. There are nodes set up in the workflow below.

It's weird, cause it is working fine for my friend. If force_resize_width or force_resize_height are provided, the image will be resized to those dimensions.

There's something I don't get about inpainting in ComfyUI: why do the inpainting models behave so differently than in A1111? Get the Segment Anything custom node pack if you don't already have it. (nullquant / ComfyUI-BrushNet.) The old mask was clean.

min_colums, max_colums: range of how many G cuts you may want; set both to 0 to disable it.

Add a 'Load Mask' node and a VAE Encode (for inpainting) node, and plug the mask into that. If that works out, you can start re-enabling your custom nodes until you find the bad one, or hopefully find out the problem resolved itself.

Issue: good image cut + good mask cut = bad inpaint #22, opened Mar 21, 2024 by dangerweenie.

…safetensors, that would be a bonus. If you want to do img2img but on a masked part of the image, use … Hi, is there an analogous workflow/custom nodes for WebUI's "Masked Only" inpainting option in ComfyUI?
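Combining 4-5 masks into one big mask, as asked above, is a repeated union; chaining SEGS to MASK (Combined) or Bitwise (mask + mask) nodes amounts to a pixelwise maximum. A hedged sketch of that fold:

```python
import numpy as np

def combine_masks(masks) -> np.ndarray:
    """Union a list of masks into one big mask via pixelwise maximum."""
    combined = masks[0]
    for m in masks[1:]:
        combined = np.maximum(combined, m)
    return combined

a = np.array([[1.0, 0.0], [0.0, 0.0]])
b = np.array([[0.0, 0.0], [0.0, 1.0]])
c = np.array([[0.0, 1.0], [0.0, 0.0]])
print(combine_masks([a, b, c]).tolist())  # [[1.0, 1.0], [0.0, 1.0]]
```

Using max rather than addition keeps overlapping regions clamped at 1.0 instead of blowing past white.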
I am trying to experiment with AnimateDiff + inpainting, but inpainting in ComfyUI always generates on a subset of pixels of my original image, so … (193 votes, 43 comments.)

Also, if you want better-quality inpainting, I … Can someone tell me what I'm overlooking, and why I can't make this work?

Issue: open; Rawshanchik opened this issue Nov 18. There should be a (true/false) toggle to actually save the image.

I tried Blend Image, but that was a mess. EDIT: SOLVED. Using Masquerade Nodes, I applied a "Cut By Mask" node to my masked image, along with a "Convert Mask to Image" node. The cropped mask.

Suuuuup :D So, with Set Latent Noise Mask, it is trying to turn that blue/white sky into a spaceship; this may not be enough for it, and a higher denoise value is more likely to work in this instance. Also, if you want to creatively inpaint, then inpainting models are not as good, since they want to use what exists to make an image more than a normal model does.

Update ComfyUI and all your custom nodes first, and if the issue remains, disable all custom nodes except for the ComfyUI Manager, then test a vanilla default workflow.

The first runs from 0 -> 20 steps, the second from 10 -> 20 steps.

Issue: Cut by mask output #79. outputs: MASK. OK, solved it.

Both did not solve this; all is separated now, and SD1.5 has its own CLIP neg and positive that go to the pipe; it still won't upscale the face with SD1.5.

Here's an example: input image (left), mask (right).
Added cropping to the end, and added a load-video group just before the upscaling, to allow the reloading of previous videos in full.

I was going to make a post regarding your tutorial, ComfyUI Fundamentals - Masking - Inpainting. My postprocess includes a detailer sample stage and another big upscale.

SECOND UPDATE - HOLY COW I LOVE COMFYUI EDITION: Look at that beauty! Spaghetti no more.

Although the masks are displayed and saved properly, they still do not appear on the edited image in the Load Image node.

The author recommends using Impact-Pack instead (unless you specifically have trouble installing dependencies). I've updated ComfyUI and all custom nodes, and even after rebooting, it's still the same.

union (max) - the maximum value between the two masks.

https://github.com/comfyanonymous/ComfyUI

The textual inversions are WORKING when I simply leave them in the folder where they are (and have always been, back when they used to be recognized and working properly in the UI), and I can simply include them in my prompt.

We use CLIPSeg to mask the 'horse' in each frame separately. We use a mask subtract to remove the masked area #86 from #111, then we blend the resulting #110 with #86 to get #113; this creates a masked area with highlights on all … Inpainting is kinda …
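The subtract-then-blend step described above can be sketched in NumPy. The frame labels (#86, #111, and so on) are just that post's own numbering; these helper names are mine:

```python
import numpy as np

def subtract_masks(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Remove mask `b` from mask `a`, clamped so values stay in 0..1."""
    return np.clip(a - b, 0.0, 1.0)

def blend_masks(a: np.ndarray, b: np.ndarray,
                factor: float = 0.5) -> np.ndarray:
    """Simple linear blend of two masks."""
    return a * (1.0 - factor) + b * factor

horse = np.array([[1.0, 1.0], [0.0, 0.0]])
area  = np.array([[1.0, 0.0], [0.0, 0.0]])
sub = subtract_masks(horse, area)
print(sub.tolist())                  # [[0.0, 1.0], [0.0, 0.0]]
print(blend_masks(sub, area)[0, 0])  # 0.5
```

The clamp in the subtract matters: without it, subtracting a larger value would go negative and corrupt any downstream node expecting a 0..1 mask.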
Contribute to CavinHuang/comfyui-nodes-docs development by creating an account on GitHub.

I did this to mask faces out of a lineart once, but didn't do it in a video. Now ComfyUI doesn't work. I'm new to the channel and to ComfyUI, and I come looking for a solution to an upscaling problem.

Ideas/requests: "Any idea to make fast inpaint work on images from others?" #3, opened Jul 19, 2023.

I try to use Mask & Mask, Mask - Mask, and Mask + Mask, but the result is the same. Any ideas? I can do this via a Regional Conditioning By Color Mask node, but I have to manually add the hex value.

bugfix: fix cut images on swap due to wrongly preserved width and height.

A very simple ComfyUI node to help you create image crops and masks using YoloV8. Even if you set the size of the masking circle to max and go over it closely enough that it appears to be fully masked, if you actually save it to the node and then take a look, you'll see that the bottom is still not masked. You could also cut, make the image bigger, do your edits, then paste it back in with a larger mask.

It is used to set most of the grabcut input mask's flags, excluding GC_BGD (sure background), which are set by the "frame". LIP is the largest single-person human-parsing dataset, with 50,000+ images.

For anyone who continues to have this issue, it seems to be something to do with the custom node manager (at least in my case). I can convert these SEGS into two masks, one for each person.

inputs: value. A LoRA mask is essential, given … The thing you are talking about is the "Inpaint area" feature of A1111, which cuts the masked rectangle, passes it through the sampler, and then pastes it back.
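The grabcut flags mentioned above are OpenCV's documented GrabCut constants (GC_BGD = 0, GC_FGD = 1, GC_PR_BGD = 2, GC_PR_FGD = 3). Collapsing a GrabCut result into a plain binary mask, in image format, is just a membership test; a NumPy sketch:

```python
import numpy as np

# OpenCV GrabCut flag values: 0 = sure background (GC_BGD),
# 1 = sure foreground (GC_FGD), 2 = probable background (GC_PR_BGD),
# 3 = probable foreground (GC_PR_FGD).
GC_BGD, GC_FGD, GC_PR_BGD, GC_PR_FGD = 0, 1, 2, 3

def grabcut_result_to_mask(flags: np.ndarray) -> np.ndarray:
    """Sure or probable foreground becomes white (255), the rest black."""
    fg = np.isin(flags, (GC_FGD, GC_PR_FGD))
    return np.where(fg, 255, 0).astype(np.uint8)

flags = np.array([[0, 1],
                  [2, 3]], dtype=np.uint8)
print(grabcut_result_to_mask(flags).tolist())  # [[0, 255], [0, 255]]
```

The "frame" described in the text seeds the border pixels as GC_BGD before `cv2.grabCut` runs, which is why its width matters for the result.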
As for the generation time, you can check the terminal, and the same information should be written in the comfyui.log located in the ComfyUI_windows_portable folder.

…1.5 model; I have no clue what is going on. I don't want to use SDXL, cause it's not great with details like some trained 1.5 models.

This seems to work with a dynamic workflow, with the mask based on face-gender detection.