
ComfyUI upscaling - Reddit tips and workflows

Upscale with a 4x model and then downscale by 0.5 to get a 1024x1024 final image (512 × 4 × 0.5 = 1024). And at the end of it, I have a latent upscale step that I can't for the life of me figure out. The upscale quality is mediocre, to say the least. Thanks.

Here is a workflow that I use currently with Ultimate SD Upscale.

Upscale and then fix will work better here. There's "latent upscale by", but I don't want to upscale the latent image. Use a fractional value (0.5 if you want to divide by 2) after upscaling by a model. This is done after the refined image is upscaled and encoded into a latent.

The workflow is kept very simple for this test: Load image → Upscale → Save image.

The final node is where ComfyUI takes those images and turns them into a video.

I then use a tiled ControlNet and Ultimate Upscale to upscale by 3-4x, resulting in up to 6K×6K images that are quite crisp. Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.

It works more like DLSS, tile by tile, and faster than the iterative one.

It uses CN tile with Ultimate SD Upscale. Where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15, and this is with tiling, so my VRAM usage was moderate in all cases. Both of these are of similar speed.

Edit: Also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax). I had the same problem, and those steps tank performance as well.

I was working on exploring and putting together my guide on running Flux on RunPod ($0.34 per hour).

A lot of people are just discovering this technology and want to show off what they created. And above all, BE NICE. If this can be solved, I think it would help lots of other people who might be running into this issue without knowing it.

So my question is: is there a way to upscale an already existing image in Comfy, or do I need to do that in A1111? I try to use ComfyUI to upscale (using SDXL 1.0).
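The "4x model, then fractional downscale" advice above is just arithmetic. A minimal sketch of the math (plain Python, no ComfyUI; the function name is my own, not a node):

```python
def resize_factor(model_scale: float, target_scale: float) -> float:
    """Fractional "upscale by" value to apply after a fixed-scale upscale
    model so the net enlargement equals target_scale."""
    return target_scale / model_scale

# A 4x model (e.g. 4x Siax) when you only want a 2x result:
f = resize_factor(4, 2)      # 0.5
print(int(512 * 4 * f))      # 512 -> 2048 -> 1024, matching 512*4*0.5=1024
```

This is why a 4x upscaler is often recommended even for a 2x upscale: the model runs at its native scale and the bicubic "upscale by" node does the cheap fractional resize afterwards.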
I only have 4GB VRAM, so I haven't gotten SUPIR working on my local system.

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art.

That's because latent upscale turns the base image into noise (blur).

This is just a simple node built off what's given and some of the newer nodes that have come out.

Really chaotic images, or images that actually benefit from added details from the prompt, can look exceptionally good at ~8.

If you use Iterative Upscale, it might be better to approach it by adding noise using techniques like noise injection or an unsampler hook. Try ~0.6 denoise and either CNet strength 0.5 (euler, sgm_uniform) or CNet strength 0.9, end_percent 0.9 (euler).

After 6 days of hard work (2 days building, 1 day testing, 2 days recording, 1 day editing, and very little sleep), I finally managed to upload this! Full tutorial in the YouTube description (it's entirely free, of course), and the video goes into 1h of detailed instructions on how to build it yourself, because I prefer for someone to learn how to fish than to give them a fish 😂

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and I pass into the node whatever image I like. It will replicate the image's workflow and seed.

The reason I haven't raised issues on any of the repos is that I am not sure where the problem actually exists: ComfyUI, Ultimate Upscale, or some other custom node entirely.

Look at this workflow. Also with good results.

Still working on the whole thing, but I got the idea down. Does anyone have any suggestions: would it be better to do an iterative upscale, or how about my choice of upscale model? I have almost 20 different upscale models, and I really have no idea which might be best.
You just have to use the node "upscale by" with the bicubic method and a fractional value (0.5 if you want to divide by 2).

But it's weird. On my 12GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM for VRAM at some point near the end of generation, even with --medvram set.

Two options here. For some context, I am trying to upscale images of an anime village, something like Ghibli style. After 2 days of testing, I found Ultimate SD Upscale to be detrimental here. Instead, I use Tiled KSampler with 0.5 noise.

However, I switched to the Ultimate SD Upscale custom node. Many people also have a hard time learning from written documents and need visual learning.

I generate an image that I like, then mute the first KSampler, unmute the Ultimate SD Upscaler, and upscale from that. I have a custom image resizer that ensures the input image matches the output dimensions.

This is the image I created using ComfyUI (SDXL 1.0 + Refiner), utilizing DreamShaperXL 1.0 Alpha + SDXL Refiner 1.0.

Try VAEDecode immediately after a latent upscale to see what I mean.

It's high quality, and it's easy to control the amount of detail added using control scale and restore cfg, but it slows down at higher scales faster than Ultimate SD Upscale does.

I did once get some noise I didn't like, but I rebooted and all was good on the second try. Switch the toggle to upscale, make sure to enter the right CFG, make sure randomize is off, and press queue.

As my test bed, I'll be downloading the thumbnail from, say, my Facebook profile picture, which is fairly small.

Depending on the noise and strength, it ends up treating each square as an individual image.
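That "each square as an individual image" behavior comes from how tiled samplers cover the canvas. A rough sketch of the tiling arithmetic (the function and the 512/64 defaults are illustrative, not the exact internals of any node):

```python
import math

# Rough sketch of how a tiled upscaler (e.g. Ultimate SD Upscale) covers an
# image: fixed-size tiles with some overlap, each tile denoised on its own.
# At high denoise, every tile can drift into its own "mini scene".
def tile_grid(width: int, height: int, tile: int = 512, overlap: int = 64):
    step = tile - overlap                      # advance per tile
    cols = math.ceil((width - overlap) / step)
    rows = math.ceil((height - overlap) / step)
    return rows, cols, rows * cols

print(tile_grid(2048, 2048))  # (5, 5, 25): 25 separate 512px denoising passes
```

Keeping denoise low (or adding a tile ControlNet) is what keeps those 25 passes agreeing with each other.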
I too use SUPIR, but just to sharpen my images on the first pass. You can also run a regular AI upscale and then a downscale (4x × 0.5); you don't need that many steps. From there you can use a 4x upscale model and run the sampler again at low denoise if you want higher resolution. Maybe it doesn't seem intuitive, but it's better to go with a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale.

Here are details on the workflow I created: this is an img2img method where I use Blip Model Loader from WAS to set the positive caption.

Now I have made a workflow that has an upscaler in it and it works fine; the only thing is that it upscales everything, and that is not worth the wait with most outputs.

There are also "face detailer" workflows for faces specifically.

Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI For Windows, RunPod & Kaggle" tutorial and web app.

That's because of the model upscale. No attempts to fix JPG artifacts, etc.

Hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out.

The standard ESRGAN 4x is a good jack of all trades that doesn't come with a crazy performance cost, and if you're low on VRAM, I would expect you're using some sort of tiled upscale solution like Ultimate SD Upscale, yeah?

u/wolowhatever: we set 5 as the default, but it really depends on the image and image style tbh. I tend to find that most images work well around Freedom of 3.

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI. I'm new to ComfyUI and I'm aware that people create amazing stuff with just prompts and detailers.
Usually I use two of my workflows: "Latent upscale" and then denoising at 0.5, or "Upscaling with model" and then denoising at 0.5.

Aug 5, 2024 · Flux has been out for under a week, and we're already seeing some great innovation in the open source community.

Grab the image from your file folder and drag it onto the ComfyUI window.

If it's a close-up, then fix the face first.

In A1111, I employed a resolution of 1280x1920 (with HiRes fix), generating 10-20 images per prompt.

I discovered this workflow by @plasm0 that runs locally and supports upscaling as well.

PS: If someone has access to Magnific AI, please can you upscale and post the result for 256x384 (5 jpg quality) and 256x384 (0 jpg quality)?

My source is small (206x206), so I'm upscaling in Photopea to 512x512 just to give me a base image that matches the 1.5 models (seems pointless to go larger).

So I made an upscale test workflow that uses the exact same latent input and destination size. You either upscale in pixel space first and then do a low-denoise 2nd pass, or you upscale in latent space and do a high-denoise 2nd pass.

It upscales the second image up to 4096x4096 (4xUltraSharp) by default for simplicity, but that can be changed to whatever. You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

There is a face detailer node.

But I probably wouldn't upscale by 4x at all if fidelity is important. Thanks for all your comments.

They also want the details on how and why to do something, besides just a guide to load this JSON and use it.

The aspect ratio of 16:9 is the same for the empty latent and anywhere else that image sizes are used.
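On keeping a 16:9 aspect ratio consistent from the empty latent onward: here is a small hypothetical helper (my own, not a ComfyUI node) that picks SD-friendly dimensions for a given ratio. The snapping to multiples of 64 is a conservative assumption; 1344x768 is a commonly used SDXL 16:9 size.

```python
import math

def sdxl_size(aspect_w: int, aspect_h: int,
              target_px: int = 1024 * 1024, multiple: int = 64):
    """Pick width/height near target_px with the given aspect ratio,
    rounded to a multiple so the latent dimensions stay clean."""
    h = math.sqrt(target_px * aspect_h / aspect_w)
    w = h * aspect_w / aspect_h
    snap = lambda v: max(multiple, round(v / multiple) * multiple)
    return snap(w), snap(h)

print(sdxl_size(16, 9))  # (1344, 768), a common SDXL 16:9 resolution
print(sdxl_size(1, 1))   # (1024, 1024)
```

Using the same helper for the empty latent and for any later resize keeps the ratio from drifting between stages.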
Also, if this is new and exciting to you, feel free to post. Thanks!

Hi, does anyone know if there's an Upscale Model Blend Node, like with A1111? Being able to get a mix of models in A1111 is great, where two models…

Latent upscale is different from pixel upscale. One does an image upscale and the other a latent upscale.

Upscale to 2x and 4x in multi-steps, both with and without a sampler (all images are saved). Multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but more can easily be added). Details and bad-hands LoRAs loaded. I use it with DreamShaperXL mostly, and it works like a charm.

So instead of one girl in an image you get 10 tiny girls stitched into one giant upscaled image.

The upscale not being latent creating minor distortion effects and/or artifacts makes so much sense! And latent upscaling takes longer for sure; no wonder my workflow was so fast.

For example, if you start with a 512x512 empty latent image, then apply a 4x model and apply "upscale by" 0.5, you get a 1024x1024 final image. This will allow detail to be built in during the upscale.

Upscale x1.5 ~ x2: no need for a model, it can be a cheap latent upscale. Sample again at denoise 0.5; you don't need that many steps.

Now, transitioning to Comfy, my workflow continues at the 1280x1920 resolution.

The downside is that it takes a very long time. If it's a distant face, then you probably don't have enough pixel area to do the fix justice.

"Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space. Adding in the Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image.

Also, Ultimate SD Upscale is a node as well; if you don't have enough VRAM, it tiles the image so that you don't run out of memory. If you want more details, latent upscale is better, and of course noise injection will let more details in (you need noise in order to diffuse into details).

I haven't been able to replicate this in Comfy. I wanted to know what difference they make, and they do!
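The latent-vs-pixel distinction above comes down to which tensor gets resized. A small sketch of the sizes involved, assuming the SD-family convention of a VAE that downsamples by 8 with 4 latent channels (the helper is illustrative, not a ComfyUI node):

```python
# SD-style latents are 8x smaller spatially than the image, e.g. a 512x512
# image becomes a 64x64x4 latent. A "latent upscale" interpolates this small
# grid and relies on the next sampling pass (at higher denoise) to re-add
# detail, while a "pixel upscale" resizes the decoded image directly.
def latent_shape(width: int, height: int, channels: int = 4, factor: int = 8):
    """Spatial size of the latent for a given image size (SD convention)."""
    return (width // factor, height // factor, channels)

print(latent_shape(512, 512))    # (64, 64, 4)
print(latent_shape(1024, 1024))  # (128, 128, 4): what a 2x latent upscale targets
```

Interpolating a 64x64 grid up to 128x128 invents no real detail, which is why latent upscales need a high-denoise second pass, and why a low-denoise pass works fine after a pixel-space model upscale.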
Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.

And when purely upscaling, the best upscaler is called LDSR.

I solved that by using only 1 step and adding multiple iterative upscale nodes.

And here's my first question: is one better than the other as far as final upscaled image quality? I tried all the possible upscalers in ComfyUI (LDSR, Latent Upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work).

With it, I either can't get rid of visible seams, or the image is too constrained by low denoise and so lacks detail.

The big current advantage of ComfyUI over Automatic1111 is that it appears to handle VRAM much better.

Latent quality is better, but the final image deviates significantly from the initial generation.

Tried it; it is pretty low quality, and you cannot really diverge from CFG 1 (so, no negative prompt), otherwise the picture gets baked instantly. You can't go higher than 512 up to 768 resolution either (which is quite a bit lower than 1024 + upscale), and when you ask for slightly less rough output (4 steps), as in the paper's comparison, it gets slower.

Subsequently, I'd cherry-pick the best one and employ Ultimate SD Upscale for a 2x upscale.

Jan 13, 2024 · So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (the SD 4x upscale model).

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.
Also, both have a denoise value that drastically changes the result. It's why you need at least 0.2 denoise when resampling faces.

I upscaled it to a resolution of 10240x6144 px for us to examine the results. Sure, it comes up with new details, which is fine, even beneficial for the 2nd pass in a t2i process, since the miniature 1st pass often has some issues due to imperfections.

- Image upscale is less detailed, but more faithful to the image you upscale.

I want to upscale my image with a model, and then select the final size of it.

Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear. Your prompt (a.k.a. positive image conditioning) is no longer a simple text description of what should be contained in the total area of the image; it is now a specific description of what belongs in the area defined by the coordinates starting from x:0px y:320px, to x:768px y… Thanks.

ssitu/ComfyUI_UltimateSDUpscale: ComfyUI nodes for the Ultimate Stable Diffusion Upscale script by Coyote-A.

The issue is likely caused by a quirk in the way MultiAreaConditioning works: its sizes are defined in pixels.

The only way I can think of is just Upscale Image (using Model) with 4xUltraSharp, get my image to 4096, and then downscale with nearest-exact back to 1500.

Custom nodes are Impact Pack for wildcards, rgthree because it's the shit, and Ultimate SD Upscale.

Latent upscale it, or use a model upscale, then VAE encode it again and run it through the second sampler.

It depends on how large the face in your original composition is.

Belittling their efforts will get you banned.

I like how IPAdapter with masking allows me to not have to write detailed prompts, and yet it still maintains the fidelity of the subject and background, or any other masked elements for that matter.
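Since MultiAreaConditioning regions are specified in pixels while SD latents are 8x smaller, a region only maps cleanly onto the latent grid when its pixel coordinates are multiples of 8. A hypothetical helper (my own, not part of any node pack) to snap a region before using it:

```python
LATENT_SCALE = 8  # SD VAEs downsample by 8 in each spatial dimension

def snap_region(x: int, y: int, w: int, h: int):
    """Round a pixel-space region down to the nearest latent-aligned box,
    so conditioning areas line up with whole latent cells."""
    snap = lambda v: (v // LATENT_SCALE) * LATENT_SCALE
    return snap(x), snap(y), snap(w), snap(h)

print(snap_region(0, 320, 768, 322))  # (0, 320, 768, 320): 322 isn't 8-aligned
```

The x:0px y:320px to x:768px coordinates quoted above are already multiples of 8, which is presumably why that particular setup behaves.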
Jan 8, 2024 · Learn how to upscale images using ComfyUI and the 4x-UltraSharp model for crystal-clear enhancements. A step-by-step guide to mastering image quality.

- Latent upscale looks much more detailed, but gets rid of the detail of the original image.

It's nothing spectacular, but it gives good, consistent results. These comparisons are done using ComfyUI with default node settings and fixed seeds.

The final steps are as follows: apply the inpaint mask, run it through the KSampler, take the latent output and send it to the latent upscaler (doing a 1.5 upscale), then from the upscaler to a KSampler running 20-30 steps at 0.5 denoise.
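The final steps above can be sketched as a small resolution trace (plain Python; the starting size and the 25-step choice are my own illustrative assumptions within the stated 20-30 range):

```python
# Trace the image size through: inpaint-masked sample -> 1.5x latent upscale
# -> resample at 0.5 denoise. Names and numbers are stand-ins, not node titles.
def trace(start=(832, 1216)):
    w, h = start                       # size of the sampled (inpainted) image
    w, h = int(w * 1.5), int(h * 1.5)  # latent upscale by 1.5
    steps, denoise = 25, 0.5           # resample: 20-30 steps at 0.5 denoise
    return (w, h), steps, denoise

print(trace())  # ((1248, 1824), 25, 0.5)
```

The 0.5 denoise on the final pass matches the earlier point that a latent upscale needs a fairly high denoise for the sampler to rebuild detail.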