SDXL and --medvram

 

--medvram allows the model to run on cards with less VRAM. stable-diffusion-webui is an old favorite, but development has almost halted, SDXL support is only partial, and it is not recommended. I learned that most of the things I needed I already had since I had automatic1111, and it worked fine.

I cannot even load the base SDXL model in Automatic1111 without it crashing out saying it couldn't allocate the requested memory. With a 3090 or 4090 you're fine, but that's also where you'd add --medvram if you had a midrange card, or --lowvram if you wanted/needed it. 1600x1600 might just be beyond a 3060's abilities.

--xformers-flash-attention: enables xformers with Flash Attention to improve reproducibility (SD2.x models only).

Got it updated and the weights loaded successfully. So for the Nvidia 16xx series, paste vedroboev's commands into that file and it should work (if there is not enough memory, try How-To Geek's commands).

It's amazing - I can get 1024x1024 SDXL images in ~40 seconds at 40 iterations Euler A with base/refiner with the medvram-sdxl flag enabled now.

set COMMANDLINE_ARGS= --xformers --no-half-vae --precision full --no-half --always-batch-cond-uncond --medvram
call webui.bat

PS: medvram is giving me errors and just won't go higher than 1280x1280, so I don't use it. The ControlNet extension also adds some (hidden) command line options, or they can be set via the ControlNet settings.

Normally the SDXL models work fine using the medvram option, taking around 2 it/s, but when I use a TensorRT profile for SDXL, it seems like the medvram option is no longer applied: the iterations start taking several minutes, as if medvram were disabled.

img2img batch: RAM savings, VRAM savings, .tif/.tiff support in img2img batch (#12120, #12514, #12515); postprocessing/extras: RAM savings.

It's not a medvram problem; I also have a 3060 12GB and the GPU does not even require medvram, but xformers is advisable. It's a small amount slower than ComfyUI, especially since it doesn't switch to the refiner model anywhere near as quickly, but it's been working just fine.

"A Tensor with all NaNs was produced in the VAE." @aifartist The problem was the "--medvram-sdxl" flag in webui-user.bat. At first, I could fire out XL images easily.

My GPU is an A4000 and I have the --medvram flag enabled. It's probably an ASUS thing. Comfy is better at automating workflow, but not at anything else. Specs: 3070 8GB. Webui params: --xformers --medvram --no-half-vae.

If I use --medvram or higher (no VRAM opt command) I get blue screens and PC restarts. I upgraded the AMD driver to the latest (23.7.2), but it did not help.

How to install and use Stable Diffusion XL (commonly known as SDXL): as some of you may already know, Stable Diffusion XL, the latest and most capable version of Stable Diffusion, was announced last month and attracted a lot of attention.
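To make the launch flags above concrete, here is a minimal webui-user.bat sketch for a midrange NVIDIA card; the exact flag set is illustrative rather than canonical - drop --medvram on 16 GB+ cards and switch to --lowvram below roughly 6 GB.

    :: webui-user.bat - a minimal sketch, assuming a stock AUTOMATIC1111 install
    @echo off
    set PYTHON=
    set GIT=
    set VENV_DIR=
    :: --xformers: memory-efficient attention; --medvram: offload model parts to system RAM;
    :: --no-half-vae: avoids the all-NaN VAE error mentioned above
    set COMMANDLINE_ARGS=--xformers --medvram --no-half-vae
    call webui.bat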
Without --medvram (but with xformers) my system was using ~10GB of VRAM with SDXL.

Note that a --medvram-sdxl command line argument has also been added, which reduces VRAM consumption only while an SDXL model is in use. If you don't want medvram normally but do want to cut VRAM usage for SDXL, try setting it (AUTOMATIC1111 ver 1.6.0).

Workflow Duplication Issue Resolved: the team has resolved an issue where workflow items were being run twice for PRs from the repo.

My laptop with an RTX 3050 Laptop 4GB VRAM was not able to generate in less than 3 minutes, so I spent some time getting a good configuration in ComfyUI; now I can generate in 55s (batched images) to 70s (new prompt detected), getting great images after the refiner kicks in.

SDXL and Automatic1111 hate each other. Note that the Dev branch is not intended for production work and may break other things that you are currently using; please use the dev branch if you would like to use it today.

Finally, AUTOMATIC1111 has fixed the high VRAM issue in pre-release version 1.6.0-RC: it's taking only 7.5GB of VRAM and swapping the refiner too; use the --medvram-sdxl flag when starting.

Why is everyone saying automatic1111 is really slow with SDXL? I have it and it even runs 1-2 seconds faster than my custom 1.5 setup: SDXL 1.0 base without refiner at 1152x768, 20 steps, DPM++ 2M Karras (this is almost as fast as 1.5).

add --medvram-sdxl flag that only enables --medvram for SDXL models; prompt editing timeline has separate range for first pass and hires-fix pass (seed breaking change) (#12457).

I tried some of the arguments from the Automatic1111 optimization guide, but I noticed that arguments like --precision full --no-half or --precision full --no-half --medvram actually make the speed much slower. You definitely need to add at least --medvram to the command line args, perhaps even --lowvram if the problem persists.

As I said, the vast majority of people do not buy xx90 series cards, or top-end cards in general, for games. It feels like SDXL uses your normal RAM instead of your VRAM. Even with --medvram, I sometimes overrun the VRAM on 512x512 images.

SDXL Support for Inpainting and Outpainting on the Unified Canvas.

Medvram has almost certainly nothing to do with it.

set COMMANDLINE_ARGS= --medvram --autolaunch --no-half-vae PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.

For a few days life was good in my AI art world... and nothing was good ever again. It uses around 7GB of VRAM and generates an image in 16 seconds with SDE Karras at 30 steps. Sigh, I thought this thread was about SDXL - forget about 1.5. But it works. The suggested --medvram: I removed it when I upgraded from an RTX 2060 6GB to an RTX 4080 12GB (both laptop/mobile). You need to add --medvram or even --lowvram arguments to the webui-user.bat file.

To enable higher-quality previews with TAESD, download the taesd_decoder.pth (for SD1.x) and taesdxl_decoder.pth (for SDXL) models.

You must be using CPU mode; on my RTX 3090, SDXL custom models take just over 8GB. Long story short, I had to add --disable-model… Effects not closely studied.

set COMMANDLINE_ARGS= --medvram --upcast-sampling --no-half --precision full

SDXL on Ryzen 4700U (Vega 7 iGPU) with 64GB of DRAM blue-screens [Bug]: #215.

This time, let's look at how to speed up Stable Diffusion using the "xformers" command line argument.
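The garbage_collection_threshold value in the snippet above is cut off in the source; here is a sketch of how the pair of lines usually looks in webui-user.bat, with the threshold (0.6) and the extra max_split_size_mb entry being my assumptions rather than the original values.

    :: webui-user.bat sketch - PyTorch allocator tuning alongside --medvram
    :: (0.6 and 128 are placeholder values, not the ones from the quoted post)
    set COMMANDLINE_ARGS=--medvram --autolaunch --no-half-vae
    set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128
    call webui.bat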
I would think a 3080 10GB would be significantly faster, even with --medvram. My full args for A1111 SDXL are --xformers --autolaunch --medvram --no-half. With --api --no-half-vae --xformers: batch size 1, averaging about 12 seconds. These allow me to actually use 4x-UltraSharp to do 4x upscaling with Highres fix. Generated 1024x1024, Euler A, 20 steps.

add --medvram-sdxl flag that only enables --medvram for SDXL models; prompt editing timeline has separate range for first pass and hires-fix pass (seed breaking change). Minor: img2img batch: RAM savings, VRAM savings.

I'm generating pics at 1024x1024. You can edit webui-user.bat. With sdxl_madebyollin_vae.

--precision {full,autocast}: evaluate at this precision.

Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion, or using the --no-half commandline argument, to fix this. So I researched and found another post that suggested downgrading the Nvidia drivers to 531. --medvram or --lowvram and unloading the models (with the new option) don't solve the problem. The SDXL works without it.

Smaller values than 32 will not work for SDXL training.

On my PC I was able to output a 1024x1024 image in 52 seconds. I think the problem of slowness may be caused by not enough RAM (not VRAM). I have my VAE selection in the settings set to…

SDXL initial generation at 1024x1024 is fine on 8GB of VRAM, and it's even okay for 6GB of VRAM (using only the base without the refiner). If you're unfamiliar with Stable Diffusion, here's a brief overview…

Hit ENTER and you should see it quickly update your files. This will pull all the latest changes and update your local installation.

With medvram: from 640x640 up to 1280x1280. Without medvram it can only handle 640x640, which is half. I found that on the old version a full system reboot sometimes helped stabilize generation. Sped up SDXL generation from 4 minutes to 25 seconds!

Make the following changes: in the Stable Diffusion checkpoint dropdown, select the refiner sd_xl_refiner_1.0. The SDXL works without it. On a 3070 Ti with 8GB I was generating SD1.5-based models at 512x512 and upscaling the good ones.

--always-batch-cond-uncond: disables the optimization above.

Launching Web UI with arguments: --port 7862 --medvram --xformers --no-half --no-half-vae. ControlNet v1.1…

I have tried rolling back the video card drivers to multiple different versions. You should definitely try Draw Things if you are on Mac. Before jumping on automatic1111 as the fault, enable the xformers optimization and/or the medvram/lowvram launch option and come back to say the same thing. It runs faster on ComfyUI but works on Automatic1111; medvram-sdxl and xformers didn't help me.

Only makes sense together with --medvram or --lowvram. Open webui-user.bat in Notepad and do a Ctrl-F for "commandline_args". Side-by-side comparison with the original.
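The changelog line above about --medvram-sdxl translates into a one-line change in webui-user.bat; a sketch follows (it requires webui 1.6.0 or newer, and the other flags are illustrative), which keeps SD1.5 checkpoints at full speed while offloading only for SDXL.

    :: webui-user.bat sketch: medvram behaviour only for SDXL checkpoints,
    :: SD1.5 models keep their normal speed (AUTOMATIC1111 1.6.0+)
    set COMMANDLINE_ARGS=--xformers --medvram-sdxl --no-half-vae
    call webui.bat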
On the plus side, it's fairly easy to get Linux up and running, and the performance difference between using ROCm and ONNX is night and day. For example, you might be fine without --medvram for 512x768 but need the --medvram switch to use ControlNet on 768x768 outputs. Many of the new models are related to SDXL, with several models for Stable Diffusion 1.5 as well.

There is also a feature request for a "--no-half-vae-xl" flag. I have an RTX 3070 8GB and A1111 SDXL works flawlessly with --medvram. Whether Comfy is better depends on how many steps in your workflow you want to automate.

The release is intended to gather feedback from developers so we can build a robust base to support the extension ecosystem in the long run.

Command line arguments (performance category). So I decided to use SD1.5 instead - 8GB is sadly a low-end card when it comes to SDXL.

Introducing our latest YouTube video, where we unveil the official SDXL support for Automatic1111. Put the VAE in stable-diffusion-webui/models/VAE. Just downloaded the SDXL 1.0 base, VAE, and refiner models.

The --network_train_unet_only option is highly recommended for SDXL LoRA training.

Before SDXL came out I was generating 512x512 images on SD1.5 in about 11 seconds each. You can go here and look through what each command line option does. Cannot be used with --lowvram / sequential CPU offloading.

My computer black screens until I hard reset it.

The problem is when I tried to do "hires fix" (not just upscaling, but sampling it again, denoising and so on, using the K-Sampler) to a higher resolution like FHD - that FHD target resolution is achievable on SD 1.5.

Using --lowvram, SDXL can run with only 4GB of VRAM - anyone? Slow progress but still acceptable, estimated 80 seconds to complete. However, generation time is a tiny bit slower. I will take this into consideration; sometimes I have too many tabs open and possibly a video running in the background. It's a much bigger model.

The SDXL 0.9 model for the Automatic1111 WebUI: my card is a GeForce GTX 1070 8GB and I use A1111.

--opt-channelslast: changes torch memory type for Stable Diffusion to channels last. Only makes sense together with --medvram or --lowvram.

I was running into issues switching between models (I had the checkpoint cache setting at 8 from using SD1.5); switching it to 0 fixed that and dropped RAM consumption from 30GB down to around 2GB. Yes, less than a GB of VRAM usage. With this on, if one of the images fails, the rest of the pictures are…

docker compose --profile download up --build

To start running SDXL on a 6GB VRAM system using ComfyUI, follow these steps (how to install and use ComfyUI for Stable Diffusion). You don't need to turn on the switch. The place is in the webui-user.bat file.
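The docker compose line above comes from a dockerised webui setup; a sketch of the usual two-step flow, assuming the stable-diffusion-webui-docker project's profile names ("download" to fetch the models, then a UI profile such as "auto" for AUTOMATIC1111) - check the profiles defined in your own compose file.

    :: Sketch: first pull the models, then start a UI profile
    :: (profile names are assumptions based on the stable-diffusion-webui-docker layout)
    docker compose --profile download up --build
    docker compose --profile auto up --build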
change default behavior for batching cond/uncond - now it's on by default, and is disabled by a UI setting (Optimizations -> Batch cond/uncond); if you are on lowvram/medvram and are getting OOM exceptions, you will need to enable it. Show current position in queue and make it so that requests are processed in the order of arrival.

python setup.py build, then python setup.py bdist_wheel.

I've been using this colab: nocrypt_colab_remastered. PLANET OF THE APES - Stable Diffusion Temporal Consistency.

RuntimeError: mat1 and mat2 shapes cannot be multiplied (231x1024 and 768x320). It consumes about 5GB of VRAM most of the time, which is perfect, but sometimes it spikes. By the way, it occasionally used all 32GB of RAM with several gigs of swap. The post just asked for the speed difference between having it on vs off.

Now I'm getting 1-minute renders, even faster on ComfyUI. As long as you aren't running SDXL in auto1111 (which is the worst way possible to run it), 8GB is more than enough to run SDXL with a few LoRAs. It takes 7 minutes for me to get a 1024x1024 SDXL image with A1111, and 3.5 minutes with Draw Things. Yikes! Consumed 29/32 GB of RAM.

There is also another argument that can help reduce CUDA memory errors; I used it when I had 8GB of VRAM. You'll find these launch arguments on the GitHub page of A1111. I applied these changes, but it is still the same problem.

Since SDXL came out I think I spent more time testing and tweaking my workflow than actually generating images. It takes around 18-20 seconds for me using xformers and A1111 with a 3070 8GB and 16GB of RAM.

When you're done, save, then double-click webui-user.bat to open it and let it run - it should run for quite a while.

I run it on a 2060, relatively easily (with --medvram). Don't forget to change how many images are stored in memory to 1. But yeah, it's not great compared to Nvidia.

The default installation includes a fast latent preview method that's low-resolution. I have the same issue - got an Arc A770 too, so I guess the card is the problem.

SDXL 1.0, A1111 vs ComfyUI on 6GB VRAM - thoughts? At the end it says "CUDA out of memory", which I don't know if… But it has the negative side effect of making 1.5 images take 40 seconds instead of 4 seconds. I had to set --no-half-vae to eliminate errors and --medvram to get any upscalers other than latent to work; I have not tested them all, only LDSR and R-ESRGAN 4x+. If I do img2img using the dimensions 1536x2432 (what I've previously been able to do) I get "Tried to allocate 42… GiB".

Video Summary: In this video, we'll dive into the world of automatic1111 and the official SDXL support. It also has a memory leak, but with --medvram I can go on and on.

SD.Next with an SDXL model on Windows: if still not fixed, use command line arguments --precision full --no-half, at a significant increase in VRAM usage, which may require --medvram. The colab always crashes. This video introduces how A1111 can be updated to use SDXL 1.0.
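For the low-resolution default preview mentioned above, here is a sketch of switching to TAESD previews, assuming the ComfyUI folder layout and the madebyollin/taesd repository as the source of the decoder weights - both assumptions, so verify paths against your own install.

    :: Sketch: fetch the TAESD decoders into ComfyUI's models\vae_approx folder,
    :: then launch ComfyUI with TAESD previews enabled
    curl -L -o models\vae_approx\taesd_decoder.pth https://github.com/madebyollin/taesd/raw/main/taesd_decoder.pth
    curl -L -o models\vae_approx\taesdxl_decoder.pth https://github.com/madebyollin/taesd/raw/main/taesdxl_decoder.pth
    python main.py --preview-method taesd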
My hardware is an Asus ROG Zephyrus G15 GA503RM with 40GB of DDR5-4800 RAM and two M.2 drives. First impression / test: making images with SDXL with the same settings (size/steps/sampler, no highres fix).

For a while, the download will run, so wait until it is complete.

With SDXL 1.0 on automatic1111, about 80% of the time I get this error: RuntimeError: The size of tensor a (1024) must match the size of tensor b (2048) at non-singleton dimension 1. Things seem easier for me with automatic1111.

Using 1.6 and --medvram-sdxl. Image size: 832x1216, upscale by 2. Samplers: DPM++ 2M, DPM++ 2M SDE Heun Exponential (these are just my usuals, but I have tried others). Sampling steps: 25-30. Hires fix. You can make it at a smaller resolution and upscale in Extras, though.

However, when the progress is already at 100%, VRAM consumption suddenly jumps to almost 100% and only 150-200MB is left free. Both GUIs do the same thing. It seems like the actual working of the UI part then runs on CPU only.

Step 1: Install ComfyUI. Step 2: Download the Stable Diffusion XL models. A summary of how to run SDXL in ComfyUI.

webui-user.sh (Linux): setting VENV_DIR allows you to choose the directory for the virtual environment.

I can generate 1024x1024 in A1111 in under 15 seconds, and using ComfyUI it takes less than 10 seconds. Hello everyone, my PC currently has a 4060 (the 8GB one) and 16GB of RAM.

SDXL works around 1,048,576 pixels (1024x1024 or any other combination). Well dang, I guess. A1111 is easier and gives you more control of the workflow. It's definitely possible. The usage is almost the same as fine_tune.py.

I have always wanted to try SDXL, so when it was released I loaded it up and, surprise, 4-6 minutes per image at about 11 s/it. ComfyUI races through this, but I haven't gone under 1m 28s in A1111. Generate an image as you normally would with the SDXL v1.0 base model. After that SDXL stopped all problems; model load time is around 30 seconds. Disabling "Checkpoints to cache in RAM" lets the SDXL checkpoint load much faster and not use a ton of system RAM.

Expanding on my temporal consistency method for a 30-second, 2048x4096-pixel total-override animation.

The solution was described by user ArDiouscuros and, as mentioned by nguyenkm, should work by just adding the two lines in the Automatic1111 install. @weajus reported that --medvram-sdxl resolves the issue; however, this is not due to the parameter itself but to the optimized way A1111 now manages system RAM, therefore not running into issue 2) any longer. During image generation the resource monitor shows that ~7GB of VRAM is free.

Because SDXL has two text encoders, the result of the training will be unexpected. If it is the hires fix option, the second-image subject repetition is definitely caused by a too-high "Denoising strength" setting. On GTX 10xx and 16xx cards it makes generations 2 times faster.

Also, as counterintuitive as it might seem… With the 0.9 VAE it works without errors every time, it just takes too damn long.

If you have 4 GB VRAM and want to make images larger than 512x512 with --medvram, use --lowvram --opt-split-attention.

Sample prompt: 1girl, solo, looking at viewer, light smile, medium breasts, purple eyes, sunglasses, upper body, eyewear on head, white shirt, (black cape:1…).

Compared to a 1.5 model, SDXL is much slower and uses up more VRAM and RAM.
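For the Linux side mentioned above (webui-user.sh, the venv location, and the 4 GB-card advice), a sketch follows; the variable names follow the stock template as I recall it, so treat them as assumptions and compare against your own copy.

    #!/bin/bash
    # webui-user.sh sketch: custom venv location plus the 4 GB-card flag set quoted above
    venv_dir="venv"
    export COMMANDLINE_ARGS="--lowvram --opt-split-attention"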
It's certainly good enough for my production work. The sd-webui-controlnet 1.1.400 release is developed for webui versions beyond 1.6. Safetensors generation takes 9 seconds longer.

With medvram: composition is usually better with SDXL, but many finetunes are trained at higher resolutions, which reduced the advantage for me. The only things I have changed are --medvram (which shouldn't speed up generations, afaik) and installing the new refiner extension (I really don't see how that should influence render time, as I haven't even used it), because it ran fine with DreamShaper when I restarted it.

You need to use --medvram (or even --lowvram) and perhaps even --xformers arguments on 8GB. SDXL 1.0 on 8GB VRAM? Automatic1111 & ComfyUI. Okay, so there should be a file called launch.py.

Commandline arguments: Nvidia (12GB+): --xformers; Nvidia (8GB): --medvram-sdxl --xformers; Nvidia (4GB): --lowvram --xformers; AMD (4GB): --lowvram --opt-sub-quad.

For the most optimal results, choose 1024x1024 px images.

set COMMANDLINE_ARGS=--medvram

And I didn't bother with a clean install. Google Colab/Kaggle terminates the session due to running out of RAM (#11836).

Running without --medvram, I am not noticing an increase in used RAM on my system, so it could be the way the system is transferring data back and forth between system RAM and VRAM and failing to clear out the RAM as it goes. I think it fixes at least some of the issues.

So at the moment there is probably no way around --medvram if you're below 12GB. So I've played around with SDXL, and despite the good results out of the box, I just can't deal with the computation times (3060 12GB). OK sure, if it works for you then it's good; I just also mean anything pre-SDXL, like 1.5.

set COMMANDLINE_ARGS=--xformers --medvram

…with safetensors at the end, for auto-detection when using the SDXL model. I have a weird config where I have both Vladmandic and A1111 installed and use the A1111 folder for everything, creating symbolic links for…

Medvram actually slows down image generation by breaking up the necessary VRAM into smaller chunks.

I collected top tips & tricks for SDXL at this moment. The company says SDXL produces more detailed imagery and composition than its predecessor, Stable Diffusion 2. You can also try --lowvram, but the effect may be minimal.

TencentARC released their T2I-Adapters for SDXL. Use the --disable-nan-check commandline argument to disable this check. I have the same GPU, 32GB of RAM and an i9-9900K, but it takes about 2 minutes per image on SDXL with A1111.
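A sketch of the per-VRAM argument table above as concrete COMMANDLINE_ARGS lines (pick one); note that I have expanded the truncated AMD flag to --opt-sub-quad-attention, which is an assumption about what the original shorthand meant.

    :: Nvidia, 12 GB or more
    set COMMANDLINE_ARGS=--xformers
    :: Nvidia, 8 GB
    set COMMANDLINE_ARGS=--medvram-sdxl --xformers
    :: Nvidia, 4 GB
    set COMMANDLINE_ARGS=--lowvram --xformers
    :: AMD, 4 GB (xformers is CUDA-only, hence the sub-quadratic attention fallback)
    set COMMANDLINE_ARGS=--lowvram --opt-sub-quad-attention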
In A1111, none of the Windows or Linux shell/bat launch files use a --medvram or --medvram-sdxl setting out of the box. With the SDXL 0.9 base+refiner, my system would freeze, and render times would extend up to 5 minutes for a single render.

Training scripts for SDXL.

For SDXL, you can choose which part of the prompt goes to the second text encoder - just add a TE2: separator in the prompt. For hires and refiner, the second-pass prompt is used if present, otherwise the primary prompt is used. New option in Settings -> Diffusers -> SDXL pooled embeds (thanks @AI-Casanova); better hires support for SD and SDXL.

You really need to use --medvram or --lowvram just to make it load on anything lower than 10GB in A1111. You don't need lowvram or medvram. Specs and numbers: Nvidia RTX 2070 (8GiB VRAM).

Starting with AUTOMATIC1111 WebUI version 1.6.0, the handling of the Refiner has changed. This opens up new possibilities for generating diverse and high-quality images.

Either add --medvram to your webui-user file in the command line args section (this will pretty drastically slow it down but get rid of those errors), or… For 8GB of VRAM, the recommended cmd flag is "--medvram-sdxl".

SD1.5 model: batches of 4 in about 30 seconds (33% faster). The SDXL model loads in about a minute, maxing out at 30 GB of system RAM. I run on an 8GB card with 16GB of RAM and I see 800-plus seconds when doing 2k upscales with SDXL, whereas doing the same thing with 1.5… I don't know why A1111 is so slow and doesn't work - maybe something with the VAE, I don't know.

Hey, just wanted some opinions on SDXL models. Hello, I tried various LoRAs trained on SDXL 1.0 models with the base SDXL 1.0…
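A small illustration of the TE2: separator described above; the prompt itself is made up, and the split shown follows the changelog wording quoted above (text before the separator goes to the first text encoder, text after it to the second).

    a photo of a medieval castle on a cliff, dramatic lighting TE2: highly detailed, sharp focus, intricate stonework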