
rtx 5060 ti + stable diffusion webui forge = bad time #1463

@sean1138

Description

What happened?

i barely know what i'm doing when it comes to more technical things like python, and stability matrix made everything really easy... when i was using an rtx 3060.

i figured out how to "fix" this sm_120 issue:

A:\_AI-ML\stability-matrix\Data\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\torch\cuda\__init__.py:209: UserWarning: 

NVIDIA GeForce RTX 5060 Ti with CUDA capability sm_120 is not compatible with the current PyTorch installation.

The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90.

If you want to use the NVIDIA GeForce RTX 5060 Ti GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

---

pytorch version: 2.3.1+cu121

by updating pytorch to a cu129 build inside the forge venv folder. my nvidia control panel reports nvcuda64.dll version 12.9.90.
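The error below appears to come from a package mismatch rather than from the CUDA update itself: the system-info table further down shows `torch 2.9.0+cu129` but `torchvision 0.24.0` with no `+cu129` local tag, which usually means torchvision is still a CPU-only PyPI wheel and therefore ships no CUDA `nms` kernel. A minimal sketch of that tag comparison (the `cuda_tag` helper is my own name, and the version strings are copied from the log in this report):

```python
# Sketch: compare the local build tags ("+cu129") of two version strings.
# cuda_tag is a made-up helper; the versions are taken verbatim from the
# adetailer system-info table below.

def cuda_tag(version: str) -> str:
    """Return the part after '+', e.g. 'cu129'; empty string if absent."""
    _, _, local = version.partition("+")
    return local

torch_version = "2.9.0+cu129"   # CUDA build of torch
tv_version = "0.24.0"           # no '+cuXXX' tag -> likely a CPU-only wheel

print(cuda_tag(torch_version))  # cu129
print(cuda_tag(tv_version))     # (empty) -> mismatch with torch
```

If the tags disagree like this, the usual remedy is reinstalling torch and torchvision together from the same cu129 wheel index, rather than upgrading torch alone.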

after doing that, adetailer fails with:

*** Error running postprocess_image: A:\_AI-ML\stability-matrix\Data\Packages\Stable Diffusion WebUI Forge\extensions\adetailer\scripts\!adetailer.py
    Traceback (most recent call last):
      File "A:\_AI-ML\stability-matrix\Data\Packages\Stable Diffusion WebUI Forge\modules\scripts.py", line 940, in postprocess_image
        script.postprocess_image(p, pp, *script_args)
      File "A:\_AI-ML\stability-matrix\Data\Packages\Stable Diffusion WebUI Forge\extensions\adetailer\aaaaaa\traceback.py", line 173, in wrapper
        raise error from None
    NotImplementedError: 
    ┌─────────────────────────────────────────────────────────────────────────┐
    │                               System info                               │
    │ ┌─────────────┬───────────────────────────────────────────────────────┐ │
    │ │             │ Value                                                 │ │
    │ ├─────────────┼───────────────────────────────────────────────────────┤ │
    │ │    Platform │ Windows-10-10.0.26100-SP0                             │ │
    │ │      Python │ 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023,          │ │
    │ │             │ 00:38:17) [MSC v.1929 64 bit (AMD64)]                 │ │
    │ │     Version │ f2.0.1v1.10.1-previous-669-gdfdcbab6                  │ │
    │ │      Commit │ dfdcbab685e57677014f05a3309b48cc87383167              │ │
    │ │ Commandline │ ['A:\\_AI-ML\\stability-matrix\\Data\\Packages\\Stab… │ │
    │ │             │ Diffusion WebUI Forge\\launch.py',                    │ │
    │ │             │ '--pin-shared-memory', '--cuda-malloc',               │ │
    │ │             │ '--cuda-stream', '--skip-install', '--theme', 'dark', │ │
    │ │             │ '--listen', '--enable-insecure-extension-access',     │ │
    │ │             │ '--gradio-allowed-path',                              │ │
    │ │             │ 'A:\\_AI-ML\\stability-matrix\\Data\\Images']         │ │
    │ │   Libraries │ {'torch': '2.9.0+cu129', 'torchvision': '0.24.0',     │ │
    │ │             │ 'ultralytics': '8.3.91', 'mediapipe': '0.10.11'}      │ │
    │ └─────────────┴───────────────────────────────────────────────────────┘ │
    │                                 Inputs                                  │
    │ ┌─────────────────┬───────────────────────────────────────────────────┐ │
    │ │                 │ Value                                             │ │
    │ ├─────────────────┼───────────────────────────────────────────────────┤ │
    │ │          prompt │ professional modelling photograph of woman fran,  │ │
    │ │                 │ full body shot, fun,                              │ │
    │ │                 │                                                   │ │
    │ │                 │ <lora:Fran:1>                                     │ │
    │ │ negative_prompt │ overweight, ugly, fake, plastic, implants, shiny, │ │
    │ │                 │ (makeup:1.5), jewelry, anorexic, emaciated,       │ │
    │ │                 │ boring, bored,                                    │ │
    │ │                 │ (expressionless, empty_eyes,:0.5)                 │ │
    │ │          n_iter │ 1                                                 │ │
    │ │      batch_size │ 1                                                 │ │
    │ │           width │ 928                                               │ │
    │ │          height │ 1160                                              │ │
    │ │    sampler_name │ DPM++ 2M SDE                                      │ │
    │ │       enable_hr │ False                                             │ │
    │ │     hr_upscaler │ Latent                                            │ │
    │ │      checkpoint │ sd\doesntmatter.safetensors           │ │
    │ │             vae │ Automatic                                         │ │
    │ │            unet │ Automatic                                         │ │
    │ └─────────────────┴───────────────────────────────────────────────────┘ │
    │                 ADetailer                                               │
    │ ┌─────────────────────┬─────────────────┐                               │
    │ │                     │ Value           │                               │
    │ ├─────────────────────┼─────────────────┤                               │
    │ │             version │ 25.3.0          │                               │
    │ │            ad_model │ face_yolov8s.pt │                               │
    │ │           ad_prompt │                 │                               │
    │ │  ad_negative_prompt │                 │                               │
    │ │ ad_controlnet_model │ None            │                               │
    │ │              is_api │ False           │                               │
    │ └─────────────────────┴─────────────────┘                               │
    │ ┌───────────────── Traceback (most recent call last) ─────────────────┐ │
    │ │ A:\_AI-ML\stability-matrix\Data\Packages\Stable Diffusion WebUI     │ │
    │ │ Forge\extensions\adetailer\aaaaaa\traceback.py:153 in wrapper       │ │
    │ │                                                                     │ │
    │ │   152 │   │   try:                                                  │ │
    │ │ > 153 │   │   │   return func(*args, **kwargs)                      │ │
    │ │   154 │   │   except Exception as e:                                │ │
    │ │                                                                     │ │
    │ │ A:\_AI-ML\stability-matrix\Data\Packages\Stable Diffusion WebUI     │ │
    │ │ Forge\extensions\adetailer\scripts\!adetailer.py:916 in             │ │
    │ │ postprocess_image                                                   │ │
    │ │                                                                     │ │
    │ │    915 │   │   │   │   │   continue                                 │ │
    │ │ >  916 │   │   │   │   is_processed |= self._postprocess_image_inne │ │
    │ │    917                                                              │ │
    │ │                                                                     │ │
    │ │ A:\_AI-ML\stability-matrix\Data\Packages\Stable Diffusion WebUI     │ │
    │ │ Forge\extensions\adetailer\scripts\!adetailer.py:830 in             │ │
    │ │ _postprocess_image_inner                                            │ │
    │ │                                                                     │ │
    │ │    829 │   │   │   with disable_safe_unpickle():                    │ │
    │ │ >  830 │   │   │   │   pred = ultralytics_predict(                  │ │
    │ │    831 │   │   │   │   │   ad_model,                                │ │
    │ │                                                                     │ │
    │ │ A:\_AI-ML\stability-matrix\Data\Packages\Stable Diffusion WebUI     │ │
    │ │ Forge\extensions\adetailer\adetailer\ultralytics.py:29 in           │ │
    │ │ ultralytics_predict                                                 │ │
    │ │                                                                     │ │
    │ │   28 │   apply_classes(model, model_path, classes)                  │ │
    │ │ > 29 │   pred = model(image, conf=confidence, device=device)        │ │
    │ │   30                                                                │ │
    │ │                                                                     │ │
    │ │ A:\_AI-ML\stability-matrix\Data\Packages\Stable Diffusion WebUI     │ │
    │ │ Forge\venv\lib\site-packages\ultralytics\engine\model.py:182 in     │ │
    │ │ __call__                                                            │ │
    │ │                                                                     │ │
    │ │    181 │   │   """                                                  │ │
    │ │ >  182 │   │   return self.predict(source, stream, **kwargs)        │ │
    │ │    183                                                              │ │
    │ │                                                                     │ │
    │ │ A:\_AI-ML\stability-matrix\Data\Packages\Stable Diffusion WebUI     │ │
    │ │ Forge\venv\lib\site-packages\ultralytics\engine\model.py:550 in     │ │
    │ │ predict                                                             │ │
    │ │                                                                     │ │
    │ │    549 │   │   │   self.predictor.set_prompts(prompts)              │ │
    │ │ >  550 │   │   return self.predictor.predict_cli(source=source) if  │ │
    │ │    551                                                              │ │
    │ │                                                                     │ │
    │ │ A:\_AI-ML\stability-matrix\Data\Packages\Stable Diffusion WebUI     │ │
    │ │ Forge\venv\lib\site-packages\ultralytics\engine\predictor.py:214 in │ │
    │ │ __call__                                                            │ │
    │ │                                                                     │ │
    │ │   213 │   │   else:                                                 │ │
    │ │ > 214 │   │   │   return list(self.stream_inference(source, model,  │ │
    │ │   215                                                               │ │
    │ │                                                                     │ │
    │ │ A:\_AI-ML\stability-matrix\Data\Packages\Stable Diffusion WebUI     │ │
    │ │ Forge\venv\lib\site-packages\torch\utils\_contextlib.py:38 in       │ │
    │ │ generator_context                                                   │ │
    │ │                                                                     │ │
    │ │    37 │   │   │   with ctx_factory():                               │ │
    │ │ >  38 │   │   │   │   response = gen.send(None)                     │ │
    │ │    39                                                               │ │
    │ │                                                                     │ │
    │ │ A:\_AI-ML\stability-matrix\Data\Packages\Stable Diffusion WebUI     │ │
    │ │ Forge\venv\lib\site-packages\ultralytics\engine\predictor.py:330 in │ │
    │ │ stream_inference                                                    │ │
    │ │                                                                     │ │
    │ │   329 │   │   │   │   with profilers[2]:                            │ │
    │ │ > 330 │   │   │   │   │   self.results = self.postprocess(preds, im │ │
    │ │   331 │   │   │   │   self.run_callbacks("on_predict_postprocess_en │ │
    │ │                                                                     │ │
    │ │ A:\_AI-ML\stability-matrix\Data\Packages\Stable Diffusion WebUI     │ │
    │ │ Forge\venv\lib\site-packages\ultralytics\models\yolo\detect\predict │ │
    │ │ .py:35 in postprocess                                               │ │
    │ │                                                                     │ │
    │ │   34 │   │   """Post-processes predictions and returns a list of Re │ │
    │ │ > 35 │   │   preds = ops.non_max_suppression(                       │ │
    │ │   36 │   │   │   preds,                                             │ │
    │ │                                                                     │ │
    │ │ A:\_AI-ML\stability-matrix\Data\Packages\Stable Diffusion WebUI     │ │
    │ │ Forge\venv\lib\site-packages\ultralytics\utils\ops.py:312 in        │ │
    │ │ non_max_suppression                                                 │ │
    │ │                                                                     │ │
    │ │   311 │   │   │   boxes = x[:, :4] + c  # boxes (offset by class)   │ │
    │ │ > 312 │   │   │   i = torchvision.ops.nms(boxes, scores, iou_thres) │ │
    │ │   313 │   │   i = i[:max_det]  # limit detections                   │ │
    │ │                                                                     │ │
    │ │ A:\_AI-ML\stability-matrix\Data\Packages\Stable Diffusion WebUI     │ │
    │ │ Forge\venv\lib\site-packages\torchvision\ops\boxes.py:48 in nms     │ │
    │ │                                                                     │ │
    │ │    47 │   _assert_has_ops()                                         │ │
    │ │ >  48 │   return torch.ops.torchvision.nms(boxes, scores, iou_thres │ │
    │ │    49                                                               │ │
    │ │                                                                     │ │
    │ │ A:\_AI-ML\stability-matrix\Data\Packages\Stable Diffusion WebUI     │ │
    │ │ Forge\venv\lib\site-packages\torch\_ops.py:1255 in __call__         │ │
    │ │                                                                     │ │
    │ │   1254 │   │   │   return _call_overload_packet_from_python(self, * │ │
    │ │ > 1255 │   │   return self._op(*args, **kwargs)                     │ │
    │ │   1256                                                              │ │
    │ └─────────────────────────────────────────────────────────────────────┘ │
    │ NotImplementedError: Could not run 'torchvision::nms' with arguments    │
    │ from the 'CUDA' backend. This could be because the operator doesn't     │
    │ exist for this backend, or was omitted during the selective/custom      │
    │ build process (if using custom build). If you are a Facebook employee   │
    │ using PyTorch on mobile, please visit https://fburl.com/ptmfixes for    │
    │ possible resolutions. 'torchvision::nms' is only available for these    │
    │ backends: [CPU, Meta, QuantizedCPU, BackendSelect, Python,              │
    │ FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate,         │
    │ Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU,      │
    │ AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU,       │
    │ AutogradLazy, AutogradMTIA, AutogradMAIA, AutogradMeta, Tracer,         │
    │ AutocastCPU, AutocastMTIA, AutocastMAIA, AutocastXPU, AutocastMPS,      │
    │ AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, │
    │ Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot,             │
    │ FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].         │
    │                                                                         │
    │ CPU: registered at                                                      │
    │ C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\o │
    │ ps\cpu\nms_kernel.cpp:116 [kernel]                                      │
    │ Meta: registered at A:\_AI-ML\stability-matrix\Data\Packages\Stable     │
    │ Diffusion WebUI Forge\venv\lib\site-packages\torch\library.py:1059      │
    │ [kernel]                                                                │
    │ QuantizedCPU: registered at                                             │
    │ C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\o │
    │ ps\quantized\cpu\qnms_kernel.cpp:128 [kernel]                           │
    │ BackendSelect: fallthrough registered at                                │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\core\Back │
    │ endSelectFallbackKernel.cpp:3 [backend fallback]                        │
    │ Python: registered at                                                   │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\core\Pyth │
    │ onFallbackKernel.cpp:194 [backend fallback]                             │
    │ FuncTorchDynamicLayerBackMode: registered at                            │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\functorch │
    │ \DynamicLayer.cpp:479 [backend fallback]                                │
    │ Functionalize: registered at                                            │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\Functiona │
    │ lizeFallbackKernel.cpp:387 [backend fallback]                           │
    │ Named: registered at                                                    │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\core\Name │
    │ dRegistrations.cpp:7 [backend fallback]                                 │
    │ Conjugate: registered at                                                │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\Conjugate │
    │ Fallback.cpp:17 [backend fallback]                                      │
    │ Negative: registered at                                                 │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\native\Ne │
    │ gateFallback.cpp:18 [backend fallback]                                  │
    │ ZeroTensor: registered at                                               │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\ZeroTenso │
    │ rFallback.cpp:115 [backend fallback]                                    │
    │ ADInplaceOrView: fallthrough registered at                              │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\core\Vari │
    │ ableFallbackKernel.cpp:104 [backend fallback]                           │
    │ AutogradOther: registered at                                            │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\core\Vari │
    │ ableFallbackKernel.cpp:63 [backend fallback]                            │
    │ AutogradCPU: registered at                                              │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\core\Vari │
    │ ableFallbackKernel.cpp:67 [backend fallback]                            │
    │ AutogradCUDA: registered at                                             │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\core\Vari │
    │ ableFallbackKernel.cpp:75 [backend fallback]                            │
    │ AutogradXLA: registered at                                              │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\core\Vari │
    │ ableFallbackKernel.cpp:87 [backend fallback]                            │
    │ AutogradMPS: registered at                                              │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\core\Vari │
    │ ableFallbackKernel.cpp:95 [backend fallback]                            │
    │ AutogradXPU: registered at                                              │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\core\Vari │
    │ ableFallbackKernel.cpp:71 [backend fallback]                            │
    │ AutogradHPU: registered at                                              │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\core\Vari │
    │ ableFallbackKernel.cpp:108 [backend fallback]                           │
    │ AutogradLazy: registered at                                             │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\core\Vari │
    │ ableFallbackKernel.cpp:91 [backend fallback]                            │
    │ AutogradMTIA: registered at                                             │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\core\Vari │
    │ ableFallbackKernel.cpp:79 [backend fallback]                            │
    │ AutogradMAIA: registered at                                             │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\core\Vari │
    │ ableFallbackKernel.cpp:83 [backend fallback]                            │
    │ AutogradMeta: registered at                                             │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\core\Vari │
    │ ableFallbackKernel.cpp:99 [backend fallback]                            │
    │ Tracer: registered at                                                   │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\torch\csrc\autograd\Tra │
    │ ceTypeManual.cpp:294 [backend fallback]                                 │
    │ AutocastCPU: registered at                                              │
    │ C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\o │
    │ ps\autocast\nms_kernel.cpp:34 [kernel]                                  │
    │ AutocastMTIA: fallthrough registered at                                 │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\autocast_ │
    │ mode.cpp:468 [backend fallback]                                         │
    │ AutocastMAIA: fallthrough registered at                                 │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\autocast_ │
    │ mode.cpp:506 [backend fallback]                                         │
    │ AutocastXPU: registered at                                              │
    │ C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\o │
    │ ps\autocast\nms_kernel.cpp:41 [kernel]                                  │
    │ AutocastMPS: fallthrough registered at                                  │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\autocast_ │
    │ mode.cpp:209 [backend fallback]                                         │
    │ AutocastCUDA: registered at                                             │
    │ C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\o │
    │ ps\autocast\nms_kernel.cpp:27 [kernel]                                  │
    │ FuncTorchBatched: registered at                                         │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\functorch │
    │ \LegacyBatchingRegistrations.cpp:731 [backend fallback]                 │
    │ BatchedNestedTensor: registered at                                      │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\functorch │
    │ \LegacyBatchingRegistrations.cpp:758 [backend fallback]                 │
    │ FuncTorchVmapMode: fallthrough registered at                            │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\functorch │
    │ \VmapModeRegistrations.cpp:27 [backend fallback]                        │
    │ Batched: registered at                                                  │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\LegacyBat │
    │ chingRegistrations.cpp:1075 [backend fallback]                          │
    │ VmapMode: fallthrough registered at                                     │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\VmapModeR │
    │ egistrations.cpp:33 [backend fallback]                                  │
    │ FuncTorchGradWrapper: registered at                                     │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\functorch │
    │ \TensorWrapper.cpp:210 [backend fallback]                               │
    │ PythonTLSSnapshot: registered at                                        │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\core\Pyth │
    │ onFallbackKernel.cpp:202 [backend fallback]                             │
    │ FuncTorchDynamicLayerFrontMode: registered at                           │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\functorch │
    │ \DynamicLayer.cpp:475 [backend fallback]                                │
    │ PreDispatch: registered at                                              │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\core\Pyth │
    │ onFallbackKernel.cpp:206 [backend fallback]                             │
    │ PythonDispatcher: registered at                                         │
    │ C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\core\Pyth │
    │ onFallbackKernel.cpp:198 [backend fallback]                             │
    │                                                                         │
    └─────────────────────────────────────────────────────────────────────────┘


---

am i still solidly in user-error territory, or is this something that needs to be fixed?
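For what it's worth, the call that fails in the traceback is `torchvision.ops.nms` on CUDA tensors, so the problem can be reproduced outside the webui with a tiny probe like the one below (a sketch: `probe_cuda_nms` is my own name, and the function degrades gracefully when torch or a GPU is absent):

```python
# Minimal probe that exercises torchvision's nms op on CUDA -- the exact
# call that raises NotImplementedError in the traceback above.

def probe_cuda_nms() -> str:
    try:
        import torch
        import torchvision
    except ImportError:
        return "torch/torchvision not installed"
    if not torch.cuda.is_available():
        return "no CUDA device visible"
    boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0]], device="cuda")
    scores = torch.tensor([0.9], device="cuda")
    try:
        torchvision.ops.nms(boxes, scores, 0.5)
    except NotImplementedError:
        # A CPU-only torchvision wheel has no CUDA kernel registered for nms.
        return "torchvision build has no CUDA nms kernel"
    return "CUDA nms OK"

print(probe_cuda_nms())
```

Run inside the forge venv, this should print "CUDA nms OK" once torch and torchvision come from matching CUDA builds.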

Steps to reproduce

No response

Relevant logs

Version

2.15.4

What Operating System are you using?

Windows

Metadata

Assignees

No one assigned

    Labels

    bug: Something isn't working
