microsoft/DirectML
DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers, including all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm.
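Many of the issues below concern the `torch-directml` package, which exposes DirectML to PyTorch. As a minimal, hedged sketch (not the library's documented recommendation), the typical device-selection pattern looks like this; it assumes the `torch-directml` package is installed and falls back to CPU when it is not:

```python
def pick_device():
    """Return a torch-directml device if available, else the CPU fallback.

    torch_directml.device() returns a torch.device bound to the default
    DirectML adapter; importing the package fails on systems without it.
    """
    try:
        import torch_directml
        return torch_directml.device()
    except ImportError:
        # torch-directml (or torch itself) is not installed: fall back to CPU.
        return "cpu"
```

Tensors and models can then be moved to the selected device with the usual `.to(device)` calls.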
C++ · MIT License
Issues
torch-directml : ImportError: cannot import name 'autocast' from 'torch.amp' (unknown location)
#648 opened by Berowne - 1
TypeError: 'staticmethod' object is not callable
#644 opened by pin24 - 0
DirectML run fails in some cases.
#645 opened by DeruVN - 1
Resample: Axis direction
#643 opened by schm0 - 3
torch.lstm raised an error with backend
#613 opened by Mithzyl - 1
DirectMLNpuInference fails to run on the Intel NPU
#625 opened by Lucashien - 1
Failed to use --fp16 and --use_dml_attn simultaneously in Whisper pytorch-directml
#598 opened by XciciciX - 0
The operator 'aten::native_dropout_backward' is not currently supported on the DML backend
#630 opened by maikelsz - 4
Compatibility issues with NumPy 2.0.0
#608 opened by GuangChen2333 - 0
Operator aten::upsample_bicubic2d.out
#596 opened by DeruVN - 1
torch.nn.DataParallel(net).to(dml) raised an error
#609 opened by etoyz - 1
ImportError: */torch_directml_native.cpython-312-x86_64-linux-gnu.so: undefined symbol *
#627 opened by yusufalma - 0
NPU dmlDevice not loading
#589 opened by alex2060 - 0
UnicodeDecodeError (utf-8 codec can't decode byte 0xcf) when using torch.uint8 in torch_directml
#619 opened by VadimShabashov - 0
Memory leak in DirectMLNpuInference sample
#618 opened by WTian-Yu - 0
Microsoft.ML.OnnxRuntime.DirectML and Microsoft.AI.DirectML C++ API produce incorrect mask output (detectron2 Mask R-CNN model) when using GPU
#616 opened by YOODS-Xu - 3
Mask outputs differ between onnxruntime and onnxruntime-directml (since onnxruntime-directml==1.15) when using the detectron2 Mask R-CNN model.
#614 opened by YOODS-Xu - 0
Python build fails to compile
#615 opened by daniellivingston - 2
Replace model in DirectMLNpuInference sample: The specified device interface or feature level is not supported on this system
#611 opened by WTian-Yu - 0
Onnxruntime compilation gets stuck, cannot exit.
#607 opened by Jay19751103 - 2
UserWarning: The operator 'aten::_foreach_lerp_.Scalar' is not currently supported on the DML backend
#604 opened by a36624705 - 0
Add support for llama.cpp
#603 opened by qxrbu - 0
Intel NPU
#582 opened by DavidMartinezGonzalez - 0
torch directml bug
#597 opened by xalteropsx - 2
future plans for torch-directml
#569 opened by kovtcharov - 2
RuntimeError: Could not allocate tensor with 234881024 bytes. There is not enough GPU video memory available!
#579 opened by HungYiChen - 4
UserWarning: Set seed for `privateuseone` device does not take effect.
#587 opened by jovanovic-milos - 3
Hi, please help on aten::softplus.out
#586 opened by DeruVN - 2
Torch-DirectML Layer Norm Produces Incorrect Result with Non-contiguous Input
#588 opened by NullSenseStudio - 1
UserWarning: The operator 'aten::native_dropout' is not currently supported on the DML backend
#592 opened by trevorjwood - 0
cbuffers with arrays and using dxdispatch.
#594 opened by vmadananth - 3
why DirectML isn't being further promoted
#575 opened by muyu66 - 1
use transformers RuntimeError: tensor.device().type() == at::DeviceType::PrivateUse1 INTERNAL ASSERT FAILED at
#578 opened by poo0054 - 2
How to Install DirectML on Windows?
#568 opened by TechVillain - 0
How to install torch-directml (cpuonly) on Linux?
#570 opened by ZqinKing - 0
NPU support in DirectML want to deploy yolov8n
#567 opened by GamePP - 0
Unable to run float16 ONNX model, only float32
#566 opened by ani-mal - 0
DirectML in UWP apps?
#563 opened by emaschino