I am using a Windows laptop with an AMD Ryzen 7 PRO 7840U processor w/ Radeon 780M Graphics, 32 GB RAM, and DirectX support.
I created and activated a new conda environment using the commands below:

```
conda create -n olive-directml python=3.12
conda activate olive-directml
```
My installed Python version is 3.12.9.
First, I checked whether DirectML support is available for my device by installing onnxruntime-directml with the command below:

```
pip install onnxruntime-directml
```

onnxruntime-directml version installed: 1.20.1
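As a quick sanity check (the DirectML wheel installs under the same `onnxruntime` module name as the CPU wheel), the active version can be confirmed from Python:

```python
import onnxruntime as ort

# onnxruntime-directml is imported as plain "onnxruntime".
print(ort.__version__)  # expected: 1.20.1
```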
I checked the provider and device information using the code below:

```python
import onnxruntime as ort

providers = ort.get_available_providers()
print("Available providers:", providers)
print("Device:", ort.get_device())
```
Output:

```
Available providers: ['DmlExecutionProvider', 'CPUExecutionProvider']
Device: CPU-DML
```
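Given that output, I expected to be able to create a GPU-backed inference session roughly like this (a minimal sketch; `model.onnx` is a placeholder path, not a model I actually ran):

```python
import onnxruntime as ort

# Request DirectML first; ONNX Runtime falls back to CPU if it is unavailable.
session = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

# Shows which providers the session actually bound to.
print("Session providers:", session.get_providers())
```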
Next, I tried to follow the Olive documentation on GitHub and set up Olive for DirectML on Windows:
https://microsoft.github.io/Olive/getting-started/getting-started.html

I installed it using the command:

```
pip install olive-ai[directml,finetune]
```

onnxruntime-directml version installed: 1.20.1
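One diagnostic I can think of (my own idea, not a documented Olive step) is to list the installed ONNX Runtime packages, in case the olive-ai install pulled in a second ONNX Runtime wheel alongside onnxruntime-directml:

```
pip list | findstr onnxruntime
```

(`findstr` is the Windows equivalent of `grep` here.)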
Now when I checked the provider and device information again with the same code:

```python
import onnxruntime as ort

providers = ort.get_available_providers()
print("Available providers:", providers)
print("Device:", ort.get_device())
```
Output:

```
Available providers: ['AzureExecutionProvider', 'CPUExecutionProvider']
Device: CPU
```
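Because of this, anything that requests DirectML now silently falls back to CPU. A guard like the following (a minimal sketch to fail loudly instead of silently; `model.onnx` is again a placeholder) makes the regression obvious:

```python
import onnxruntime as ort

requested = "DmlExecutionProvider"
available = ort.get_available_providers()

# Fail loudly instead of silently running on CPU.
if requested not in available:
    raise RuntimeError(
        f"{requested} not available; found {available}. "
        "The DirectML build of ONNX Runtime may have been replaced."
    )

session = ort.InferenceSession("model.onnx", providers=[requested])  # placeholder
```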
'DmlExecutionProvider' is missing after the olive-ai installation.
Is this missing DmlExecutionProvider a known issue, or am I doing something wrong?
Can I use the integrated Radeon 780M GPU of my machine for inference via DirectML or not? If I can, please let me know the steps to follow.
I was able to follow the Olive documentation for CPU and perform inference using Olive, but I would also like to explore the DirectML capabilities of my device for inference.