I'm running an ONNX model using CoreML. If I run the model via cargo, I can see it uses the GPU well, and inference takes about 40 s.
But if I build the model into a Mac app using the cargo swift crate, only the CPU is used, and inference takes about 2 min. For comparison, running the model with Python onnxruntime takes about 10 s.
This is a snapshot from when I run the model in the Mac app.
Any idea how I can find out why this happens, or how I can solve this issue?