Describe the issue
Attempting to run inference with the ONNX Runtime C++ API (TensorRT EP). The equivalent Python pipeline works, but the input name retrieved from the session does not match the ONNX model's actual input name (input_1), as verified with Netron. Inference terminates with:
terminate called after throwing an instance of 'Ort::Exception'
what(): Invalid input name: serving_default_input_1:0
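For context, ONNX Runtime raises this Ort::Exception from Run() when a supplied input name does not match any graph input. A minimal sketch of the failing call, assuming an already-populated Ort::Value input_tensor; the output name here is illustrative, not taken from the model:

// Hypothetical failing call: the name reported by the C++ API is passed to
// Run(), which rejects it with "Invalid input name: serving_default_input_1:0".
const char* in_names[]  = {"serving_default_input_1:0"};
const char* out_names[] = {"output"};  // illustrative output name
auto outputs = session.Run(Ort::RunOptions{nullptr},
                           in_names, &input_tensor, 1,
                           out_names, 1);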
To reproduce
// Query the model's input names through the C++ API.
std::vector<std::string> input_names;
size_t input_count = session.GetInputCount();
for (size_t i = 0; i < input_count; ++i) {
    // GetInputNameAllocated returns an Ort::AllocatedStringPtr that frees the
    // underlying buffer when it goes out of scope, so copy into a std::string.
    auto input_name_ptr = session.GetInputNameAllocated(i, allocator);
    input_names.emplace_back(input_name_ptr.get());
    std::cout << "Detected Input " << i << ": " << input_names.back() << std::endl;
}
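For completeness, a sketch of the setup the snippet above assumes, as a standalone repro. The model path is illustrative, and the TensorRT EP is appended with default options through the standard C++ API:

#include <onnxruntime_cxx_api.h>
#include <iostream>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "trt_repro");
    Ort::SessionOptions session_options;

    // Register the TensorRT EP on device 0; all other fields keep their defaults.
    OrtTensorRTProviderOptions trt_options{};
    trt_options.device_id = 0;
    session_options.AppendExecutionProvider_TensorRT(trt_options);

    // "model.onnx" is an illustrative path.
    Ort::Session session(env, "model.onnx", session_options);
    Ort::AllocatorWithDefaultOptions allocator;

    // Dump the input names ORT reports, to compare against Netron.
    size_t input_count = session.GetInputCount();
    for (size_t i = 0; i < input_count; ++i) {
        auto name = session.GetInputNameAllocated(i, allocator);
        std::cout << "Input " << i << ": " << name.get() << std::endl;
    }
    return 0;
}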
Urgency
ASAP
Platform
Linux
OS Version
Ubuntu 22.04
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
1.20
ONNX Runtime API
C++
Architecture
X86
Execution Provider
TensorRT
Execution Provider Library Version
TensorRT 10.0.0.6