terminate called after throwing an instance of 'Ort::Exception' what(): Invalid input name: serving_default_input_1:0 #23730

zainali-AT opened this issue Feb 18, 2025

Describe the issue

I am attempting to run inference using the ONNX Runtime C++ API with the TensorRT execution provider. The equivalent Python pipeline works, but the input name retrieved through the C++ API does not match the ONNX model's actual input name (input_1), as verified with Netron.

terminate called after throwing an instance of 'Ort::Exception'
  what():  Invalid input name: serving_default_input_1:0

To reproduce

// Get the input count and collect the input names reported by the session
size_t input_count = session.GetInputCount();
for (size_t i = 0; i < input_count; ++i) {
    auto input_name_ptr = session.GetInputNameAllocated(i, allocator);
    // std::string input_name = input_name_ptr.get();
    input_names.push_back(input_name_ptr.get());
    // std::cout << "Detected Input " << i << ": " << input_name << std::endl;
}
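
For reference, a minimal self-contained sketch of the same enumeration (assumptions: a placeholder model path of "model.onnx", the names copied into std::string so they outlive the Ort::AllocatedStringPtr, and the default provider used instead of TensorRT just to inspect metadata):

#include <onnxruntime_cxx_api.h>
#include <iostream>
#include <string>
#include <vector>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "inspect-inputs");
    Ort::SessionOptions options;
    Ort::Session session(env, "model.onnx", options);  // placeholder path
    Ort::AllocatorWithDefaultOptions allocator;

    std::vector<std::string> input_names;
    size_t input_count = session.GetInputCount();
    for (size_t i = 0; i < input_count; ++i) {
        // GetInputNameAllocated returns an Ort::AllocatedStringPtr; copy the
        // name into a std::string so it stays valid after the smart pointer
        // is destroyed at the end of this iteration.
        Ort::AllocatedStringPtr name_ptr = session.GetInputNameAllocated(i, allocator);
        input_names.emplace_back(name_ptr.get());
        std::cout << "Input " << i << ": " << input_names.back() << std::endl;
    }
    return 0;
}

Comparing the printed names against what Netron shows should narrow down whether the mismatch comes from the session's metadata itself or from how the names are later passed to Run().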

Urgency

ASAP

Platform

Linux

OS Version

Ubuntu 22.04

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

1.20

ONNX Runtime API

C++

Architecture

X86

Execution Provider

TensorRT

Execution Provider Library Version

TensorRT 10.0.0.6
