2 changes: 2 additions & 0 deletions .github/ISSUE_TEMPLATE/bug_report.md
@@ -64,4 +64,6 @@ Baremetal or Container (if so, version):
 
 **Have you tried [the latest release](https://developer.nvidia.com/tensorrt)?**:
 
+**Attach the captured .json and .bin files from [TensorRT's API Capture tool](https://docs.nvidia.com/deeplearning/tensorrt/latest/inference-library/capture-replay.html) if you're on an x86_64 Unix system**
+
 **Can this model run on other frameworks?** For example run ONNX model with ONNXRuntime (`polygraphy run <model.onnx> --onnxrt`):
3 changes: 3 additions & 0 deletions quickstart/IntroNotebooks/onnx_helper.py
@@ -86,6 +86,7 @@ def predict(self, batch):  # result gets copied into output
         err = cudart.cudaMemcpyAsync(
             self.d_input, batch.ctypes.data, batch.nbytes, cudart.cudaMemcpyKind.cudaMemcpyHostToDevice, self.stream
         )
+        err = err[0] if isinstance(err, tuple) else err
         if err != cudart.cudaError_t.cudaSuccess:
             raise RuntimeError(f"Failed to copy input to device: {cudart.cudaGetErrorString(err)}")
 

@@ -100,11 +101,13 @@ def predict(self, batch):  # result gets copied into output
             cudart.cudaMemcpyKind.cudaMemcpyDeviceToHost,
             self.stream,
         )
+        err = err[0] if isinstance(err, tuple) else err
         if err != cudart.cudaError_t.cudaSuccess:
             raise RuntimeError(f"Failed to copy output from device: {cudart.cudaGetErrorString(err)}")
 
         # synchronize threads
         err = cudart.cudaStreamSynchronize(self.stream)
+        err = err[0] if isinstance(err, tuple) else err
         if err != cudart.cudaError_t.cudaSuccess:
             raise RuntimeError(f"Failed to synchronize stream: {cudart.cudaGetErrorString(err)}")
 
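The repeated `err = err[0] if isinstance(err, tuple) else err` line added in this diff normalizes cuda-python's calling convention: `cudart` binding functions return a tuple whose first element is the error code (followed by any output values), while code written against other conventions may see a bare error code. A minimal sketch of the unwrap pattern (the helper name `unwrap_err` and the simulated return values are illustrative, not part of the PR):

```python
def unwrap_err(ret):
    """Return the error code from a cuda-python call result.

    cuda-python APIs return a tuple (err, *outputs); a bare error code
    is also tolerated, matching the pattern added in this diff.
    """
    return ret[0] if isinstance(ret, tuple) else ret


# Simulated return values standing in for real cudart calls:
assert unwrap_err((0,)) == 0        # calls with no output, e.g. cudaStreamSynchronize
assert unwrap_err((0, 12345)) == 0  # calls with an output, e.g. cudaMalloc -> (err, ptr)
assert unwrap_err(0) == 0           # bare error code
```

Note that `cudart.cudaGetErrorString` in cuda-python likewise returns a tuple rather than a bare string, so the error messages in the `raise` lines above may benefit from the same unwrapping.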