Hi Haanjack,
Thank you for this repo.
I tried using your code for OpenPose inference. It runs fine with the nvidia docker image 18.04, but when I use the nvidia docker image 19.01 it gives a segmentation fault at the following location:
context->enqueue(gParams.batchSize, &buffers[0], stream, nullptr);
Building and running a GPU inference engine for OpenPose, N=1...
Input "image": 3x656x368
Output "net_output": 78x82x46
Run inference...
name=image, bindingIndex=0, buffers.size()=2
name=net_output, bindingIndex=1, buffers.size()=2
Segmentation fault (core dumped)
I tried investigating the issue and found that the function
int PReLUPlugin::enqueue(int batchSize, const void *const *inputs, void **outputs, void *workspace, cudaStream_t stream)
is never called.
I also checked buffers[0] and the stream, which are both successfully allocated, but I have no clue why it segfaults.
Could you give me some debugging clues for solving this issue?
Kind Regards
Arun