- General-purpose implementation (abstractions for different NN plug-and-play pipelines will be added in the future)
- Friendly for deployment in the industrial sector.
- Faster than OpenCV's DNN inference on both CPU and GPU.
- Supports FP32 and FP16 CUDA acceleration.
| Dependency | Version |
|---|---|
| ONNX Runtime (Linux, Windows, macOS) | >=1.14.1 |
| OpenCV | >=4.0.0 |
| C++ Standard | >=17 |
| CMake | >=3.5 |
| CUDA (optional) | >=12.8 |
| cuDNN (requires CUDA) | =9 |
Note: The dependency on C++17 comes from the use of the C++17 `<filesystem>` library.
Note (2): Due to ONNX Runtime, CUDA 12(.8) and cuDNN 9 are required. Keep in mind that this requirement might change in the future.
You can simply run the install script:

```console
./install.sh
```

For manual installation:

- Clone the repository to your local machine.

- Navigate to the root directory of the repository.

- Create a build directory and navigate to it:

  ```console
  mkdir build && cd build
  ```
- Run CMake to generate the build files:

  ```console
  cmake ..
  ```

  Notice: If you encounter an error indicating that the `ONNXRUNTIME_ROOT` variable is not set correctly, you can resolve this by building the project with the command tailored to your system:

  ```console
  # compiled on a win32 system
  cmake -D WIN32=TRUE ..
  # compiled on a linux system
  cmake -D LINUX=TRUE ..
  # compiled on an apple system
  cmake -D APPLE=TRUE ..
  ```
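The platform flags above might be consumed in a `CMakeLists.txt` roughly like this; the paths and variable values are a hypothetical sketch, not the project's actual build script:

```cmake
# Hypothetical sketch: select an ONNX Runtime location per platform.
# Actual directory names depend on which ONNX Runtime package you downloaded.
if(WIN32)
    set(ONNXRUNTIME_ROOT "${CMAKE_SOURCE_DIR}/onnxruntime-win-x64")
elseif(LINUX)
    set(ONNXRUNTIME_ROOT "${CMAKE_SOURCE_DIR}/onnxruntime-linux-x64")
elseif(APPLE)
    set(ONNXRUNTIME_ROOT "${CMAKE_SOURCE_DIR}/onnxruntime-osx-universal2")
endif()

include_directories(${ONNXRUNTIME_ROOT}/include)
```

Passing `-D WIN32=TRUE` (or `LINUX`/`APPLE`) on the command line simply forces the matching branch when automatic platform detection fails.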
- Build the project:

  ```console
  make
  ```

- The built executable should now be located in the `build` directory:

  ```console
  ./Yolov8OnnxRuntimeCPPInference
  ```

  Notice: Make sure you have an image in the `build/image` folder.