Collected links and contacts for how to add ops to torch-mlir.
You will need to do 5 things:
- Make sure -DTORCH_MLIR_ENABLE_JIT_IR_IMPORTER=ON is added during the build. This enables the Python files used by build_tools/update_torch_ods.sh and build_tools/update_abstract_interp_lib.sh.
- Make sure the op exists in torch_ods_gen.py, then run build_tools/update_torch_ods.sh, and then build. This generates GeneratedTorchOps.td, which is used to generate the cpp and h files where op function signatures are defined. (Reference: torch op registry. A sketch of a typical entry follows this list.)
- Make sure the op exists in abstract_interp_lib_gen.py, then run build_tools/update_abstract_interp_lib.sh, and then build. This generates AbstractInterpLib.cpp, which provides the shape and dtype functions used by the shape/dtype refinement passes. (Reference: torch shape functions. A sketch of typical shape/dtype functions follows this list.)
- Write test cases. They live in projects/pt1. See the Dec 2023 example. (A sketch of a typical e2e test follows this list.)
- Implement the op in one of the lib/Conversion/TorchToLinalg/*.cpp files.
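For the torch_ods_gen.py step, an op is registered by adding an emit(...) call carrying the op's JIT operator signature. A minimal sketch, using aten::tanh purely as a stand-in; the exact signature string for your op comes from the torch op registry:

```python
# Added inside torch_ods_gen.py, in the section of the file that matches your
# op's category. The string is the op's registered JIT schema, in the form
# "namespace::opname : (operand types) -> (result types)".
emit("aten::tanh : (Tensor) -> (Tensor)")
```

After adding the line, run build_tools/update_torch_ods.sh and check that GeneratedTorchOps.td picks up the new op definition.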
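For the abstract_interp_lib_gen.py step, you typically add a shape function and a dtype function following the file's aten〇&lt;op&gt;〡shape / aten〇&lt;op&gt;〡dtype naming convention. A minimal sketch, assuming a hypothetical elementwise op that preserves both shape and dtype; the real file also provides upstream helper functions that you should prefer when they fit:

```python
from typing import List, Tuple

# "my_elementwise_op" is a hypothetical placeholder; substitute your op's name.
def aten〇my_elementwise_op〡shape(self: List[int]) -> List[int]:
    # Elementwise: the result shape is the same as the input shape.
    return self

def aten〇my_elementwise_op〡dtype(self_rank_dtype: Tuple[int, int]) -> int:
    # Elementwise op that keeps the input dtype; ops that promote dtypes
    # need real promotion logic here.
    self_rank, self_dtype = self_rank_dtype
    return self_dtype
```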
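For the test-case step, the pt1 e2e tests pair a small torch.nn.Module with a registered test case; they live under projects/pt1/python/torch_mlir_e2e_test/test_suite. A minimal sketch modeled on the existing elementwise tests; the module and test names here are illustrative:

```python
import torch

from torch_mlir_e2e_test.framework import TestUtils
from torch_mlir_e2e_test.registry import register_test_case
from torch_mlir_e2e_test.annotations import annotate_args, export


class MyOpModule(torch.nn.Module):
    @export
    @annotate_args([
        None,                              # the module itself
        ([-1, -1], torch.float32, True),   # a dynamically shaped 2-D float tensor
    ])
    def forward(self, x):
        # Substitute the op you are adding; torch.tanh is just a stand-in.
        return torch.tanh(x)


@register_test_case(module_factory=lambda: MyOpModule())
def MyOpModule_basic(module, tu: TestUtils):
    # The e2e framework runs this module through torch-mlir and compares the
    # result against eager PyTorch.
    module.forward(tu.rand(3, 4))
```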
Reference Examples
- A Dec 2023 example with the most up to date lowering
- Chi's simple example of adding an op lowering; the comments contain useful instructions and links for understanding the op lowering pipeline in torch-mlir
Resources:
- how to set up torch-mlir
- torch-mlir doc on how to debug and test
- torch op registry
- torch shape functions
- Generate the big folder of ONNX IR. Use this Python script. Alternatively, if you're trying to support a certain model, convert that model to ONNX IR with:
  optimum-cli export onnx --model facebook/opt-125M fb-opt
  python -m torch_mlir.tools.import_onnx fb-opt/model.onnx -o fb-opt-125m.onnx.mlir
  (A small export sketch follows this list.)
- Find an instance of the op that you're trying to implement inside the smoke tests folder or the generated model IR, and write a test case. Later you will save it to one of the files in torch-mlir/test/Conversion/TorchOnnxToTorch, but for now feel free to put it anywhere.
- Implement the op in lib/Conversion/TorchOnnxToTorch/something.cpp.
- Test the conversion by running ./build/bin/torch-mlir-opt -split-input-file -verify-diagnostics -convert-torch-onnx-to-torch your_mlir_file.mlir. For more details, see the testing section of the doc on development. Xida usually creates a separate MLIR file to test it to his satisfaction before integrating it into one of the files at torch-mlir/test/Conversion/TorchOnnxToTorch.
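If you just need a small .onnx file to exercise the importer, any torch.nn.Module can be exported directly; a minimal sketch (TinyModel is a hypothetical stand-in):

```python
import torch

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.tanh(x)

# Export a toy model to ONNX, then import it with the tool shown above:
#   python -m torch_mlir.tools.import_onnx tiny.onnx -o tiny.onnx.mlir
torch.onnx.export(TinyModel(), (torch.randn(2, 3),), "tiny.onnx")
```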
Helpful examples:
- Generate FileCheck tests from MLIR test cases: torch-mlir-opt -convert-<your conversion> /tmp/your_awesome_testcase.mlir | externals/llvm-project/mlir/utils/generate-test-checks.py. Please don't just paste the generated tests; reference them to write your own.
People who've worked on this for a while
- Vivek (@vivek97 on discord)
- Chi Liu
Recent Turbine Camp attendees, from most recent to least recent
- Xida Ren (@xida_ren on discord)
- Sungsoon Cho
- IMPORTANT: read the LLVM style guide
- Tutorials
- Sungsoon's Shark Getting Started Google Doc
- This document contains commands that would help you set up shark and run demos
- How to implement ONNX op lowering
- Examples
- A Dec 2023 example with the most up to date lowering
- Chi's Example Lowering
- GitHub issue and code detailing how to implement the lowering of an op.
- Chi's simple example of adding an op lowering; the comments contain useful instructions and links for understanding the op lowering pipeline in torch-mlir
- If you have questions, reach out to Chi on Discord
- Vivek's example of ONNX op lowering
- Find Ops To Lower
- Torch MLIR + ONNX Unimplemented Ops on Sharepoint
- If you don't have access yet, request it.
- nod-ai/SHARK-Turbine issues tracking op support
Xida: This is optional. If you're using VS Code like me, you might want to set it up so you can use jump to definition / find references, auto-fix, and other features.
Feel free to contact me on discord if you have trouble figuring this out.
You may need to write something like this into your .vscode/settings.json under torch-mlir:
{
"files.associations": {
"*.inc": "cpp",
"ranges": "cpp",
"regex": "cpp",
"functional": "cpp",
"chrono": "cpp",
"__functional_03": "cpp",
"target": "cpp"
},
"cmake.sourceDirectory": ["/home/xida/torch-mlir/externals/llvm-project/llvm"],
"cmake.buildDirectory": "${workspaceFolder}/build",
"cmake.generator": "Ninja",
"cmake.configureArgs": [
"-DLLVM_ENABLE_PROJECTS=mlir",
"-DLLVM_EXTERNAL_PROJECTS=\"torch-mlir\"",
"-DLLVM_EXTERNAL_TORCH_MLIR_SOURCE_DIR=\"/home/xida/torch-mlir\"",
"-DCMAKE_BUILD_TYPE=Release",
"-DCMAKE_C_COMPILER_LAUNCHER=ccache",
"-DCMAKE_CXX_COMPILER_LAUNCHER=ccache",
"-DLLVM_ENABLE_PROJECTS=mlir",
"-DLLVM_EXTERNAL_PROJECTS=torch-mlir",
"-DLLVM_EXTERNAL_TORCH_MLIR_SOURCE_DIR=${workspaceFolder}",
"-DMLIR_ENABLE_BINDINGS_PYTHON=ON",
"-DLLVM_ENABLE_ASSERTIONS=ON",
"-DLLVM_TARGETS_TO_BUILD=host",
],
"C_Cpp.default.configurationProvider": "ms-vscode.cmake-tools",
"cmake.configureEnvironment": {
"PATH": "/home/xida/miniconda/envs/torch-mlir/bin:/home/xida/miniconda/condabin:/home/xida/miniconda/bin:/home/xida/miniconda/bin:/home/xida/miniconda/condabin:/home/xida/miniconda/bin:/home/xida/miniconda/bin:/home/xida/miniconda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
},
"cmake.cmakePath": "/home/xida/miniconda/envs/torch-mlir/bin/cmake", // make sure this is a cmake that knows where your python is
}
The important things to note are cmake.configureArgs, which specify the location of your torch-mlir checkout, and cmake.sourceDirectory, which tells CMake not to build from the current directory and instead to build from externals/llvm-project/llvm.