Installation Guide
==================

.. note::

    Since Tensor Comprehensions is still a research project in its infancy,
    it is recommended to build master from source.

Conda installation
^^^^^^^^^^^^^^^^^^
You can install the latest released Tensor Comprehensions package as follows:

.. code-block:: bash

    conda install -y -c pytorch -c tensorcomp tensor_comprehensions

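To quickly verify the install, the same import smoke check used later in this
guide works here too:

.. code-block:: bash

    # should exit silently if the package was installed correctly
    python -c 'import tensor_comprehensions'
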
Build from source
^^^^^^^^^^^^^^^^^

.. note::

    **In order to unify and simplify the build system we had to make
    choices. TC is currently only officially supported on Ubuntu 16.04 with
    gcc 5.4.0 and takes most of its dependencies from conda packages.**
    Other configurations may work too but are not yet officially supported.
    For more information about the setup we use to build the conda
    dependencies, see the following `Dockerfile <https://github.com/facebookresearch/TensorComprehensions/blob/master/conda_recipes/docker-images/tc-cuda9.0-cudnn7.1-ubuntu16.04-devel/Dockerfile>`_.
    Our main goal with this decision is to make the build procedure
    extremely simple, both reproducible internally and extensible to new
    targets in the future. In particular, the gcc-4 / gcc-5 ABI switch is
    not something we want to concern ourselves with at this point, so we
    settled on gcc 5.4.0.

Prerequisites
"""""""""""""
Building TC from source requires gmp, cmake (v3.10 or higher), automake
and libtool. It is generally a good idea to look at the
`Dockerfile <https://github.com/facebookresearch/TensorComprehensions/blob/master/conda_recipes/docker-images/tc-cuda9.0-cudnn7.1-ubuntu16.04-devel/Dockerfile>`_,
which shows the environment we use for testing and building our packages.
To install them on Ubuntu 16.04:

.. code-block:: bash

    sudo apt-get install libgmp3-dev cmake automake libtool

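Note that the apt-provided cmake on Ubuntu 16.04 may be older than the required
v3.10, so it is worth checking the version before building (a quick sanity
check, not an official step):

.. code-block:: bash

    # TC needs cmake >= 3.10
    cmake --version | head -n 1
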
Conda from scratch (first time configuration)
"""""""""""""""""""""""""""""""""""""""""""""
Choose and set an INSTALLATION_PATH (e.g. `${HOME}/anaconda3`), then run the following:

.. code-block:: bash

    wget https://repo.anaconda.com/archive/Anaconda3-5.1.0-Linux-x86_64.sh -O anaconda.sh && \
    chmod +x anaconda.sh && \
    ./anaconda.sh -b -p ${INSTALLATION_PATH} && \
    rm anaconda.sh

    . ${INSTALLATION_PATH}/bin/activate
    conda update -y -n base conda

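To confirm that the freshly installed conda (rather than a system-wide one) is
the one now on your PATH, a quick check is:

.. code-block:: bash

    which conda
    conda info --base   # should print your ${INSTALLATION_PATH}
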
Create a new environment in which TC will be built and install core dependencies:

.. code-block:: bash

    conda create -y --name tc_build python=3.6
    conda activate tc_build
    conda install -y pyyaml mkl-include pytest
    conda install -y -c nicolasvasilache llvm-tapir50 halide

Then install the PyTorch version that corresponds to your system binaries
(e.g. cuda 9.0):

.. code-block:: bash

    conda install -y -c pytorch pytorch torchvision cuda90
    conda remove -y cudatoolkit --force

.. note::

    As of PyTorch 0.4, PyTorch links the cuda libraries dynamically and
    pulls in cudatoolkit. However, cudatoolkit can never replace a system
    installation because it cannot package libcuda.so (which comes with the
    driver, not the toolkit). As a consequence cudatoolkit only contains
    redundant libraries and we remove it explicitly. In the near future, the
    unified PyTorch + Caffe2 build system will link everything statically
    and stop pulling the cudatoolkit dependency.

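After removing cudatoolkit, you can sanity-check that PyTorch still picks up
the system CUDA installation (a quick check that assumes a working NVIDIA
driver):

.. code-block:: bash

    python -c 'import torch; print(torch.version.cuda, torch.cuda.is_available())'
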
Activate conda in your current terminal
"""""""""""""""""""""""""""""""""""""""
Once the first time configuration above has been completed, activate conda
explicitly in each new terminal window (adding this to your `.bashrc` or
equivalent is discouraged):

.. code-block:: bash

    . ${INSTALLATION_PATH}/bin/activate
    conda activate tc_build

Build TC with dependencies supplied by conda
""""""""""""""""""""""""""""""""""""""""""""

.. code-block:: bash

    CLANG_PREFIX=$(${CONDA_PREFIX}/bin/llvm-config --prefix) ./build.sh

You may need to pass the environment variable `CUDA_TOOLKIT_ROOT_DIR` pointing
to your cuda installation (it tells `FindCUDA.cmake` where to find your
cuda installation and can be omitted on most systems). When required, passing
`CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda` is generally sufficient.

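For example, on a machine where cuda lives under the default `/usr/local/cuda`
prefix, the combined invocation would look like this (adjust the path to match
your installation):

.. code-block:: bash

    CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda \
    CLANG_PREFIX=$(${CONDA_PREFIX}/bin/llvm-config --prefix) ./build.sh
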
Test locally
""""""""""""
Run C++ tests:

.. code-block:: bash

    ./test.sh

Install the TC Python package locally to `/tmp` for smoke checking:

.. code-block:: bash

    python setup.py install --prefix=/tmp
    export PYTHONPATH=${PYTHONPATH}:$(find /tmp/lib -name site-packages)

Run Python smoke checks:

.. code-block:: bash

    python -c 'import torch'
    python -c 'import tensor_comprehensions'

Run Python tests:

.. code-block:: bash

    ./test_python/run_test.sh

At this point, if things work as expected, you can go ahead and install as
follows (it is always a good idea to record the installed files for easy removal):

.. code-block:: bash

    python setup.py install --record tc_files.txt

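The recorded list makes a later uninstall straightforward; a minimal removal
sketch (review `tc_files.txt` before deleting anything):

.. code-block:: bash

    # remove every file recorded at install time
    xargs rm -f < tc_files.txt
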
Advanced / development mode installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Optional dependencies
"""""""""""""""""""""

If you want to use Caffe2 (this is necessary for building the C++
benchmarks, since Caffe2 is our baseline), additionally install:

.. code-block:: bash

    conda install -y -c conda-forge eigen
    conda install -y -c nicolasvasilache caffe2

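A quick way to confirm that the Caffe2 package resolves correctly inside the
`tc_build` environment (assuming the conda package ships the Python bindings)
is:

.. code-block:: bash

    # the canonical Caffe2 import smoke test
    python -c 'from caffe2.python import core'
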
Cudnn version 7.1 in Caffe2 / dev mode
""""""""""""""""""""""""""""""""""""""

.. note::

    As of PyTorch 0.4, we need to package our own Caffe2. The current
    PyTorch + Caffe2 build system links cudnn dynamically. The version of
    cudnn that is linked dynamically is imposed on us by the docker image
    supported by NVIDIA (see the
    `Dockerfile <https://github.com/facebookresearch/TensorComprehensions/blob/master/conda_recipes/docker-images/tc-cuda9.0-cudnn7.1-ubuntu16.04-devel/Dockerfile>`_).
    For now this cudnn version is cudnn 7.1.

If for some reason one cannot install cudnn 7.1 system-wide, one may resort
to the following:

.. code-block:: bash

    conda install -c anaconda cudnn
    conda remove -y cudatoolkit --force

.. note::

    cudnn pulls in a cudatoolkit dependency, but this can never replace a
    system installation because it cannot package libcuda.so (which comes
    with the driver, not the toolkit).
    As a consequence cudatoolkit only contains redundant libraries and we
    remove it explicitly. In the near future, the unified PyTorch + Caffe2
    build system will link everything statically and we will not need to
    worry about cudnn anymore.

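If you want to see which cudnn build conda actually resolved to, you can list
it explicitly (a quick check, not an official step):

.. code-block:: bash

    conda list cudnn
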