
Commit 076811f

Update installation docs
1 parent 3fbe2eb commit 076811f

7 files changed: +182 / -911 lines changed

BUILD.md

Lines changed: 1 addition & 127 deletions
@@ -1,127 +1 @@
Removed:

# Important notice

***In order to uniformize and simplify the build system we had to make choices. TC is currently only officially supported on Ubuntu 16.04 with gcc 5.4.0.***
Other configurations may work too but are not yet officially supported.
For more information about setting up the configuration we use to build the conda dependencies, see the following [Dockerfile](https://github.com/facebookresearch/TensorComprehensions/blob/master/conda_recipes/docker-images/tc-cuda9.0-cudnn7.1-ubuntu16.04-devel/Dockerfile).

Our main goal with this decision is to make the build procedure extremely simple, both reproducible internally and extensible to new targets in the future.
In particular, the gcc-4 / gcc-5 ABI switch is not something we want to concern ourselves with at this point, so we go with gcc 5.4.0.

# Prerequisites
Building TC from source requires `gmp`. To install on Ubuntu 16.04:

```
sudo apt-get install libgmp3-dev
```

More generally, the [Dockerfile](https://github.com/facebookresearch/TensorComprehensions/blob/master/conda_recipes/docker-images/tc-cuda9.0-cudnn7.1-ubuntu16.04-devel/Dockerfile) shows the environment we are using for testing and building conda packages.
# Conda from scratch (first time configuration)
Choose and set an INSTALLATION_PATH, then run the following:

```
wget https://repo.anaconda.com/archive/Anaconda3-5.1.0-Linux-x86_64.sh -O anaconda.sh && \
chmod +x anaconda.sh && \
./anaconda.sh -b -p ${INSTALLATION_PATH} && \
rm anaconda.sh

. ${INSTALLATION_PATH}/bin/activate
conda update -y -n base conda
```

Create a new environment in which TC will be built and install core dependencies:
```
conda create -y --name tc_build python=3.6
conda activate tc_build
conda install -y pyyaml mkl-include pytest
conda install -y -c nicolasvasilache llvm-tapir50 halide
```

Then install the PyTorch version that corresponds to your system binaries (e.g. for PyTorch with cuda 9.0):
```
conda install -y -c pytorch pytorch torchvision cuda90
conda remove -y cudatoolkit --force
```

***Note*** As of PyTorch 0.4, PyTorch links cuda libraries dynamically and pulls in cudatoolkit. However, cudatoolkit can never replace a system installation because it cannot package libcuda.so (which comes with the driver, not the toolkit). As a consequence, cudatoolkit only contains redundant libraries and we remove it explicitly. In the near future, the unified PyTorch + Caffe2 build system will link everything statically and stop pulling the cudatoolkit dependency.
# Activate preinstalled conda in your current terminal

Once the first-time configuration above has been completed, activate conda explicitly in each new terminal window (it is discouraged to add this to your `.bashrc` or equivalent):
```
. ${INSTALLATION_PATH}/bin/activate
conda activate tc_build
```

# Build TC with dependencies supplied by conda
```
CLANG_PREFIX=$(${CONDA_PREFIX}/bin/llvm-config --prefix) ./build.sh
```
You may need to pass the environment variable `CUDA_TOOLKIT_ROOT_DIR` pointing to your cuda installation (this is required for `FindCUDA.cmake` to find your cuda installation and can be omitted on most systems). When required, passing `CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda` is generally sufficient.
# Test locally
Run C++ tests:
```
./test.sh
```

Install the TC Python package locally to `/tmp` for smoke checking:
```
python setup.py install --prefix=/tmp
export PYTHONPATH=${PYTHONPATH}:$(find /tmp/lib -name site-packages)
```

Run Python smoke checks:
```
python -c 'import torch'
python -c 'import tensor_comprehensions'
```

Run Python tests:
```
./test_python/run_test.sh
```

At this point, if things work as expected you can venture installing as follows (always a good idea to record installed files for easy removal):
```
python setup.py install --record tc_files.txt
```
# Advanced / development mode installation

## Optional dependencies
Optionally, if you want to use Caffe2 (this is necessary for building the C++ benchmarks since Caffe2 is our baseline):
```
conda install -y -c conda-forge eigen
conda install -y -c nicolasvasilache caffe2
```

## Cudnn version 7.1 in Caffe2 / dev mode
***Note*** As of PyTorch 0.4, we need to package our own Caffe2. The current PyTorch + Caffe2 build system links cudnn dynamically. The version of cudnn that is linked dynamically is imposed on us by the docker image supported by NVIDIA ([Dockerfile](conda_recipes/docker-images/tc-cuda9.0-cudnn7.1-ubuntu16.04-devel/Dockerfile)). For now this cudnn version is cudnn 7.1.

If for some reason one cannot install cudnn 7.1 system-wide, one may resort to the following:
```
conda install -c anaconda cudnn
conda remove -y cudatoolkit --force
```

***Note*** cudnn pulls in a cudatoolkit dependency, but this can never replace a system installation because it cannot package libcuda.so (which comes with the driver, not the toolkit). As a consequence, cudatoolkit only contains redundant libraries and we remove it explicitly. In the near future, the unified PyTorch + Caffe2 build system will link everything statically and we will not need to worry about cudnn anymore.
Added:

see the [instructions](docs/source/installation.rst).

docs/source/index.rst

Lines changed: 1 addition & 5 deletions
@@ -55,11 +55,7 @@ Machine Learning.
   :caption: Installation

   installation

Removed:

   installation_docker_image
   installation_conda_dep
   installation_conda
   installation_non_conda

Added:

   installation_colab_research

.. toctree::
   :maxdepth: 1

docs/source/installation.rst

Lines changed: 180 additions & 13 deletions
@@ -1,21 +1,188 @@
Installation Guide
==================

Removed:

   **Author**: `Priya Goyal <https://github.com/prigoyal>`_

   The following instructions are provided for developers who would like to
   experiment with the library.

   At the moment, only :code:`Ubuntu 14.04` and :code:`Ubuntu 16.04` configurations are
   officially supported. Additionally, we routinely run on a custom CentOS7
   installation. If you are interested in running on non-Ubuntu configurations,
   please reach out and we will do our best to assist you. Contributing back new
   docker configurations to provide a stable environment to build from source on
   new systems would be highly appreciated. Please read :code:`docker/README.md` for
   how to build new docker images.

   Some users might prefer building TC in a :code:`non-conda` environment and some
   might prefer building in a :code:`conda` environment. We provide installation
   instructions for both environments.

   Further, we also provide runtime :code:`docker` images for both :code:`conda` and
   :code:`non-conda` environments, and also an :code:`nvidia-docker` runtime image so
   that TC has access to GPUs.

   You can choose whatever build settings suit your requirements best and follow the
   instructions to build. Please feel free to contact us in case you need help with
   the build.

Added:

.. note::

   Since Tensor Comprehensions is still a research project in its infancy,
   it is recommended to build master from source.

Conda installation
^^^^^^^^^^^^^^^^^^
You can install the latest released package for Tensor Comprehensions as follows:

.. code-block:: bash

   conda install -y -c pytorch -c tensorcomp tensor_comprehensions
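To quickly verify that the package imports correctly (a minimal smoke check,
mirroring the checks used in the build-from-source section below):

.. code-block:: bash

   python -c 'import torch'
   python -c 'import tensor_comprehensions'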
Build from source
^^^^^^^^^^^^^^^^^

.. note::

   **In order to uniformize and simplify the build system we had to make
   choices. TC is currently only officially supported on Ubuntu 16.04 with
   gcc 5.4.0 and takes most of its dependencies from conda packages.**
   Other configurations may work too but are not yet officially supported.
   For more information about the setup we use to build the conda
   dependencies see the following `Dockerfile <https://github.com/facebookresearch/TensorComprehensions/blob/master/conda_recipes/docker-images/tc-cuda9.0-cudnn7.1-ubuntu16.04-devel/Dockerfile>`_.
   Our main goal with this decision is to make the build procedure
   extremely simple, both reproducible internally and extensible to new
   targets in the future. In particular, the gcc-4 / gcc-5 ABI switch is
   not something we want to concern ourselves with at this point, so we go
   with gcc 5.4.0.

Prerequisites
"""""""""""""
Building TC from source requires gmp, cmake (v3.10 or higher), automake
and libtool. It is generally a good idea to look at the
`Dockerfile <https://github.com/facebookresearch/TensorComprehensions/blob/master/conda_recipes/docker-images/tc-cuda9.0-cudnn7.1-ubuntu16.04-devel/Dockerfile>`_,
which shows the environment we use for testing and building our packages.
To install on Ubuntu 16.04:

.. code-block:: bash

   sudo apt-get install libgmp3-dev cmake automake libtool
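Note that the cmake shipped with Ubuntu 16.04 may be older than the required
v3.10, so it is worth checking what the above actually gives you (and, if needed,
installing a newer cmake, for example from a conda channel):

.. code-block:: bash

   cmake --version   # should report 3.10 or newer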
Conda from scratch (first time configuration)
"""""""""""""""""""""""""""""""""""""""""""""
Choose and set an INSTALLATION_PATH, then run the following:

.. code-block:: bash

   wget https://repo.anaconda.com/archive/Anaconda3-5.1.0-Linux-x86_64.sh -O anaconda.sh && \
   chmod +x anaconda.sh && \
   ./anaconda.sh -b -p ${INSTALLATION_PATH} && \
   rm anaconda.sh

   . ${INSTALLATION_PATH}/bin/activate
   conda update -y -n base conda

Create a new environment in which TC will be built and install core dependencies:

.. code-block:: bash

   conda create -y --name tc_build python=3.6
   conda activate tc_build
   conda install -y pyyaml mkl-include pytest
   conda install -y -c nicolasvasilache llvm-tapir50 halide

Then install the PyTorch version that corresponds to your system binaries
(e.g. cuda 9.0):

.. code-block:: bash

   conda install -y -c pytorch pytorch torchvision cuda90
   conda remove -y cudatoolkit --force

.. note::

   As of PyTorch 0.4, PyTorch links cuda libraries dynamically and pulls in
   cudatoolkit. However, cudatoolkit can never replace a system installation
   because it cannot package libcuda.so (which comes with the driver, not
   the toolkit). As a consequence, cudatoolkit only contains redundant
   libraries and we remove it explicitly. In the near future, the unified
   PyTorch + Caffe2 build system will link everything statically and stop
   pulling the cudatoolkit dependency.
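To see which cuda version your system binaries provide (and hence which PyTorch
cuda package to pick above), a quick sketch:

.. code-block:: bash

   nvcc --version                                       # toolkit version on the system
   python -c 'import torch; print(torch.version.cuda)'  # cuda version PyTorch was built against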
Activate conda in your current terminal
""""""""""""""""""""""""""""""""""""""""

Once the first-time configuration above has been completed, one should activate
conda explicitly in each new terminal window (it is discouraged to add this to
your `.bashrc` or equivalent):

.. code-block:: bash

   . ${INSTALLATION_PATH}/bin/activate
   conda activate tc_build

Build TC with dependencies supplied by conda
""""""""""""""""""""""""""""""""""""""""""""

.. code-block:: bash

   CLANG_PREFIX=$(${CONDA_PREFIX}/bin/llvm-config --prefix) ./build.sh

You may need to pass the environment variable `CUDA_TOOLKIT_ROOT_DIR` pointing
to your cuda installation (this is required for `FindCUDA.cmake` to find your
cuda installation and can be omitted on most systems). When required, passing
`CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda` is generally sufficient.
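When the toolkit does not live where `FindCUDA.cmake` expects it, the invocation
might look like the following (a sketch; adjust the path to your actual
installation):

.. code-block:: bash

   CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda \
   CLANG_PREFIX=$(${CONDA_PREFIX}/bin/llvm-config --prefix) ./build.sh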
Test locally
""""""""""""
Run C++ tests:

.. code-block:: bash

   ./test.sh

Install the TC Python package locally to `/tmp` for smoke checking:

.. code-block:: bash

   python setup.py install --prefix=/tmp
   export PYTHONPATH=${PYTHONPATH}:$(find /tmp/lib -name site-packages)

Run Python smoke checks:

.. code-block:: bash

   python -c 'import torch'
   python -c 'import tensor_comprehensions'

Run Python tests:

.. code-block:: bash

   ./test_python/run_test.sh

At this point, if things work as expected you can venture installing as
follows (it is always a good idea to record installed files for easy removal):

.. code-block:: bash

   python setup.py install --record tc_files.txt
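The record file makes later removal straightforward; for example (a sketch,
double-check the recorded paths before deleting anything):

.. code-block:: bash

   xargs rm -rfv < tc_files.txt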
Advanced / development mode installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Optional dependencies
"""""""""""""""""""""

Optionally, if you want to use Caffe2 (this is necessary for building the C++
benchmarks since Caffe2 is our baseline):

.. code-block:: bash

   conda install -y -c conda-forge eigen
   conda install -y -c nicolasvasilache caffe2
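A quick way to check that Caffe2 is usable from Python (a minimal smoke test,
assuming the conda package puts caffe2 on your Python path):

.. code-block:: bash

   python -c 'from caffe2.python import core'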
Cudnn version 7.1 in Caffe2 / dev mode
""""""""""""""""""""""""""""""""""""""

.. note::

   As of PyTorch 0.4, we need to package our own Caffe2. The current
   PyTorch + Caffe2 build system links cudnn dynamically. The version of
   cudnn that is linked dynamically is imposed on us by the docker image
   supported by NVIDIA (see the
   `Dockerfile <https://github.com/facebookresearch/TensorComprehensions/blob/master/conda_recipes/docker-images/tc-cuda9.0-cudnn7.1-ubuntu16.04-devel/Dockerfile>`_).
   For now this cudnn version is cudnn 7.1.

If for some reason one cannot install cudnn 7.1 system-wide, one may resort
to the following:

.. code-block:: bash

   conda install -c anaconda cudnn
   conda remove -y cudatoolkit --force

.. note::

   cudnn pulls in a cudatoolkit dependency but this can never replace a
   system installation because it cannot package libcuda.so (which comes
   with the driver, not the toolkit).
   As a consequence, cudatoolkit only contains redundant libraries and we
   remove it explicitly. In the near future, the unified PyTorch + Caffe2
   build system will link everything statically and we will not need to
   worry about cudnn anymore.
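To confirm which cudnn version your environment ends up linking against, one
option (assuming PyTorch is installed in the environment) is:

.. code-block:: bash

   python -c 'import torch; print(torch.backends.cudnn.version())'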
