This repository was archived by the owner on Apr 28, 2023. It is now read-only.

Commit d15cb75

Update writing_layers.rst
Fix broken link
1 parent: 4aad2b1


1 file changed: +1 -1 lines changed

docs/source/framework/pytorch_integration/writing_layers.rst

Lines changed: 1 addition & 1 deletion
@@ -200,7 +200,7 @@ functions. For example, assume one wants to use :code:`fmax` CUDA function in TC
     O = T.relu(torch.randn(100, 128, device='cuda'))
 
 TC only supports a subset of built-in CUDA functions.
-Built-in functions supported in TC are listed in `this file <https://github.com/facebookresearch/TensorComprehensions/blob/master/tc/core/libraries.h#L67>`_.
+Built-in functions supported in TC are listed in `this file <https://github.com/facebookresearch/TensorComprehensions/blob/master/tc/core/cuda/cuda_libraries.h#L67>`_.
 Documentation
 for these functions is available as part of the official `CUDA documentation <http://docs.nvidia.com/cuda/cuda-math-api/group__CUDA__MATH__SINGLE.html#group__CUDA__MATH__SINGLE>`_.
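
For context, the documentation section touched by this diff describes calling built-in CUDA functions such as fmax from inside a TC definition via the PyTorch frontend. Below is a minimal sketch of what that looks like, assuming the tensor_comprehensions Python package with its tc.define entry point and a CUDA-capable device; it is an illustration, not an excerpt from the docs being patched.

    # Sketch only: assumes the tensor_comprehensions package and a CUDA device.
    import torch
    import tensor_comprehensions as tc

    # fmax is one of the built-in CUDA functions listed in
    # tc/core/cuda/cuda_libraries.h, so it can be used directly in the TC string.
    LANG = """
    def relu(float(B, M) I) -> (O) {
        O(b, m) = fmax(I(b, m), 0)
    }
    """

    relu = tc.define(LANG, name="relu")          # compile the TC definition
    inp = torch.randn(100, 128, device="cuda")   # inputs must live on the GPU
    out = relu(inp)                              # O = max(I, 0), elementwise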

0 commit comments
