# Operations on Tensors
# ~~~~~~~~~~~~~~~~~~~~~~~
#
# Over 1200 tensor operations, including arithmetic, linear algebra, matrix manipulation (transposing,
# indexing, slicing), sampling and more are
# comprehensively described `here <https://pytorch.org/docs/stable/torch.html>`__.
#
# Each of these operations can be run on the CPU and on an `Accelerator <https://pytorch.org/docs/stable/torch.html#accelerators>`__
# such as CUDA, MPS, MTIA, or XPU. If you’re using Colab, allocate an accelerator by going to Runtime > Change runtime type > GPU.
#
# By default, tensors are created on the CPU. We need to explicitly move tensors to the accelerator using
# the ``.to`` method (after checking for accelerator availability). Keep in mind that copying large tensors
# across devices can be expensive in terms of time and memory!

# We move our tensor to the current accelerator if available
if torch.accelerator.is_available():
    tensor = tensor.to(torch.accelerator.current_accelerator())
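
# The tensor's ``device`` attribute confirms where it now lives: something
# like ``cuda:0`` on a CUDA machine, or ``cpu`` if no accelerator was found
print(f"Tensor is stored on: {tensor.device}")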
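
# A small sampler of the operation families listed above (a minimal sketch;
# it assumes the 2-D ``tensor`` created earlier in this tutorial):
transposed = tensor.T        # matrix manipulation: transpose
first_row = tensor[0]        # indexing
first_column = tensor[:, 0]  # slicing
doubled = tensor.mul(2)      # element-wise arithmetic
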
######################################################################