
Commit 3f302a3

guangyey and svekars authored
[2/N] Refine beginner tutorial by accelerator api (#3168)
Co-authored-by: Svetlana Karslioglu <[email protected]>
1 parent b9b1656 · commit 3f302a3

File tree

2 files changed: +12 -18 lines changed


beginner_source/basics/quickstart_tutorial.py

Lines changed: 4 additions & 10 deletions
```diff
@@ -84,16 +84,10 @@
 # To define a neural network in PyTorch, we create a class that inherits
 # from `nn.Module <https://pytorch.org/docs/stable/generated/torch.nn.Module.html>`_. We define the layers of the network
 # in the ``__init__`` function and specify how data will pass through the network in the ``forward`` function. To accelerate
-# operations in the neural network, we move it to the GPU or MPS if available.
-
-# Get cpu, gpu or mps device for training.
-device = (
-    "cuda"
-    if torch.cuda.is_available()
-    else "mps"
-    if torch.backends.mps.is_available()
-    else "cpu"
-)
+# operations in the neural network, we move it to the `accelerator <https://pytorch.org/docs/stable/torch.html#accelerators>`__
+# such as CUDA, MPS, MTIA, or XPU. If the current accelerator is available, we will use it. Otherwise, we use the CPU.
+
+device = torch.accelerator.current_accelerator().type if torch.accelerator.is_available() else "cpu"
 print(f"Using {device} device")
 
 # Define model
```
beginner_source/basics/tensorqs_tutorial.py

Lines changed: 8 additions & 8 deletions
```diff
@@ -99,20 +99,20 @@
 # Operations on Tensors
 # ~~~~~~~~~~~~~~~~~~~~~~~
 #
-# Over 100 tensor operations, including arithmetic, linear algebra, matrix manipulation (transposing,
+# Over 1200 tensor operations, including arithmetic, linear algebra, matrix manipulation (transposing,
 # indexing, slicing), sampling and more are
 # comprehensively described `here <https://pytorch.org/docs/stable/torch.html>`__.
 #
-# Each of these operations can be run on the GPU (at typically higher speeds than on a
-# CPU). If you’re using Colab, allocate a GPU by going to Runtime > Change runtime type > GPU.
+# Each of these operations can be run on the CPU and `Accelerator <https://pytorch.org/docs/stable/torch.html#accelerators>`__
+# such as CUDA, MPS, MTIA, or XPU. If you’re using Colab, allocate an accelerator by going to Runtime > Change runtime type > GPU.
 #
-# By default, tensors are created on the CPU. We need to explicitly move tensors to the GPU using
-# ``.to`` method (after checking for GPU availability). Keep in mind that copying large tensors
+# By default, tensors are created on the CPU. We need to explicitly move tensors to the accelerator using
+# ``.to`` method (after checking for accelerator availability). Keep in mind that copying large tensors
 # across devices can be expensive in terms of time and memory!
 
-# We move our tensor to the GPU if available
-if torch.cuda.is_available():
-    tensor = tensor.to("cuda")
+# We move our tensor to the current accelerator if available
+if torch.accelerator.is_available():
+    tensor = tensor.to(torch.accelerator.current_accelerator())
 
 
 ######################################################################
```
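The tensor-movement change follows the same pattern. A minimal sketch of moving a tensor with ``.to`` under the new API, again assuming PyTorch >= 2.6; the guards here are illustrative hedges so the snippet also runs where no accelerator (or no PyTorch) is present:

```python
# Move a tensor to the current accelerator when one is available.
# The hasattr/ImportError guards are hedges added for portability;
# the tutorial itself assumes a recent PyTorch installation.
try:
    import torch

    tensor = torch.ones(4, 4)  # created on the CPU by default
    if hasattr(torch, "accelerator") and torch.accelerator.is_available():
        # .to accepts the torch.device returned by current_accelerator()
        tensor = tensor.to(torch.accelerator.current_accelerator())
    device_type = tensor.device.type
except ImportError:
    device_type = "cpu"  # PyTorch not installed at all

print(f"Tensor device: {device_type}")
```

Note that ``.to`` returns a new tensor on the target device rather than moving in place, which is why the result is reassigned; as the diff's comment warns, this copy can be expensive for large tensors.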
