
Commit 31eb1da

DTensor has moved to the public namespace (#3084)
1 parent 67ec2a5 commit 31eb1da

File tree

2 files changed: +4 -4 lines changed


beginner_source/dist_overview.rst

Lines changed: 1 addition & 1 deletion
@@ -35,7 +35,7 @@ Sharding primitives
 
 ``DTensor`` and ``DeviceMesh`` are primitives used to build parallelism in terms of sharded or replicated tensors on N-dimensional process groups.
 
-- `DTensor <https://github.com/pytorch/pytorch/blob/main/torch/distributed/_tensor/README.md>`__ represents a tensor that is sharded and/or replicated, and communicates automatically to reshard tensors as needed by operations.
+- `DTensor <https://github.com/pytorch/pytorch/blob/main/torch/distributed/tensor/README.md>`__ represents a tensor that is sharded and/or replicated, and communicates automatically to reshard tensors as needed by operations.
 - `DeviceMesh <https://pytorch.org/docs/stable/distributed.html#devicemesh>`__ abstracts the accelerator device communicators into a multi-dimensional array, which manages the underlying ``ProcessGroup`` instances for collective communications in multi-dimensional parallelisms. Try out our `Device Mesh Recipe <https://pytorch.org/tutorials/recipes/distributed_device_mesh.html>`__ to learn more.
 
 Communications APIs
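For context, this commit only updates links to reflect that DTensor now lives in the public ``torch.distributed.tensor`` namespace rather than the private ``torch.distributed._tensor`` one. A minimal sketch of the public import path (assuming PyTorch >= 2.5, CUDA devices, and at least two ranks, e.g. launched with ``torchrun --nproc-per-node=2``; mesh shape and tensor sizes here are illustrative):

```python
# Sketch only: DTensor via the public namespace this commit links to.
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import distribute_tensor, Shard

# A 1-D mesh over 2 devices; DeviceMesh manages the ProcessGroup internally.
mesh = init_device_mesh("cuda", (2,))

# Shard a tensor along dim 0; each rank holds one 4 x 4 slice.
full = torch.randn(8, 4)
dt = distribute_tensor(full, mesh, placements=[Shard(0)])
print(dt.to_local().shape)  # torch.Size([4, 4]) on each rank
```

Code written against the old private path only needs its imports updated; the placement types and ``distribute_tensor`` behave the same under the public namespace.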

recipes_source/distributed_comm_debug_mode.rst

Lines changed: 3 additions & 3 deletions
@@ -21,7 +21,7 @@ of parallel strategies to scale up distributed training. However, the lack of in
 between existing solutions poses a significant challenge, primarily due to the absence of a
 unified abstraction that can bridge these different parallelism strategies. To address this
 issue, PyTorch has proposed `DistributedTensor(DTensor)
-<https://github.com/pytorch/pytorch/blob/main/torch/distributed/_tensor/examples/comm_mode_features_example.py>`_
+<https://github.com/pytorch/pytorch/blob/main/torch/distributed/tensor/examples/comm_mode_features_example.py>`_
 which abstracts away the complexities of tensor communication in distributed training,
 providing a seamless user experience. However, when dealing with existing parallelism solutions and
 developing parallelism solutions using the unified abstraction like DTensor, the lack of transparency
@@ -194,7 +194,7 @@ Below is the interactive module tree visualization that you can use to upload yo
 <input type="file" id="file-input" accept=".json">
 </div>
 <div id="tree-container"></div>
-<script src="https://cdn.jsdelivr.net/gh/pytorch/pytorch@main/torch/distributed/_tensor/debug/comm_mode_broswer_visual.js"></script>
+<script src="https://cdn.jsdelivr.net/gh/pytorch/pytorch@main/torch/distributed/tensor/debug/comm_mode_broswer_visual.js"></script>
 </body>
 </html>
 
@@ -207,4 +207,4 @@ JSON outputs in the embedded visual browser.
 
 For more detailed information about ``CommDebugMode``, see
 `comm_mode_features_example.py
-<https://github.com/pytorch/pytorch/blob/main/torch/distributed/_tensor/examples/comm_mode_features_example.py>`_
+<https://github.com/pytorch/pytorch/blob/main/torch/distributed/tensor/examples/comm_mode_features_example.py>`_
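Since this recipe centers on ``CommDebugMode``, here is a hedged sketch of the public-namespace debug API: counting the collectives a DTensor operation triggers and dumping the JSON consumed by the embedded visual browser above. It assumes PyTorch >= 2.5, two ranks on a CUDA mesh as in the earlier sketch, and that ``generate_json_dump`` accepts a ``file_name`` argument as shown in the recipe's examples:

```python
# Sketch only: counting DTensor collectives with CommDebugMode
# from the public torch.distributed.tensor namespace.
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import distribute_tensor, Shard
from torch.distributed.tensor.debug import CommDebugMode

mesh = init_device_mesh("cuda", (2,))
a = distribute_tensor(torch.randn(8, 16), mesh, placements=[Shard(0)])
b = distribute_tensor(torch.randn(16, 8), mesh, placements=[Shard(1)])

comm_mode = CommDebugMode()
with comm_mode:
    out = a @ b  # sharded matmul; may reshard operands, triggering collectives

print(comm_mode.get_total_counts())  # total number of collective calls
comm_mode.generate_json_dump(file_name="comm_log.json")  # load in the browser
```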
