
Commit bc57306

kiszk authored and pytorchmergebot committed
Fix typo under docs directory and RELEASE.md (pytorch#85896)
This PR fixes typo in rst files under docs directory and `RELEASE.md`. Pull Request resolved: pytorch#85896 Approved by: https://github.com/kit1980
1 parent 11224f3 commit bc57306

File tree: 7 files changed, +13 -13 lines changed

RELEASE.md (+3 -3)

@@ -14,7 +14,7 @@
 - [Release Candidate health validation](#release-candidate-health-validation)
 - [Cherry Picking Fixes](#cherry-picking-fixes)
 - [Promoting RCs to Stable](#promoting-rcs-to-stable)
-- [Additonal Steps to prepare for release day](#additonal-steps-to-prepare-for-release-day)
+- [Additional Steps to prepare for release day](#additional-steps-to-prepare-for-release-day)
 - [Modify release matrix](#modify-release-matrix)
 - [Open Google Colab issue](#open-google-colab-issue)
 - [Patch Releases](#patch-releases)

@@ -186,7 +186,7 @@ Promotion should occur in two steps:

 **NOTE**: The promotion of wheels to PyPI can only be done once so take caution when attempting to promote wheels to PyPI, (see https://github.com/pypa/warehouse/issues/726 for a discussion on potential draft releases within PyPI)

-## Additonal Steps to prepare for release day
+## Additional Steps to prepare for release day

 The following should be prepared for the release day

@@ -264,7 +264,7 @@ For versions of Python that we support we follow the [NEP 29 policy](https://num

 ## Accelerator Software

-For acclerator software like CUDA and ROCm we will typically use the following criteria:
+For accelerator software like CUDA and ROCm we will typically use the following criteria:
 * Support latest 2 minor versions

 ### Special support cases

docs/cpp/source/notes/tensor_cuda_stream.rst (+1 -1)

@@ -61,7 +61,7 @@ Pytorch's C++ API provides the following ways to set CUDA stream:

 .. attention::

-   This function may have nosthing to do with the current device. It only changes the current stream on the stream's device.
+   This function may have nothing to do with the current device. It only changes the current stream on the stream's device.
    We recommend using ``CUDAStreamGuard``, instead, since it switches to the stream's device and makes it the current stream on that device.
    ``CUDAStreamGuard`` will also restore the current device and stream when it's destroyed

docs/source/notes/autograd.rst (+1 -1)

@@ -203,7 +203,7 @@ grad mode in the next forward pass.

 The implementations in :ref:`nn-init-doc` also
 rely on no-grad mode when initializing the parameters as to avoid
-autograd tracking when updating the intialized parameters in-place.
+autograd tracking when updating the initialized parameters in-place.

 Inference Mode
 ^^^^^^^^^^^^^^
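The no-grad initialization pattern this hunk describes can be sketched in a few lines; the module and fill value below are illustrative, not taken from the commit:

```python
import torch
import torch.nn as nn

# Illustrative sketch of the pattern the autograd note describes:
# update a freshly created parameter in-place under no-grad mode so
# autograd does not record the initialization.
linear = nn.Linear(4, 4)
with torch.no_grad():
    linear.weight.fill_(0.5)  # in-place init, invisible to autograd

# The parameter still requires grad for later training, but the
# in-place fill left no grad_fn behind on the leaf tensor.
```

Outside of no-grad mode, the same in-place update on a leaf parameter that requires grad would raise an error, which is why the init functions wrap it this way.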

docs/source/quantization-support.rst (+1 -1)

@@ -543,7 +543,7 @@ as follows:

 where :math:`\text{clamp}(.)` is the same as :func:`~torch.clamp` while the
 scale :math:`s` and zero point :math:`z` are then computed
-as decribed in :class:`~torch.ao.quantization.observer.MinMaxObserver`, specifically:
+as described in :class:`~torch.ao.quantization.observer.MinMaxObserver`, specifically:

 .. math::
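The scale/zero-point computation this hunk references can be illustrated with plain arithmetic; the formulas below follow the asymmetric (affine) quint8 case described for ``MinMaxObserver``, and the observed range is a made-up example:

```python
# Affine (asymmetric) quantization parameters from an observed range,
# as MinMaxObserver computes them for quint8; the range is illustrative.
qmin, qmax = 0, 255            # quint8 target range
min_val, max_val = -1.0, 1.0   # observed min/max of the tensor

scale = (max_val - min_val) / (qmax - qmin)
zero_point = round(qmin - min_val / scale)
zero_point = max(qmin, min(qmax, zero_point))  # clamp into [qmin, qmax]

# Quantize one value: clamp(round(x / scale) + z, qmin, qmax)
x = 0.5
x_q = max(qmin, min(qmax, round(x / scale) + zero_point))
```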

docs/source/quantization.rst (+5 -5)

@@ -80,7 +80,7 @@ The following table compares the differences between Eager Mode Quantization and
 | |Static, Dynamic, |Static, Dynamic, |
 | |Weight Only |Weight Only |
 | | | |
-| |Quantiztion Aware |Quantiztion Aware |
+| |Quantization Aware |Quantization Aware |
 | |Training: |Training: |
 | |Static |Static |
 +-----------------+-------------------+-------------------+

@@ -632,15 +632,15 @@ Quantization Mode Support
 | |Quantization |Dataset | Works Best For | Accuracy | Notes |
 | |Mode |Requirement | | | |
 +-----------------------------+---------------------------------+--------------------+----------------+----------------+------------+-----------------+
-|Post Training Quantization |Dyanmic/Weight Only Quantization |activation |None |LSTM, MLP, |good |Easy to use, |
+|Post Training Quantization |Dynamic/Weight Only Quantization |activation |None |LSTM, MLP, |good |Easy to use, |
 | | |dynamically | |Embedding, | |close to static |
 | | |quantized (fp16, | |Transformer | |quantization when|
 | | |int8) or not | | | |performance is |
 | | |quantized, weight | | | |compute or memory|
 | | |statically quantized| | | |bound due to |
 | | |(fp16, int8, in4) | | | |weights |
 | +---------------------------------+--------------------+----------------+----------------+------------+-----------------+
-| |Static Quantization |acivation and |calibration |CNN |good |Provides best |
+| |Static Quantization |activation and |calibration |CNN |good |Provides best |
 | | |weights statically |dataset | | |perf, may have |
 | | |quantized (int8) | | | |big impact on |
 | | | | | | |accuracy, good |

@@ -652,7 +652,7 @@ Quantization Mode Support
 | | |weight are fake |dataset | | |for now |
 | | |quantized | | | | |
 | +---------------------------------+--------------------+----------------+----------------+------------+-----------------+
-| |Static Quantization |activatio nand |fine-tuning |CNN, MLP, |best |Typically used |
+| |Static Quantization |activation and |fine-tuning |CNN, MLP, |best |Typically used |
 | | |weight are fake |dataset |Embedding | |when static |
 | | |quantized | | | |quantization |
 | | | | | | |leads to bad |

@@ -736,7 +736,7 @@ Backend/Hardware Support
 +-----------------+---------------+------------+------------+------------+
 |server GPU |TensorRT (early|Not support |Supported |Static |
 | |prototype) |this it | |Quantization|
-| | |requries a | | |
+| | |requires a | | |
 | | |graph | | |
 +-----------------+---------------+------------+------------+------------+
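The "Easy to use" dynamic-quantization row in these tables corresponds to a one-call workflow; a minimal sketch using ``torch.ao.quantization.quantize_dynamic`` on an illustrative MLP (the layer sizes are made up):

```python
import torch
import torch.nn as nn
from torch.ao.quantization import quantize_dynamic

# Post-training dynamic quantization: weights are quantized ahead of
# time, activations are quantized on the fly at inference.
# The model below is illustrative, not from the docs being patched.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
qmodel = quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

out = qmodel(torch.randn(2, 8))  # runs with dynamically quantized Linears
```

No calibration dataset is needed, which is the "Dataset Requirement: None" entry in the mode-support table.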

docs/source/rpc.rst (+1 -1)

@@ -16,7 +16,7 @@ machines.
 CUDA support was introduced in PyTorch 1.9 and is still a **beta** feature.
 Not all features of the RPC package are yet compatible with CUDA support and
 thus their use is discouraged. These unsupported features include: RRefs,
-JIT compatibility, dist autograd and dist optimizier, and profiling. These
+JIT compatibility, dist autograd and dist optimizer, and profiling. These
 shortcomings will be addressed in future releases.

 .. note ::

docs/source/sparse.rst (+1 -1)

@@ -470,7 +470,7 @@ ncols, *densesize)`` where ``len(batchsize) == B`` and

 The batches of sparse CSR tensors are dependent: the number of
 specified elements in all batches must be the same. This somewhat
-artifical constraint allows efficient storage of the indices of
+artificial constraint allows efficient storage of the indices of
 different CSR batches.

 .. note::
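The equal-nnz constraint on batched CSR tensors that this hunk describes can be seen by constructing a small batch directly; the shapes and values below are illustrative:

```python
import torch

# A batch of B=2 sparse CSR matrices, each 2x3, with the same number of
# specified elements (2) per batch, as the constraint requires.
crow_indices = torch.tensor([[0, 1, 2],    # batch 0: one element per row
                             [0, 0, 2]])   # batch 1: both elements in row 1
col_indices = torch.tensor([[0, 2],
                            [1, 2]])
values = torch.tensor([[1., 2.],
                       [3., 4.]])
batched = torch.sparse_csr_tensor(crow_indices, col_indices, values,
                                  size=(2, 2, 3))
```

Because every batch has the same nnz, ``col_indices`` and ``values`` can be stored as dense ``(B, nnz)`` tensors, which is the efficiency the constraint buys.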
