
Commit 7d2f1cd

kiszk authored and pytorchmergebot committed
Fix typos under docs directory (pytorch#88033)
This PR fixes typos in `.rst` and `.Doxyfile` files under the docs directory.

Pull Request resolved: pytorch#88033
Approved by: https://github.com/soulitzer
1 parent: c7ac333

8 files changed, +8 -8 lines changed

docs/caffe2/.Doxyfile-c

Lines changed: 1 addition & 1 deletion
@@ -1490,7 +1490,7 @@ EXT_LINKS_IN_WINDOW = NO
 
 FORMULA_FONTSIZE = 10
 
-# Use the FORMULA_TRANPARENT tag to determine whether or not the images
+# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
 # generated for formulas are transparent PNGs. Transparent PNGs are not
 # supported properly for IE 6.0, but are supported on all modern browsers.
 #

docs/caffe2/.Doxyfile-python

Lines changed: 1 addition & 1 deletion
@@ -1488,7 +1488,7 @@ EXT_LINKS_IN_WINDOW = NO
 
 FORMULA_FONTSIZE = 10
 
-# Use the FORMULA_TRANPARENT tag to determine whether or not the images
+# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
 # generated for formulas are transparent PNGs. Transparent PNGs are not
 # supported properly for IE 6.0, but are supported on all modern browsers.
 #
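Both Doxyfiles show only the comment above the tag, not the setting itself. For orientation, the pair of formula tags is typically set like this (the YES value is an assumed illustration, not taken from these files):

    FORMULA_FONTSIZE    = 10
    FORMULA_TRANSPARENT = YES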

docs/cpp/source/notes/tensor_cuda_stream.rst

Lines changed: 1 addition & 1 deletion
@@ -144,7 +144,7 @@ CUDA Stream Usage Examples
   // sum() on tensor0 use `myStream0` as current CUDA stream on device 0
   tensor0.sum();
 
-  // change the current device index to 1 by using CUDA device guard within a braket scope
+  // change the current device index to 1 by using CUDA device guard within a bracket scope
   {
     at::cuda::CUDAGuard device_guard{1};
     // create a tensor on device 1
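The patched snippet is ATen C++. A rough Python analogue of the same scoped device switch, a sketch assuming a machine with at least two CUDA devices and not taken from the patched docs, would be:

    import torch

    # run sum() on a tensor under a dedicated stream on device 0
    stream0 = torch.cuda.Stream(device=0)
    with torch.cuda.stream(stream0):
        tensor0 = torch.randn(100, device="cuda:0")
        tensor0.sum()

    # change the current device index to 1 only inside this block,
    # mirroring the scoped at::cuda::CUDAGuard in the C++ snippet
    with torch.cuda.device(1):
        tensor1 = torch.randn(100, device="cuda")  # created on device 1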

docs/source/cuda._sanitizer.rst

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ Here is an example of a simple synchronization error in PyTorch:
 
 The ``a`` tensor is initialized on the default stream and, without any synchronization
 methods, modified on a new stream. The two kernels will run concurrently on the same tensor,
-which might cause the second kernel to read unitialized data before the first one was able
+which might cause the second kernel to read uninitialized data before the first one was able
 to write it, or the first kernel might overwrite part of the result of the second.
 When this script is run on the commandline with:
 ::
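The script itself is cut off by the hunk. A minimal sketch of the kind of race the passage describes, an illustration rather than the docs' exact example, is:

    import torch

    # kernel 1: initialize `a` on the default stream
    a = torch.rand(10000, device="cuda")

    # kernel 2: modify `a` on a new stream with no synchronization in
    # between, so the two kernels may run concurrently on the same tensor
    with torch.cuda.stream(torch.cuda.Stream()):
        a.mul_(2)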

docs/source/data.rst

Lines changed: 1 addition & 1 deletion
@@ -65,7 +65,7 @@ in real time.
 
 See :class:`~torch.utils.data.IterableDataset` for more details.
 
-.. note:: When using an :class:`~torch.utils.data.IterableDataset` with
+.. note:: When using a :class:`~torch.utils.data.IterableDataset` with
           `multi-process data loading <Multi-process data loading_>`_. The same
           dataset object is replicated on each worker process, and thus the
           replicas must be configured differently to avoid duplicated data. See
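The per-worker configuration the note refers to usually means sharding the stream by worker id. A common sketch, assumed for illustration and not part of this diff, uses torch.utils.data.get_worker_info():

    import torch
    from torch.utils.data import IterableDataset, get_worker_info

    class RangeDataset(IterableDataset):
        # Yields 0..n-1 exactly once across all DataLoader workers.
        def __init__(self, n):
            self.n = n

        def __iter__(self):
            info = get_worker_info()
            if info is None:            # single-process data loading
                start, step = 0, 1
            else:                       # each worker takes a strided shard
                start, step = info.id, info.num_workers
            return iter(range(start, self.n, step))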

docs/source/fx.rst

Lines changed: 1 addition & 1 deletion
@@ -36,7 +36,7 @@ What is an FX transform? Essentially, it's a function that looks like this.
     # Step 3: Construct a Module to return
     return torch.fx.GraphModule(m, graph)
 
-Your transform will take in an :class:`torch.nn.Module`, acquire a :class:`Graph`
+Your transform will take in a :class:`torch.nn.Module`, acquire a :class:`Graph`
 from it, do some modifications, and return a new
 :class:`torch.nn.Module`. You should think of the :class:`torch.nn.Module` that your FX
 transform returns as identical to a regular :class:`torch.nn.Module` -- you can pass it to another
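Only Step 3 survives in this hunk. Filled in, a minimal transform of the shape the text describes, a sketch that assumes torch.fx.symbolic_trace as the tracing entry point and an arbitrary relu-to-gelu rewrite, might read:

    import torch
    import torch.fx

    def transform(m: torch.nn.Module) -> torch.nn.Module:
        # Step 1: Acquire a Graph by symbolically tracing the module
        graph: torch.fx.Graph = torch.fx.symbolic_trace(m).graph

        # Step 2: Modify the graph, e.g. swap every relu call for gelu
        for node in graph.nodes:
            if node.op == "call_function" and node.target is torch.relu:
                node.target = torch.nn.functional.gelu
        graph.lint()  # sanity-check the rewritten graph

        # Step 3: Construct a Module to return
        return torch.fx.GraphModule(m, graph)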

docs/source/quantization-support.rst

Lines changed: 1 addition & 1 deletion
@@ -529,7 +529,7 @@ Quantized dtypes and quantization schemes
 Note that operator implementations currently only
 support per channel quantization for weights of the **conv** and **linear**
 operators. Furthermore, the input data is
-mapped linearly to the the quantized data and vice versa
+mapped linearly to the quantized data and vice versa
 as follows:
 
 .. math::
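The body of the ``.. math::`` block falls outside the hunk. The linear (affine) mapping the sentence introduces has the standard form, reproduced here from the usual quantization scheme rather than from this diff:

    Q(x) = \text{round}\left(\frac{x}{\text{scale}} + \text{zero\_point}\right),
    \qquad
    \tilde{x} = (Q(x) - \text{zero\_point}) \times \text{scale}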

docs/source/quantization.rst

Lines changed: 1 addition & 1 deletion
@@ -354,7 +354,7 @@ QAT API Example::
     # attach a global qconfig, which contains information about what kind
     # of observers to attach. Use 'fbgemm' for server inference and
     # 'qnnpack' for mobile inference. Other quantization configurations such
-    # as selecting symmetric or assymetric quantization and MinMax or L2Norm
+    # as selecting symmetric or asymmetric quantization and MinMax or L2Norm
     # calibration techniques can be specified here.
     model_fp32.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
 
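The hunk stops just after the qconfig assignment. The eager-mode QAT flow around it, a sketch of the usual steps with the legacy torch.quantization API rather than the file's full example, typically continues:

    import torch

    model_fp32 = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU())
    model_fp32.train()

    # attach a global qconfig, as in the diff above
    model_fp32.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')

    # insert fake-quant modules and observers, then fine-tune as usual
    model_prepared = torch.quantization.prepare_qat(model_fp32)
    # ... training loop runs here ...

    # convert the trained model to a truly quantized one for inference
    model_prepared.eval()
    model_int8 = torch.quantization.convert(model_prepared)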
