
Commit d40a454

kiszk authored and pytorchmergebot committed
Fix typo under docs directory (pytorch#92762)
This PR fixes typos and a URL (`http` -> `https`) in `rst` files under the `docs` directory.

Pull Request resolved: pytorch#92762
Approved by: https://github.com/H-Huang
1 parent 8f294f7 commit d40a454

9 files changed, +15 −15 lines changed

docs/source/dynamo/faq.rst (+1 −1)

@@ -231,7 +231,7 @@ generated:
 How are you speeding up my code?
 --------------------------------

-There are 3 major ways to accelerat PyTorch code:
+There are 3 major ways to accelerate PyTorch code:

 1. Kernel fusion via vertical fusions which fuse sequential operations to avoid
    excessive read/writes. For example, fuse 2 subsequent cosines means you
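As an aside on the vertical fusion that hunk describes: a minimal sketch, assuming a PyTorch 2.x build where `torch.compile` is available, of two sequential cosines that the backend compiler can fuse into one kernel.

```python
import torch

@torch.compile  # TorchDynamo captures the graph; the backend can fuse the ops
def two_cosines(x):
    # Two back-to-back cosines: a vertical-fusion candidate that avoids
    # writing the intermediate result to memory between the two ops.
    return torch.cos(torch.cos(x))

out = two_cosines(torch.randn(1024))
```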

docs/source/dynamo/guards-overview.rst (+1 −1)

@@ -275,7 +275,7 @@ mind:

 - It stores the variable ``source`` of type ``Source``, from
   ``torchdynamo/source.py``. This source type is a relatively self
-  contained class that helps us organize and bookeep where the original
+  contained class that helps us organize and bookkeep where the original
   source came from, and helps provide convenience methods for things
   like getting the name, and importantly for us, producing guards.

docs/source/dynamo/troubleshooting.rst (+1 −1)

@@ -650,7 +650,7 @@ to detect bugs in our codegen or with a backend compiler.
 File an Issue
 ~~~~~~~~~~~~~

-If you experience problems with TorchDynamo, `file a github
+If you experience problems with TorchDynamo, `file a GitHub
 issue <https://github.com/pytorch/torchdynamo/issues>`__.

 Before filing an issue, read over the `README <../README.md>`__,

docs/source/elastic/kubernetes.rst (+1 −1)

@@ -1,5 +1,5 @@
 TorchElastic Kubernetes
 ==========================

-Please refer to our github's `Kubernetes README <https://github.com/pytorch/elastic/tree/master/kubernetes>`_
+Please refer to our GitHub's `Kubernetes README <https://github.com/pytorch/elastic/tree/master/kubernetes>`_
 for more information on Elastic Job Controller and custom resource definition.

docs/source/func.rst (+1 −1)

@@ -13,7 +13,7 @@ torch.func, previously known as "functorch", is
 may change under user feedback and we don't have full coverage over PyTorch operations.

 If you have suggestions on the API or use-cases you'd like to be covered, please
-open an github issue or reach out. We'd love to hear about how you're using the library.
+open an GitHub issue or reach out. We'd love to hear about how you're using the library.

 What are composable function transforms?
 ----------------------------------------
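For context on the composable function transforms the heading in that hunk introduces: a minimal sketch, assuming a build where ``torch.func`` is importable (PyTorch 2.x).

```python
import torch
from torch.func import grad

# grad is a composable transform: grad(grad(f)) is a second derivative.
x = torch.tensor(1.0)
first = grad(torch.sin)(x)         # cos(1.0)
second = grad(grad(torch.sin))(x)  # -sin(1.0)
```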

docs/source/hub.rst (+5 −5)

@@ -6,7 +6,7 @@ Publishing models
 -----------------

 Pytorch Hub supports publishing pre-trained models(model definitions and pre-trained weights)
-to a github repository by adding a simple ``hubconf.py`` file;
+to a GitHub repository by adding a simple ``hubconf.py`` file;

 ``hubconf.py`` can have multiple entrypoints. Each entrypoint is defined as a python function
 (example: a pre-trained model you want to publish).
@@ -49,15 +49,15 @@ You can see the full script in
 are the allowed positional/keyword arguments. It's highly recommended to add a few examples here.
 - Entrypoint function can either return a model(nn.module), or auxiliary tools to make the user workflow smoother, e.g. tokenizers.
 - Callables prefixed with underscore are considered as helper functions which won't show up in :func:`torch.hub.list()`.
-- Pretrained weights can either be stored locally in the github repo, or loadable by
+- Pretrained weights can either be stored locally in the GitHub repo, or loadable by
   :func:`torch.hub.load_state_dict_from_url()`. If less than 2GB, it's recommended to attach it to a `project release <https://help.github.com/en/articles/distributing-large-binaries>`_
   and use the url from the release.
 In the example above ``torchvision.models.resnet.resnet18`` handles ``pretrained``, alternatively you can put the following logic in the entrypoint definition.

 ::

     if pretrained:
-        # For checkpoint saved in local github repo, e.g. <RELATIVE_PATH_TO_CHECKPOINT>=weights/save.pth
+        # For checkpoint saved in local GitHub repo, e.g. <RELATIVE_PATH_TO_CHECKPOINT>=weights/save.pth
         dirname = os.path.dirname(__file__)
         checkpoint = os.path.join(dirname, <RELATIVE_PATH_TO_CHECKPOINT>)
         state_dict = torch.load(checkpoint)
@@ -131,7 +131,7 @@ By default, we don't clean up files after loading it. Hub uses the cache by defa
 directory returned by :func:`~torch.hub.get_dir()`.

 Users can force a reload by calling ``hub.load(..., force_reload=True)``. This will delete
-the existing github folder and downloaded weights, reinitialize a fresh download. This is useful
+the existing GitHub folder and downloaded weights, reinitialize a fresh download. This is useful
 when updates are published to the same branch, users can keep up with the latest release.


@@ -144,7 +144,7 @@ This also means that you may have import errors when importing different models
 from different repos, if the repos have the same sub-package names (typically, a
 ``model`` subpackage). A workaround for these kinds of import errors is to
 remove the offending sub-package from the ``sys.modules`` dict; more details can
-be found in `this github issue
+be found in `this GitHub issue
 <https://github.com/pytorch/hub/issues/243#issuecomment-942403391>`_.

 A known limitation that is worth mentioning here: users **CANNOT** load two different branches of
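For context on the ``force_reload`` behavior described in that hunk: a minimal sketch, assuming the public ``pytorch/vision`` hub repo and its ``resnet18`` entrypoint.

```python
import torch

# force_reload=True discards the cached GitHub checkout and downloaded
# weights, then re-downloads both (useful after a branch is updated).
model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18',
                       pretrained=True, force_reload=True)
```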

docs/source/index.rst (+1 −1)

@@ -144,7 +144,7 @@ Features described in this documentation are classified by release status:
    TorchServe <https://pytorch.org/serve>
    torchtext <https://pytorch.org/text/stable>
    torchvision <https://pytorch.org/vision/stable>
-   PyTorch on XLA Devices <http://pytorch.org/xla/>
+   PyTorch on XLA Devices <https://pytorch.org/xla/>

 Indices and tables
 ==================

docs/source/jit_language_reference_v2.rst (+2 −2)

@@ -1847,11 +1847,11 @@ only usable within TorchScript:
 - ``torch.jit.fork()``
   - Creates an asynchronous task executing func and a reference to the value of the result of this execution. Fork will return immediately.
   - Synonymous to ``torch.jit._fork()``, which is only kept for backward compatibility reasons.
-  - More deatils about its usage and examples can be found in :meth:`~torch.jit.fork`.
+  - More details about its usage and examples can be found in :meth:`~torch.jit.fork`.
 - ``torch.jit.wait()``
   - Forces completion of a ``torch.jit.Future[T]`` asynchronous task, returning the result of the task.
   - Synonymous to ``torch.jit._wait()``, which is only kept for backward compatibility reasons.
-  - More deatils about its usage and examples can be found in :meth:`~torch.jit.wait`.
+  - More details about its usage and examples can be found in :meth:`~torch.jit.wait`.


 .. _torch_apis_in_torchscript_annotation:
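For context on the ``torch.jit.fork()``/``torch.jit.wait()`` pair documented in that hunk: a minimal sketch of forking a task and waiting on its ``Future`` inside TorchScript.

```python
import torch

def add(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return a + b

@torch.jit.script
def run(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    fut = torch.jit.fork(add, a, b)  # returns immediately with a Future
    return torch.jit.wait(fut)       # blocks until the task completes

print(run(torch.ones(2), torch.ones(2)))  # tensor([2., 2.])
```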

docs/source/quantization.rst (+2 −2)

@@ -258,7 +258,7 @@ PTSQ API Example::
     # attach a global qconfig, which contains information about what kind
     # of observers to attach. Use 'x86' for server inference and 'qnnpack'
     # for mobile inference. Other quantization configurations such as selecting
-    # symmetric or assymetric quantization and MinMax or L2Norm calibration techniques
+    # symmetric or asymmetric quantization and MinMax or L2Norm calibration techniques
     # can be specified here.
     # Note: the old 'fbgemm' is still available but 'x86' is the recommended default
     # for server inference.
@@ -357,7 +357,7 @@ QAT API Example::
     # attach a global qconfig, which contains information about what kind
     # of observers to attach. Use 'x86' for server inference and 'qnnpack'
     # for mobile inference. Other quantization configurations such as selecting
-    # symmetric or assymetric quantization and MinMax or L2Norm calibration techniques
+    # symmetric or asymmetric quantization and MinMax or L2Norm calibration techniques
     # can be specified here.
     # Note: the old 'fbgemm' is still available but 'x86' is the recommended default
     # for server inference.
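For context on the global qconfig those comments describe: a minimal sketch, assuming the ``torch.ao.quantization`` namespace and the 'x86' backend mentioned in the hunks.

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU())
model.eval()

# Attach a global qconfig: 'x86' for server inference
# ('qnnpack' would be the mobile choice).
model.qconfig = torch.ao.quantization.get_default_qconfig('x86')
```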
