Add the Latest Features For Basics Autograd Tutorial #3395


Open · wants to merge 11 commits into main
11 changes: 10 additions & 1 deletion beginner_source/basics/autogradqs_tutorial.py
@@ -133,7 +133,7 @@
# - To mark some parameters in your neural network as **frozen parameters**.
# - To **speed up computations** when you are only doing the forward pass, because computations on tensors that do
#   not track gradients are more efficient.

# See this `note <https://docs.pytorch.org/docs/stable/notes/autograd.html#locally-disabling-gradient-computation>`__ for additional reference.
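
# For example, a minimal sketch of both approaches (``torch`` is imported
# earlier in this tutorial; the tensors here are illustrative):

frozen_w = torch.randn(5, 3, requires_grad=True)
frozen_w.requires_grad_(False)   # freeze this parameter in place

x = torch.ones(5)
with torch.no_grad():            # no graph is recorded inside this block
    z = x @ frozen_w
print(z.requires_grad)           # False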

######################################################################

@@ -160,6 +160,15 @@
# - accumulates them in the respective tensor’s ``.grad`` attribute
# - using the chain rule, propagates all the way to the leaf tensors.
#
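#
# For example, repeated ``backward()`` calls accumulate into ``.grad`` rather
# than overwriting it (a minimal sketch; ``torch`` is imported earlier in this
# tutorial):
#
# .. code-block:: python
#
#    t = torch.tensor([1.0, 2.0], requires_grad=True)
#    (t * t).sum().backward()
#    print(t.grad)  # tensor([2., 4.])
#    (t * t).sum().backward()
#    print(t.grad)  # tensor([4., 8.]) -- accumulated, not overwritten
#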
# To get a sense of what this computational graph looks like, we can use the
# following tools (a short example follows the list):
#
# 1. `torchviz <https://github.com/szagoruyko/pytorchviz>`__ is a package to
#    visualize computational graphs.
#
# 2. ``TORCH_LOGS="+autograd"`` enables logging for the backward pass; see
#    `this dev-discuss post <https://dev-discuss.pytorch.org/t/highlighting-a-few-recent-autograd-features-h2-2023/1787>`__.
#
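# For example, with torchviz installed (``pip install torchviz``; it also
# needs the Graphviz system package), a minimal sketch:
#
# .. code-block:: python
#
#    from torchviz import make_dot
#
#    x = torch.ones(5)
#    w = torch.randn(5, 3, requires_grad=True)
#    z = x @ w
#    make_dot(z, params={"w": w}).render("autograd_graph", format="png")
#
# To capture autograd logs instead, set the environment variable when launching
# the script, e.g. ``TORCH_LOGS="+autograd" python my_script.py``
# (``my_script.py`` is a placeholder name).
#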
# .. note::
# **DAGs are dynamic in PyTorch**
#   An important thing to note is that the graph is recreated from scratch; after each
#   ``.backward()`` call, autograd starts populating a new graph.