Add the Latest Features For Basics Autograd Tutorial #3395


Open
wants to merge 11 commits into base: main
38 changes: 36 additions & 2 deletions beginner_source/basics/autogradqs_tutorial.py
@@ -32,7 +32,7 @@
y = torch.zeros(3) # expected output
w = torch.randn(5, 3, requires_grad=True)
b = torch.randn(3, requires_grad=True)
-z = torch.matmul(x, w)+b
+z = torch.matmul(x, w) + b
loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y)


@@ -133,7 +133,8 @@
# - To mark some parameters in your neural network as **frozen parameters**.
# - To **speed up computations** when you are only doing forward pass, because computations on tensors that do
# not track gradients would be more efficient.

# For additional reference, see the autograd mechanics documentation:
# https://docs.pytorch.org/docs/stable/notes/autograd.html#locally-disabling-gradient-computation
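#
# As a small sketch of the frozen-parameters use case (``model`` here is a hypothetical
# ``nn.Module``, not one defined in this tutorial), freezing its weights could look like:
#
#     for param in model.parameters():
#         param.requires_grad_(False)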

######################################################################

@@ -160,6 +161,39 @@
# - accumulates them in the respective tensor’s ``.grad`` attribute
# - using the chain rule, propagates all the way to the leaf tensors.
#
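# As a small illustrative sketch of the accumulation behaviour (using fresh tensors,
# not the ones defined earlier in this tutorial):
#
#     w = torch.randn(5, 3, requires_grad=True)
#     x = torch.ones(5)
#     for _ in range(2):
#         out = torch.matmul(x, w).sum()
#         out.backward()
#     print(w.grad)  # every entry is 2.0; gradients from both passes accumulate in .grad
#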
# To get a sense of what this computational graph looks like, we can use the following two tools:
Contributor @soulitzer commented on Jun 25, 2025:

I think for this section, we can just keep it short for now, and link to the relevant resource:

To get a sense of what this computational graph looks like we can use the following tools:

1. torchviz is a package to visualize computational graphs
https://github.com/szagoruyko/pytorchviz

2. TORCH_LOGS="+autograd" enables logging for the backward pass. 
https://dev-discuss.pytorch.org/t/highlighting-a-few-recent-autograd-features-h2-2023/1787

(for the links use the proper hyperlink syntax)

The PR author replied:

Done.

#
# 1. ``TORCH_LOGS="+autograd"``
# Setting the ``TORCH_LOGS="+autograd"`` environment variable enables runtime autograd logs,
# which are useful for debugging the backward pass
# (see https://dev-discuss.pytorch.org/t/highlighting-a-few-recent-autograd-features-h2-2023/1787).
#
# We can enable this logging when launching a script, for example:
# ``TORCH_LOGS="+autograd" python test.py``
#
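# A minimal script (hypothetically named ``test.py``; any script that runs a backward
# pass would do) that produces autograd logs when launched this way:
#
#     import torch
#
#     x = torch.randn(5, requires_grad=True)
#     y = (x * 2).sum()
#     y.backward()  # the "+autograd" logs trace this backward pass
#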
# 2. Torchviz
# Torchviz (https://github.com/szagoruyko/pytorchviz) is a package that renders the
# computational graph visually.
#
# We can generate an image of the computational graph as in the example below:
#
# import torch
# from torch import nn
# from torchviz import make_dot
#
# model = nn.Sequential(
#     nn.Linear(8, 16),
#     nn.ReLU(),
#     nn.Linear(16, 1)
# )

# x = torch.randn(1, 8, requires_grad=True)
# y = model(x).mean()

# (Note: ``TORCH_LOGS`` is the separate logging tool described above, not part of
# torchviz; to use it, set the environment variable before launching Python.)

# dot = make_dot(y, params=dict(model.named_parameters()), show_attrs=True, show_saved=True)
# dot.render('simple_graph', format='png')
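#
# Note that torchviz is a separate package (typically installed with ``pip install torchviz``)
# and relies on Graphviz to render the image; the ``render`` call above should produce a
# ``simple_graph.png`` file in the working directory.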
#
# .. note::
# **DAGs are dynamic in PyTorch**
# An important thing to note is that the graph is recreated from scratch; after each