
Commit 2f4a4c0
Author: Svetlana Karslioglu

Update what's new with 2.0 tutorials (#2255)

* Update what's new with 2.0 tutorials
* Add notes on running in colab to the SDPA tutorial

1 parent e05cd19 commit 2f4a4c0

File tree: 2 files changed, +16 additions, -8 deletions


index.rst

Lines changed: 9 additions & 4 deletions

@@ -3,10 +3,15 @@ Welcome to PyTorch Tutorials
 
 What's new in PyTorch tutorials?
 
-* `PyTorch Distributed Series <https://pytorch.org/tutorials/beginner/ddp_series_intro.html?utm_source=whats_new_tutorials&utm_medium=ddp_series_intro>`__
-* `Fast Transformer Inference with Better Transformer <https://pytorch.org/tutorials/beginner/bettertransformer_tutorial.html?utm_source=whats_new_tutorials&utm_medium=bettertransformer>`__
-* `Advanced model training with Fully Sharded Data Parallel (FSDP) <https://pytorch.org/tutorials/intermediate/FSDP_adavnced_tutorial.html?utm_source=whats_new_tutorials&utm_medium=FSDP_advanced>`__
-* `Grokking PyTorch Intel CPU Performance from First Principles <https://pytorch.org/tutorials/intermediate/torchserve_with_ipex?utm_source=whats_new_tutorials&utm_medium=torchserve_ipex>`__
+* `Implementing High Performance Transformers with Scaled Dot Product Attention <https://pytorch.org/tutorials/intermediate/scaled_dot_product_attention_tutorial.html?utm_source=whats_new_tutorials&utm_medium=scaled_dot_product_attention_tutorial>`__
+* `torch.compile Tutorial <https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html?utm_source=whats_new_tutorials&utm_medium=torch_compile>`__
+* `Per Sample Gradients <https://pytorch.org/tutorials/intermediate/per_sample_grads.html?utm_source=whats_new_tutorials&utm_medium=per_sample_grads>`__
+* `Jacobians, Hessians, hvp, vhp, and more: composing function transforms <https://pytorch.org/tutorials/intermediate/jacobians_hessians.html?utm_source=whats_new_tutorials&utm_medium=jacobians_hessians>`__
+* `Model Ensembling <https://pytorch.org/tutorials/intermediate/ensembling.html?utm_source=whats_new_tutorials&utm_medium=ensembling>`__
+* `Neural Tangent Kernels <https://pytorch.org/tutorials/intermediate/neural_tangent_kernels.html?utm_source=whats_new_tutorials&utm_medium=neural_tangent_kernels>`__
+* `Reinforcement Learning (PPO) with TorchRL Tutorial <https://pytorch.org/tutorials/intermediate/reinforcement_ppo.html?utm_source=whats_new_tutorials&utm_medium=reinforcement_ppo>`__
+* `Changing Default Device <https://pytorch.org/tutorials/recipes/recipes/changing_default_device.html?utm_source=whats_new_tutorials&utm_medium=changing_default_device>`__
+
 
 .. raw:: html
 

intermediate_source/scaled_dot_product_attention_tutorial.py

Lines changed: 7 additions & 4 deletions

@@ -1,9 +1,10 @@
 """
-Implementing High-Performance Transformers with SCALED DOT PRODUCT ATTENTION
-================================================================================
+(Beta) Implementing High-Performance Transformers with Scaled Dot Product Attention (SDPA)
+==========================================================================================
 
-"""
 
+**Author:** `Driss Guessous <https://github.com/drisspg>`_
+"""
 
 ######################################################################
 # Summary
@@ -34,6 +35,8 @@
 # * `Memory-Efficient Attention <https://github.com/facebookresearch/xformers>`__
 # * A PyTorch implementation defined in C++
 #
+# .. literalinclude:: ../beginner_source/new-release-colab.rst
+#    :language: rst
 
 import torch
 import torch.nn as nn
@@ -334,4 +337,4 @@ def generate_rand_batch(
 # compilable. In the process we have shown how to the profiling tools can
 # be used to explore the performance characteristics of a user defined
 # module.
-#
+#
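For context on the tutorial retitled above: scaled dot product attention computes softmax(QK^T / sqrt(d)) V, and the tutorial's point is that PyTorch fuses this into optimized kernels (FlashAttention, memory-efficient attention, or a C++ fallback) behind `torch.nn.functional.scaled_dot_product_attention`. As a rough sketch of the math only — not the fused kernels the tutorial benchmarks — a naive pure-Python version looks like this:

```python
import math

def scaled_dot_product_attention(q, k, v):
    """Naive scaled dot product attention over lists of float vectors.

    q, k, v: lists of equal-length vectors (one per sequence position).
    Returns one output vector per query position. Illustrative only;
    the real PyTorch op works on batched tensors and fused kernels.
    """
    d = len(q[0])
    out = []
    for qi in q:
        # Attention scores: dot(q_i, k_j) / sqrt(d) for every key j
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        # Numerically stable softmax over the scores
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Output is the attention-weighted sum of the value vectors
        out.append([sum(w * vj[t] for w, vj in zip(weights, v))
                    for t in range(len(v[0]))])
    return out

# With identical keys the softmax weights are uniform,
# so the output is the mean of the value vectors.
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [1.0, 0.0]]
v = [[2.0, 0.0], [4.0, 0.0]]
print(scaled_dot_product_attention(q, k, v))  # [[3.0, 0.0]]
```

The stable-softmax step (subtracting the max score before exponentiating) mirrors what production kernels do to avoid overflow; the fused implementations additionally avoid ever materializing the full score matrix.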
