Commit e3e0853

Setup notebook testing (tensorflow#2160)

* Setup notebook testing
* Trivial change
* Run on all notebooks
* Format networks_seq2seq_nmt.ipynb
* Lint all notebooks
* Change repo name
* Expose template to be formatted

1 parent 7f6bcf7 commit e3e0853

7 files changed, +259 −356 lines changed

.github/workflows/ci_test.yml

Lines changed: 29 additions & 0 deletions

@@ -95,3 +95,32 @@ jobs:
       - run: pip install pygithub click
       - name: Check that the CODEOWNERS is valid
         run: python .github/workflows/notify_codeowners.py .github/CODEOWNERS
+  nbfmt:
+    name: Notebook format
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/setup-python@v1
+      - uses: actions/checkout@v2
+      - name: Install tensorflow-docs
+        run: python3 -m pip install -U git+https://github.com/tensorflow/docs
+      - name: Check notebook formatting
+        run: |
+          # Run on all notebooks to prevent upstream change.
+          echo "Check formatting with nbfmt:"
+          python3 -m tensorflow_docs.tools.nbfmt --test \
+              $(find docs/tutorials/ -type f -name *.ipynb)
+  nblint:
+    name: Notebook lint
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/setup-python@v1
+      - uses: actions/checkout@v2
+      - name: Install tensorflow-docs
+        run: python3 -m pip install -U git+https://github.com/tensorflow/docs
+      - name: Lint notebooks
+        run: |
+          # Run on all notebooks to prevent upstream change.
+          echo "Lint check with nblint:"
+          python3 -m tensorflow_docs.tools.nblint \
+              --arg=repo:tensorflow/addons \
+              $(find docs/tutorials/ -type f -name *.ipynb ! -path "docs/tutorials/_template.ipynb")
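
For reference, the same checks the two new jobs perform can be run locally before pushing; a minimal sketch, assuming the tensorflow-docs tooling is installed from GitHub exactly as in the workflow above (the glob is quoted here purely for shell robustness):

    # Install the notebook tooling (same command as in the workflow).
    python3 -m pip install -U git+https://github.com/tensorflow/docs

    # nbfmt with --test only reports formatting problems; it does not rewrite files.
    python3 -m tensorflow_docs.tools.nbfmt --test \
        $(find docs/tutorials/ -type f -name '*.ipynb')

    # nblint checks the notebooks against the tensorflow/addons style rules,
    # skipping the template notebook just as the workflow does.
    python3 -m tensorflow_docs.tools.nblint \
        --arg=repo:tensorflow/addons \
        $(find docs/tutorials/ -type f -name '*.ipynb' ! -path 'docs/tutorials/_template.ipynb')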

docs/tutorials/_template.ipynb

Lines changed: 1 addition & 1 deletion

@@ -57,7 +57,7 @@
     " <a target=\"_blank\" href=\"https://github.com/tensorflow/addons/blob/master/docs/tutorials/_template.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n",
     " </td>\n",
     " <td>\n",
-    " <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/docs/tutorials/_template.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n",
+    " <a href=\"https://storage.googleapis.com/tensorflow_docs/addons/docs/tutorials/_template.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n",
     " </td>\n",
     "</table>"
    ]

docs/tutorials/image_ops.ipynb

Lines changed: 1 addition & 1 deletion

@@ -64,7 +64,7 @@
     "## Overview\n",
     "This notebook will demonstrate how to use the some image operations in TensorFlow Addons.\n",
     "\n",
-    "Here is the list of image operations we'll be covering in this example:\n",
+    "Here is the list of image operations you'll be covering in this example:\n",
     "\n",
     "- `tfa.image.mean_filter2d`\n",
     "\n",

docs/tutorials/layers_weightnormalization.ipynb

Lines changed: 1 addition & 1 deletion

@@ -78,7 +78,7 @@
     "\n",
     "Tim Salimans, Diederik P. Kingma (2016)\n",
     "\n",
-    "> By reparameterizing the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time.\n",
+    "> By reparameterizing the weights in this way you improve the conditioning of the optimization problem and speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time.\n",
     "\n",
     "> https://arxiv.org/abs/1602.07868 \n",
     "\n",

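For context, the reparameterization the quoted abstract refers to is, per the cited paper (arXiv:1602.07868), a decomposition of each weight vector into a direction and a learned scale:

    \mathbf{w} = \frac{g}{\lVert \mathbf{v} \rVert}\,\mathbf{v}

where v is a parameter vector and g a scalar, so the norm of w is fixed to g independently of the direction parameters v.
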
docs/tutorials/losses_triplet.ipynb

Lines changed: 2 additions & 2 deletions

@@ -84,7 +84,7 @@
     "\n",
     "![function](https://user-images.githubusercontent.com/18154355/61484709-7589b800-a96d-11e9-9c3c-e880514af4b7.png)\n",
     "\n",
-    "Where A is our anchor input, P is the positive sample input, N is the negative sample input, and alpha is some margin we use to specify when a triplet has become too \"easy\" and we no longer want to adjust the weights from it."
+    "Where A is our anchor input, P is the positive sample input, N is the negative sample input, and alpha is some margin you use to specify when a triplet has become too \"easy\" and you no longer want to adjust the weights from it."
    ]
   },
   {
@@ -94,7 +94,7 @@
    },
    "source": [
     "## SemiHard Online Learning\n",
-    "As shown in the paper, the best results are from triplets known as \"Semi-Hard\". These are defined as triplets where the negative is farther from the anchor than the positive, but still produces a positive loss. To efficiently find these triplets we utilize online learning and only train from the Semi-Hard examples in each batch. \n"
+    "As shown in the paper, the best results are from triplets known as \"Semi-Hard\". These are defined as triplets where the negative is farther from the anchor than the positive, but still produces a positive loss. To efficiently find these triplets you utilize online learning and only train from the Semi-Hard examples in each batch. \n"
    ]
   },
   {
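
For context, the formula in the image linked above is the triplet loss; a standard form consistent with the notebook's description of A, P, N, and alpha is:

    \mathcal{L} = \max\bigl(\lVert f(A) - f(P) \rVert^{2} - \lVert f(A) - f(N) \rVert^{2} + \alpha,\; 0\bigr)

where f is the embedding network, so the loss reaches zero once the negative is at least alpha farther from the anchor than the positive.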
