
update api urls (#509)
doombeaker authored Jul 29, 2022
1 parent a099f4a commit 0d0fbb1
Showing 5 changed files with 7 additions and 7 deletions.
2 changes: 1 addition & 1 deletion cn/docs/cookies/activation_checkpointing.md
@@ -40,7 +40,7 @@ optimizer = flow.optim.SGD([{'params': model_part1.parameters()},
lr=1e-3)
```

-To enable activation checkpointing, simply set `.config.activation_checkpointing = True` on an Eager model member (i.e. an nn.Module object) inside the [nn.Graph](../basics/08_nn_graph.md) model. For details of this API, see: [activation_checkpointing](https://oneflow.readthedocs.io/en/master/graph.html#oneflow.nn.graph.block_config.BlockConfig.activation_checkpointing). For each nn.Module with "activation checkpointing" enabled, its input activations are kept, while the other intermediate activations are recomputed when they are needed during backpropagation.
+To enable activation checkpointing, simply set `.config.activation_checkpointing = True` on an Eager model member (i.e. an nn.Module object) inside the [nn.Graph](../basics/08_nn_graph.md) model. For details of this API, see: [activation_checkpointing](https://oneflow.readthedocs.io/en/v0.8.1/generated/oneflow.nn.graph.block_config.BlockConfig.activation_checkpointing.html). For each nn.Module with "activation checkpointing" enabled, its input activations are kept, while the other intermediate activations are recomputed when they are needed during backpropagation.

```python
class CustomGraph(flow.nn.Graph):
    ...
```
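(The rest of this example is collapsed in the diff. Below is a minimal sketch of such a graph, reusing the `model_part1`, `model_part2`, and `optimizer` objects from the snippet above; `loss_fn` and the `build` signature are assumptions, not part of the original file.)

```python
import oneflow as flow

class CustomGraph(flow.nn.Graph):
    def __init__(self):
        super().__init__()
        self.model_part1 = model_part1
        self.model_part2 = model_part2
        # Enable activation checkpointing on each Eager module member:
        # input activations are kept, other intermediate activations
        # are recomputed during backpropagation.
        self.model_part1.config.activation_checkpointing = True
        self.model_part2.config.activation_checkpointing = True
        self.add_optimizer(optimizer)

    def build(self, x, y):
        y_pred = self.model_part2(self.model_part1(x))
        loss = loss_fn(y_pred, y)  # loss_fn is assumed to be defined elsewhere
        loss.backward()
        return loss
```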
4 changes: 2 additions & 2 deletions cn/docs/cookies/amp.md
@@ -27,7 +27,7 @@ loss_fn = nn.CrossEntropyLoss().to(DEVICE)
optimizer = flow.optim.SGD(model.parameters(), lr=1e-3)
```

-To enable AMP mode, simply add `self.config.enable_amp(True)` to the [nn.Graph](../basics/08_nn_graph.md) model. For details of this API, see: [enable_amp](https://oneflow.readthedocs.io/en/master/graph.html#oneflow.nn.graph.graph_config.GraphConfig.enable_amp)
+To enable AMP mode, simply add `self.config.enable_amp(True)` to the [nn.Graph](../basics/08_nn_graph.md) model. For details of this API, see: [enable_amp](https://oneflow.readthedocs.io/en/v0.8.1/generated/oneflow.nn.graph.graph_config.GraphConfig.enable_amp.html)

```python
class CustomGraph(flow.nn.Graph):
    ...
```
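(The remainder of this example is collapsed. A minimal sketch of what an AMP-enabled graph might look like, reusing the `model`, `loss_fn`, and `optimizer` defined above; the `build` signature is an assumption.)

```python
class CustomGraph(flow.nn.Graph):
    def __init__(self):
        super().__init__()
        self.model = model
        self.loss_fn = loss_fn
        self.add_optimizer(optimizer)
        self.config.enable_amp(True)  # run this graph in mixed precision

    def build(self, x, y):
        y_pred = self.model(x)
        loss = self.loss_fn(y_pred, y)
        loss.backward()
        return loss
```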

@@ -61,7 +61,7 @@ for _ in range(100):

**Gradient Scaling** is a method for dealing with the numerical overflow that FP16 is prone to. The basic principle is to scale the loss and gradients by a scale factor during backpropagation, changing the magnitude of their values and thereby mitigating numerical overflow as much as possible.

-OneFlow provides `GradScaler` for using Gradient Scaling in AMP mode. You only need to instantiate a `GradScaler` object in the `__init__` method of the nn.Graph model and then specify it through the [set_grad_scaler](https://oneflow.readthedocs.io/en/master/graph.html#oneflow.nn.Graph.set_grad_scaler) interface; nn.Graph will automatically manage the whole Gradient Scaling process. Taking the `CustomGraph` above as an example, we need to add the following to its `__init__` method:
+OneFlow provides `GradScaler` for using Gradient Scaling in AMP mode. You only need to instantiate a `GradScaler` object in the `__init__` method of the nn.Graph model and then specify it through the [set_grad_scaler](https://oneflow.readthedocs.io/en/v0.8.1/generated/oneflow.nn.Graph.set_grad_scaler.html) interface; nn.Graph will automatically manage the whole Gradient Scaling process. Taking the `CustomGraph` above as an example, we need to add the following to its `__init__` method:

```python
grad_scaler = flow.amp.GradScaler(
    ...
)
```
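(The scaler's arguments are collapsed in the diff. A plausible completion is sketched below, assuming the constructor parameters mirror the usual `GradScaler` signature; the concrete values are illustrative, not the file's originals.)

```python
grad_scaler = flow.amp.GradScaler(
    init_scale=2**16,      # initial scale factor (illustrative value)
    growth_factor=2.0,     # multiply the scale after growth_interval overflow-free steps
    backoff_factor=0.5,    # shrink the scale when overflowed gradients are detected
    growth_interval=2000,  # overflow-free steps required before growing the scale
)
self.set_grad_scaler(grad_scaler)  # called inside __init__; nn.Graph manages scaling from here
```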
2 changes: 1 addition & 1 deletion en/docs/basics/02_tensor.md
@@ -164,7 +164,7 @@ oneflow.int32 cuda:0

## Operations on Tensors

-A large number of operators are provided in OneFlow, most of which are in the namespaces of [oneflow](https://oneflow.readthedocs.io/en/v0.8.1/oneflow.html), [oneflow.Tensor](https://oneflow.readthedocs.io/en/v0.8.1/tensor.html), [oneflow.nn](https://oneflow.readthedocs.io/en/master/nn.html), and [oneflow.nn.functional](https://oneflow.readthedocs.io/en/v0.8.1/nn.functional.html).
+A large number of operators are provided in OneFlow, most of which are in the namespaces of [oneflow](https://oneflow.readthedocs.io/en/v0.8.1/oneflow.html), [oneflow.Tensor](https://oneflow.readthedocs.io/en/v0.8.1/tensor.html), [oneflow.nn](https://oneflow.readthedocs.io/en/v0.8.1/nn.html), and [oneflow.nn.functional](https://oneflow.readthedocs.io/en/v0.8.1/nn.functional.html).

Tensors in OneFlow are as easy to use as NumPy arrays. For example, slicing in NumPy style is supported, as in the sketch below:

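(The slicing example itself is collapsed in the diff. A minimal sketch of the NumPy-style indexing the sentence refers to; the tensor values are illustrative.)

```python
import oneflow as flow

tensor = flow.arange(12).reshape(3, 4)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
print(tensor[1:, ::2])  # rows 1..end, every other column
print(tensor[0, -1])    # a single element via a negative index
```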
2 changes: 1 addition & 1 deletion en/docs/cookies/activation_checkpointing.md
@@ -40,7 +40,7 @@ optimizer = flow.optim.SGD([{'params': model_part1.parameters()},
lr=1e-3)
```

-To turn on activation checkpointing, you only need to specify `.config.activation_checkpointing = True` on the Eager model member (i.e. the nn.Module object) in the [nn.Graph](../basics/08_nn_graph.md) model. For more details of this API, please refer to: [activation_checkpointing](https://oneflow.readthedocs.io/en/master/graph.html#oneflow.nn.graph.block_config.BlockConfig.activation_checkpointing). For each nn.Module with "activation checkpointing" turned on, its input activations will be preserved, while other intermediate activations will be recomputed when used during backpropagation.
+To turn on activation checkpointing, you only need to specify `.config.activation_checkpointing = True` on the Eager model member (i.e. the nn.Module object) in the [nn.Graph](../basics/08_nn_graph.md) model. For more details of this API, please refer to: [activation_checkpointing](https://oneflow.readthedocs.io/en/v0.8.1/generated/oneflow.nn.graph.block_config.BlockConfig.activation_checkpointing.html). For each nn.Module with "activation checkpointing" turned on, its input activations will be preserved, while other intermediate activations will be recomputed when used during backpropagation.

```python
class CustomGraph(flow.nn.Graph):
    ...
```
4 changes: 2 additions & 2 deletions en/docs/cookies/amp.md
@@ -27,7 +27,7 @@ loss_fn = nn.CrossEntropyLoss().to(DEVICE)
optimizer = flow.optim.SGD(model.parameters(), lr=1e-3)
```

-If you want to enable AMP mode, just add `self.config.enable_amp(True)` to the model [nn.Graph](../basics/08_nn_graph.md). The details of this API are at: [enable_amp](https://oneflow.readthedocs.io/en/master/graph.html#oneflow.nn.graph.graph_config.GraphConfig.enable_amp).
+If you want to enable AMP mode, just add `self.config.enable_amp(True)` to the model [nn.Graph](../basics/08_nn_graph.md). The details of this API are at: [enable_amp](https://oneflow.readthedocs.io/en/v0.8.1/generated/oneflow.nn.graph.graph_config.GraphConfig.enable_amp.html).

```python
class CustomGraph(flow.nn.Graph):
    ...
```

@@ -61,7 +61,7 @@ for _ in range(100):

**Gradient Scaling** is a method for solving the problem that FP16 is prone to numerical overflow. The basic principle is to use a scale factor to scale the loss and gradients during backpropagation, changing the magnitude of their values and thereby mitigating numerical overflow problems as much as possible.

-OneFlow provides `GradScaler` to use Gradient Scaling in AMP mode. You only need to instantiate a `GradScaler` object in the `__init__` method of the nn.Graph model, and then specify it through the [set_grad_scaler](https://oneflow.readthedocs.io/en/master/graph.html#oneflow.nn.Graph.set_grad_scaler) interface. nn.Graph will automatically manage the whole process of Gradient Scaling. Taking the `CustomGraph` above as an example, you need to add the following code to its `__init__` method:
+OneFlow provides `GradScaler` to use Gradient Scaling in AMP mode. You only need to instantiate a `GradScaler` object in the `__init__` method of the nn.Graph model, and then specify it through the [set_grad_scaler](https://oneflow.readthedocs.io/en/v0.8.1/generated/oneflow.nn.Graph.set_grad_scaler.html) interface. nn.Graph will automatically manage the whole process of Gradient Scaling. Taking the `CustomGraph` above as an example, you need to add the following code to its `__init__` method:

```python
grad_scaler = flow.amp.GradScaler(
    ...
)
```
