Commit 05708b9

[CodeStyle][Typos][F-[3-13]] Fix typo (feeded, ilter, fliters, flaot, follwing, formated, formater, forword, foward, funtion, functinal, fundemental) (#7619)

Authored by wanglezz and Echo-Nie.

* fix typo F-[3-13]
* add specific words feeded
* restore change to feeded

Co-authored-by: Echo-Nie <[email protected]>

1 parent bfde2fc, commit 05708b9

File tree: 17 files changed (+17, −28 lines)

_typos.toml

Lines changed: 1 addition & 12 deletions

@@ -25,6 +25,7 @@ arange = "arange"
 unsupport = "unsupport"
 Nervana = "Nervana"
 datas = "datas"
+feeded = "feeded"

 # These words need to be fixed
 Learing = "Learing"
@@ -39,18 +40,6 @@ dimention = "dimention"
 dimentions = "dimentions"
 dirrectories = "dirrectories"
 disucssion = "disucssion"
-feeded = "feeded"
-flaot = "flaot"
-fliters = "fliters"
-follwing = "follwing"
-formated = "formated"
-formater = "formater"
-forword = "forword"
-foward = "foward"
-functinal = "functinal"
-fundemental = "fundemental"
-funtion = "funtion"
-ilter = "ilter"
 inferface = "inferface"
 infor = "infor"
 instert = "instert"

docs/api/paddle/incubate/xpu/resnet_block/ResNetBasicBlock_cn.rst

Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@
 ResNetBasicBlock
 -------------------------------
-.. py:class:: paddle.incubate.xpu.resnet_block.ResNetBasicBlock(num_channels1, num_filter1, filter1_size, num_channels2, num_filter2, filter2_size, num_channels3, num_filter3, filter3_size, stride1=1, stride2=1, stride3=1, act='relu', momentum=0.9, eps=1e-5, data_format='NCHW', has_shortcut=False, use_global_stats=False, is_test=False, filter1_attr=None, scale1_attr=None, bias1_attr=None, moving_mean1_name=None, moving_var1_name=None, filter2_attr=None, scale2_attr=None, bias2_attr=None, moving_mean2_name=None, moving_var2_name=None, ilter3_attr=None, scale3_attr=None, bias3_attr=None, moving_mean3_name=None, moving_var3_name=None, padding1=0, padding2=0, padding3=0, dilation1=1, dilation2=1, dilation3=1, trainable_statistics=False, find_conv_max=True)
+.. py:class:: paddle.incubate.xpu.resnet_block.ResNetBasicBlock(num_channels1, num_filter1, filter1_size, num_channels2, num_filter2, filter2_size, num_channels3, num_filter3, filter3_size, stride1=1, stride2=1, stride3=1, act='relu', momentum=0.9, eps=1e-5, data_format='NCHW', has_shortcut=False, use_global_stats=False, is_test=False, filter1_attr=None, scale1_attr=None, bias1_attr=None, moving_mean1_name=None, moving_var1_name=None, filter2_attr=None, scale2_attr=None, bias2_attr=None, moving_mean2_name=None, moving_var2_name=None, filter3_attr=None, scale3_attr=None, bias3_attr=None, moving_mean3_name=None, moving_var3_name=None, padding1=0, padding2=0, padding3=0, dilation1=1, dilation2=1, dilation3=1, trainable_statistics=False, find_conv_max=True)

 This API builds a callable object of class ``ResNetBasicBlock``, which fuses multiple ``Conv2D``, ``BatchNorm`` and ``ReLU`` computations into a single pass; see the source link for the ordering.

docs/api/paddle/nn/GRU_cn.rst

Lines changed: 1 addition & 1 deletion

@@ -35,7 +35,7 @@ GRU
 - **input_size** (int) - Size of the input :math:`x`.
 - **hidden_size** (int) - Size of the hidden state :math:`h`.
 - **num_layers** (int, optional) - Number of layers of the recurrent network. For example, setting it to 2 stacks two GRU networks, with the second layer taking the first layer's output as its input. Default: 1.
-- **direction** (str, optional) - Direction of network iteration; can be forward or bidirect (or bidirectional). foward means a unidirectional GRU running from the start of the sequence to its end; bidirectional means a bidirectional GRU running from start to end and then from end back to start. Default: forward.
+- **direction** (str, optional) - Direction of network iteration; can be forward or bidirect (or bidirectional). forward means a unidirectional GRU running from the start of the sequence to its end; bidirectional means a bidirectional GRU running from start to end and then from end back to start. Default: forward.
 - **time_major** (bool, optional) - Whether the first dimension of the input is time steps. If time_major is True, the Tensor shape is [time_steps, batch_size, input_size]; otherwise it is [batch_size, time_steps, input_size]. `time_steps` is the length of the input sequence. Default: False.
 - **dropout** (float, optional) - Dropout probability, applied to the input of every layer except the first. Range: [0, 1]. Default: 0.
 - **weight_ih_attr** (ParamAttr, optional) - Parameter attribute for weight_ih. Default: None.
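The `direction` semantics corrected in the GRU/LSTM/SimpleRNN docs above can be sketched in plain Python (a hedged illustration of traversal order only, not Paddle code; `traverse` is a hypothetical helper):

```python
# Hedged sketch: how direction='forward' vs 'bidirectional' changes the
# order in which a recurrent layer visits time steps.
def traverse(seq, direction="forward"):
    """Return the passes over the time steps a recurrent layer would make."""
    if direction == "forward":
        return [list(seq)]                       # one pass: start -> end
    if direction in ("bidirect", "bidirectional"):
        return [list(seq), list(reversed(seq))]  # start -> end, then end -> start
    raise ValueError(direction)

print(traverse([1, 2, 3], "bidirectional"))  # -> [[1, 2, 3], [3, 2, 1]]
```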

docs/api/paddle/nn/LSTM_cn.rst

Lines changed: 1 addition & 1 deletion

@@ -43,7 +43,7 @@ LSTM
 - **input_size** (int) - Size of the input :math:`x`.
 - **hidden_size** (int) - Size of the hidden state :math:`h`.
 - **num_layers** (int, optional) - Number of layers of the recurrent network. For example, setting it to 2 stacks two GRU networks, with the second layer taking the first layer's output as its input. Default: 1.
-- **direction** (str, optional) - Direction of network iteration; can be forward or bidirect (or bidirectional). foward means a unidirectional GRU running from the start of the sequence to its end; bidirectional means a bidirectional GRU running from start to end and then from end back to start. Default: forward.
+- **direction** (str, optional) - Direction of network iteration; can be forward or bidirect (or bidirectional). forward means a unidirectional GRU running from the start of the sequence to its end; bidirectional means a bidirectional GRU running from start to end and then from end back to start. Default: forward.
 - **time_major** (bool, optional) - Whether the first dimension of the input is time steps. If time_major is True, the Tensor shape is [time_steps, batch_size, input_size]; otherwise it is [batch_size, time_steps, input_size]. `time_steps` is the length of the input sequence. Default: False.
 - **dropout** (float, optional) - Dropout probability, applied to the input of every layer except the first. Range: [0, 1]. Default: 0.
 - **weight_ih_attr** (ParamAttr, optional) - Parameter attribute for weight_ih. Default: None.

docs/api/paddle/nn/SimpleRNN_cn.rst

Lines changed: 1 addition & 1 deletion

@@ -25,7 +25,7 @@ SimpleRNN
 - **input_size** (int) - Size of the input :math:`x`.
 - **hidden_size** (int) - Size of the hidden state :math:`h`.
 - **num_layers** (int, optional) - Number of layers of the recurrent network. For example, setting it to 2 stacks two GRU networks, with the second layer taking the first layer's output as its input. Default: 1.
-- **direction** (str, optional) - Direction of network iteration; can be forward or bidirect (or bidirectional). foward means a unidirectional GRU running from the start of the sequence to its end; bidirectional means a bidirectional GRU running from start to end and then from end back to start. Default: forward.
+- **direction** (str, optional) - Direction of network iteration; can be forward or bidirect (or bidirectional). forward means a unidirectional GRU running from the start of the sequence to its end; bidirectional means a bidirectional GRU running from start to end and then from end back to start. Default: forward.
 - **time_major** (bool, optional) - Whether the first dimension of the input is time steps. If time_major is True, the Tensor shape is [time_steps, batch_size, input_size]; otherwise it is [batch_size, time_steps, input_size]. `time_steps` is the length of the input sequence. Default: False.
 - **dropout** (float, optional) - Dropout probability, applied to the input of every layer except the first. Range: [0, 1]. Default: 0.
 - **activation** (str, optional) - Activation function of each unit in the network; can be tanh or relu. Default: tanh.

docs/api/paddle/static/nn/batch_norm_cn.rst

Lines changed: 1 addition & 1 deletion

@@ -48,7 +48,7 @@ moving_mean and moving_var are the global mean and variance statistics obtained during training
 Parameters
 ::::::::::::

-- **input** (Tensor) - Input feature of the batch_norm operator, a Tensor whose dimensionality can be 2, 3, 4 or 5. Data types: flaot16, float32, float64.
+- **input** (Tensor) - Input feature of the batch_norm operator, a Tensor whose dimensionality can be 2, 3, 4 or 5. Data types: float16, float32, float64.
 - **act** (string) - Activation type; can be leaky_realu, relu, prelu, etc. Default: None.
 - **is_test** (bool) - Whether it is in the test phase; outside training, the global mean and variance collected during training are used. Default: False.
 - **momentum** (float|Tensor) - Used to compute moving_mean and moving_var; a float, or a Tensor with shape [1] and dtype float32. Update formulas: :math:`moving\_mean = moving\_mean * momentum + new\_mean * (1. - momentum)`, :math:`moving\_var = moving\_var * momentum + new\_var * (1. - momentum)`. Default: 0.9.

docs/api/paddle/static/nn/conv3d_cn.rst

Lines changed: 1 addition & 1 deletion

@@ -76,7 +76,7 @@ conv3d
 ::::::::::::

 - **input** (Tensor) - A 5-D Tensor of shape :math:`[N, C, D, H, W]` or :math:`[N, D, H, W, C]`, where N is the batch size, C the number of channels, D the feature depth, H the feature height and W the feature width. Data types: float16, float32 or float64.
-- **num_fliters** (int) - Number of filters (convolution kernels); equal to the number of output image channels.
+- **num_filters** (int) - Number of filters (convolution kernels); equal to the number of output image channels.
 - **filter_size** (int|list|tuple) - Filter size. If a list or tuple, it must contain three integers: (filter_size_depth, filter_size_height, filter_size_width). If a single integer, filter_size_depth = filter_size_height = filter_size_width = filter_size.
 - **stride** (int|list|tuple, optional) - Stride size, the step the filter slides by when convolving with the input. If a list or tuple, it must contain three integers: (stride_depth, stride_height, stride_width). If a single integer, stride_depth = stride_height = stride_width = stride. Default: 1.
 - **padding** (int|list|tuple|str, optional) - Padding size. If a string, it can be "VALID" or "SAME", denoting the padding algorithm; see the formulas above for ``padding`` = "SAME" or ``padding`` = "VALID". If a tuple or list, it can take 3 formats:

docs/design/concepts/tensor_array.md

Lines changed: 1 addition & 1 deletion

@@ -218,7 +218,7 @@ Since each step of RNN can only take a tensor-represented batch of data as input,
 some preprocess should be taken on the inputs such as sorting the sentences by their length in descending order and cut each word and pack to new batches.

 Such cut-like operations can be embedded into `TensorArray` as general methods called `unpack` and `pack`,
-these two operations are similar to `stack` and `unstack` except that they operate on variable-length sequences formated as a LoD tensor rather than a tensor.
+these two operations are similar to `stack` and `unstack` except that they operate on variable-length sequences formatted as a LoD tensor rather than a tensor.

 Some definitions are like
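The `unpack` idea in the diff above — sort variable-length sequences by length in descending order, then slice one time step at a time into per-step batches — can be sketched in plain Python. This is a hedged illustration using lists instead of LoD tensors; `unpack` here is a hypothetical stand-in, not Paddle's implementation:

```python
# Hedged sketch of TensorArray-style `unpack`: batch variable-length
# sequences by sorting them by length (descending) and cutting one
# time step at a time.
def unpack(seqs):
    """Return a list of per-time-step batches from variable-length sequences."""
    seqs = sorted(seqs, key=len, reverse=True)  # longest first
    steps = []
    for t in range(len(seqs[0])):
        # only sequences still "alive" at step t contribute to the batch
        steps.append([s[t] for s in seqs if t < len(s)])
    return steps

print(unpack([["a", "b"], ["c", "d", "e"], ["f"]]))
# -> [['c', 'a', 'f'], ['d', 'b'], ['e']]
```

`pack` would be the inverse: reassemble the per-step batches back into the original sequences.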

docs/design/concurrent/channel.md

Lines changed: 1 addition & 1 deletion

@@ -3,7 +3,7 @@
 ## Introduction

 A Channel is a data structure that allows for synchronous interprocess
-communication via message passing. It is a fundemental component of CSP
+communication via message passing. It is a fundamental component of CSP
 (communicating sequential processes), and allows for users to pass data
 between threads without having to worry about synchronization.
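The CSP-style message passing described in that introduction can be illustrated with Python's stdlib `queue.Queue` — a hedged analogy for the concept, not Paddle's Channel implementation:

```python
# Hedged sketch: CSP-style message passing between two threads. A bounded
# queue acts as the channel; put() blocks when full, get() blocks when empty,
# so the threads synchronize without explicit locks.
import queue
import threading

ch = queue.Queue(maxsize=1)  # capacity-1 channel

def producer():
    for i in range(3):
        ch.put(i)      # blocks until the consumer has drained the channel
    ch.put(None)       # sentinel signalling "channel closed"

threading.Thread(target=producer).start()

received = []
while (item := ch.get()) is not None:  # blocks until a value arrives
    received.append(item)
print(received)  # -> [0, 1, 2]
```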

docs/design/mkldnn/inplace/inplace.md

Lines changed: 1 addition & 1 deletion

@@ -56,7 +56,7 @@ Pattern is restricted so that in-placed to be op is of oneDNN type. Due to fact
 more than one input and their output may be consumed by more than one operator it is expected that pattern
 maybe detected multiple times for the same operator e.g. once for one input, then for second input etc..

-Just having oneDNN operator capable of in-place is not enough to have in-place execution enabled, hence follwing rules
+Just having oneDNN operator capable of in-place is not enough to have in-place execution enabled, hence following rules
 are checked by oneDNN in-place pass:
 1. If input node to in-place operator is also an input to different operator, then in-place computation cannot be performed, as there is a risk that other operator consuming in-placed op operator will be executed after in-placed operator and therefore get invalid input data (overwritten by in-place computation).
 2. If after in-placed operator there is another operator that is reusing in-place op's input var then in-place cannot happen unless next op can perform in-place computation. Next picture presents the idea.
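Rule 1 of that in-place pass can be sketched in a few lines of Python. This is a hedged toy model, not the actual oneDNN pass: the graph format (`{op_name: [input_var, ...]}`) and the `can_inplace` helper are assumptions for illustration:

```python
# Hedged sketch of rule 1: an op may not run in-place if any of its input
# variables is also consumed by another operator, because the other consumer
# could read the variable after it has been overwritten in-place.
def can_inplace(op, graph):
    """True only if no other op in `graph` consumes any of `op`'s inputs."""
    for var in graph[op]:
        for other, inputs in graph.items():
            if other != op and var in inputs:
                return False  # second consumer found: in-place is unsafe
    return True

g = {"relu": ["x"], "scale": ["x"]}            # both ops read x
print(can_inplace("relu", g))                  # -> False
print(can_inplace("scale", {"scale": ["y"]}))  # -> True
```

Rule 2 would additionally require a pass over the topological order of downstream ops, which is omitted here.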
