Commit 8c798e0

samestep authored and facebook-github-bot committed
Forbid trailing whitespace (pytorch#53406)
Summary:
Context: pytorch#53299 (comment)

These are the only hand-written parts of this diff:

- the addition to `.github/workflows/lint.yml`
- the file endings changed in these four files (to appease FB-internal land-blocking lints):
  - `GLOSSARY.md`
  - `aten/src/ATen/core/op_registration/README.md`
  - `scripts/README.md`
  - `torch/csrc/jit/codegen/fuser/README.md`

The rest was generated by running this command (on macOS):
```
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' | xargs gsed -i 's/ *$//'
```
I looked over the auto-generated changes and didn't see anything that looked problematic.

Pull Request resolved: pytorch#53406

Test Plan:
This run (after adding the lint but before removing existing trailing spaces) failed:
- https://github.com/pytorch/pytorch/runs/2043032377

This run (on the tip of this PR) succeeded:
- https://github.com/pytorch/pytorch/runs/2043296348

Reviewed By: walterddr, seemethere

Differential Revision: D26856620

Pulled By: samestep

fbshipit-source-id: 3f0de7f7c2e4b0f1c089eac9b5085a58dd7e0d97
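As a hedged aside (not part of the commit itself): the cleanup command relies on GNU sed, and `gsed` is simply the GNU sed binary that Homebrew installs on macOS. A rough sketch for running the same cleanup locally might look like the following; the `SED` variable and the fallback logic are illustrative, not from the PR.

```sh
#!/usr/bin/env bash
# Rough local reproduction of the cleanup command from the commit message (a sketch,
# not part of the PR). It assumes GNU sed: on macOS install it with `brew install gnu-sed`
# (which provides the `gsed` binary); on most Linux distributions plain `sed -i` works.

SED=sed
command -v gsed >/dev/null 2>&1 && SED=gsed   # prefer GNU sed when it is installed

# List tracked, non-binary files that contain a trailing space, skipping vendored code,
# then strip runs of trailing spaces in place. `xargs -r` (GNU xargs) skips the sed run
# when nothing matched; omit it if your xargs does not support that flag.
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' \
  | xargs -r "$SED" -i 's/ *$//'
```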
1 parent cab2689 commit 8c798e0

238 files changed (+799 -798 lines). Except for the file-ending fixes called out in the summary above, the removed (-) and added (+) lines in each hunk below differ only in trailing whitespace, which is invisible in the rendered diff.


.circleci/scripts/binary_ios_test.sh (+1 -1)

@@ -24,6 +24,6 @@ rm cert.txt
 if ! [ -x "$(command -v xcodebuild)" ]; then
 echo 'Error: xcodebuild is not installed.'
 exit 1
-fi
+fi
 PROFILE=PyTorch_CI_2021
 ruby ${PROJ_ROOT}/scripts/xcode_build.rb -i ${PROJ_ROOT}/build_ios/install -x ${PROJ_ROOT}/ios/TestApp/TestApp.xcodeproj -p ${IOS_PLATFORM} -c ${PROFILE} -t ${IOS_DEV_TEAM_ID}

.github/workflows/lint.yml (+3)

@@ -40,6 +40,9 @@ jobs:
         rm -r "shellcheck-${scversion}"
         shellcheck --version
         .jenkins/run-shellcheck.sh
+    - name: Ensure no trailing spaces
+      run: |
+        (! git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' || (echo "The above files have trailing spaces; please remove them"; false))
     - name: Ensure no tabs
       run: |
         (! git grep -I -l $'\t' -- . ':(exclude)*.svg' ':(exclude)**Makefile' ':(exclude)**/contrib/**' ':(exclude)third_party' ':(exclude).gitattributes' ':(exclude).gitmodules' || (echo "The above files have tabs; please convert them to spaces"; false))
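For what it's worth, the new lint step works because `git grep -l` exits with status 0 exactly when it finds matching files; the leading `!` inverts that, and the `|| (echo ...; false)` arm prints a hint while keeping the step's exit status non-zero. Below is a minimal local sketch of the same check; the script shape and the success message are made up here and are not part of the workflow.

```sh
#!/usr/bin/env bash
# Hypothetical local version of the "Ensure no trailing spaces" CI step (a sketch, not
# part of the workflow). `git grep -I -l ' $'` lists tracked, non-binary files with a
# line ending in a space and exits 0 only when it found something, so matches mean failure.
if git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party'; then
  echo "The above files have trailing spaces; please remove them" >&2
  exit 1
fi
echo "No trailing spaces found."
```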

.jenkins/caffe2/bench.sh (+1 -1)

@@ -21,7 +21,7 @@ if (( $num_gpus == 0 )); then
 fi
 if (( $num_gpus >= 1 )); then
 "$PYTHON" "$caffe2_pypath/python/examples/imagenet_trainer.py" --train_data null --batch_size 128 --epoch_size 12800 --num_epochs 2 --num_gpus 1
-# Let's skip the fp16 bench runs for now, as it recompiles the miopen kernels and can take 10+min to run.
+# Let's skip the fp16 bench runs for now, as it recompiles the miopen kernels and can take 10+min to run.
 # We can resume when we (1) bindmount the miopen cache folder in jenkins; (2) install the pre-compiled miopen kernel library in the docker
 # "$PYTHON" "$caffe2_pypath/python/examples/imagenet_trainer.py" --train_data null --batch_size 256 --epoch_size 25600 --num_epochs 2 --num_gpus 1 --float16_compute --dtype float16
 fi

CONTRIBUTING.md (+3 -3)

@@ -159,7 +159,7 @@ with `brew install cmake` if you are developing on MacOS or Linux system.
 check whether your Git local or global config file contains any `submodule.*` settings. If yes, remove them and try again.
 (please reference [this doc](https://git-scm.com/docs/git-config#Documentation/git-config.txt-submoduleltnamegturl) for more info).

-- If you encountered error such as
+- If you encountered error such as
 ```
 fatal: unable to access 'https://github.com/pybind11/pybind11.git': could not load PEM client certificate ...
 ```
@@ -169,11 +169,11 @@ with `brew install cmake` if you are developing on MacOS or Linux system.
 openssl x509 -noout -in <cert_file> -dates
 ```

-- If you encountered error that some third_party modules are not checkout correctly, such as
+- If you encountered error that some third_party modules are not checkout correctly, such as
 ```
 Could not find .../pytorch/third_party/pybind11/CMakeLists.txt
 ```
-remove any `submodule.*` settings in your local git config (`.git/config` of your pytorch repo) and try again.
+remove any `submodule.*` settings in your local git config (`.git/config` of your pytorch repo) and try again.

 ## Nightly Checkout & Pull

GLOSSARY.md (+4 -4)

@@ -1,4 +1,4 @@
-# PyTorch Glossary
+# PyTorch Glossary

 - [PyTorch Glossary](#pytorch-glossary)
 - [Operation and Kernel](#operation-and-kernel)
@@ -39,7 +39,7 @@ For example, this
 to create Custom Operations.

 ## Kernel
-Implementation of a PyTorch operation, specifying what should be done when an
+Implementation of a PyTorch operation, specifying what should be done when an
 operation executes.

 ## Compound Operation
@@ -57,7 +57,7 @@ Same as Compound Operation.
 ## Leaf Operation
 An operation that's considered a basic operation, as opposed to a Compound
 Operation. Leaf Operation always has dispatch functions defined, usually has a
-derivative function defined as well.
+derivative function defined as well.

 ## Device Kernel
 Device-specific kernel of a leaf operation.
@@ -79,4 +79,4 @@ using just-in-time compilation.

 ## Scripting
 Using `torch.jit.script` on a function to inspect source code and compile it as
-TorchScript code.
+TorchScript code.

aten/src/ATen/BatchingRegistrations.cpp (+1 -1)

@@ -300,7 +300,7 @@ Tensor trace_backward_batching_rule(const Tensor& grad, IntArrayRef input_sizes)
 auto grad_input = at::zeros(grad_physical.getPhysicalShape(input_sizes), grad.options());
 // Batched Diagonal View
 auto grad_input_diag = at::diagonal(grad_input, /*offset*/0, /*dim1*/-2, /*dim2*/-1);
-// Append a dimension of size one to the grad output
+// Append a dimension of size one to the grad output
 auto grad_physical_tensor = grad_physical.tensor().unsqueeze(-1);
 grad_input_diag.copy_(grad_physical_tensor);
 return grad_physical.getPhysicalToLogicalMap().apply(grad_input);

aten/src/ATen/CPUGeneratorImpl.cpp (+2 -2)

@@ -38,7 +38,7 @@ struct CPUGeneratorImplStateLegacy {
 * new data introduced in at::CPUGeneratorImpl and the legacy state. It is used
 * as a helper for torch.get_rng_state() and torch.set_rng_state()
 * functions.
-*/
+*/
 struct CPUGeneratorImplState {
 CPUGeneratorImplStateLegacy legacy_pod;
 float next_float_normal_sample;
@@ -119,7 +119,7 @@ uint64_t CPUGeneratorImpl::seed() {
 * must be a strided CPU byte tensor and of the same size as either
 * CPUGeneratorImplStateLegacy (for legacy CPU generator state) or
 * CPUGeneratorImplState (for new state).
-*
+*
 * FIXME: Remove support of the legacy state in the future?
 */
 void CPUGeneratorImpl::set_state(const c10::TensorImpl& new_state) {

aten/src/ATen/SparseTensorUtils.h (+1 -1)

@@ -94,7 +94,7 @@ TORCH_API Tensor flatten_indices(const Tensor& indices, IntArrayRef full_size, b
 // new_indices = [ 3, 1, 3 ] # uncoalesced
 TORCH_API Tensor flatten_indices_by_dims(const Tensor& indices, const IntArrayRef& sizes, const IntArrayRef& dims_to_flatten);

-// Find the CSR representation for a row `indices` from the COO format
+// Find the CSR representation for a row `indices` from the COO format
 TORCH_API Tensor coo_to_csr(const int64_t* indices, int64_t dim, int64_t nnz);

 }} // namespace at::sparse

aten/src/ATen/Version.cpp (+1 -1)

@@ -114,7 +114,7 @@ std::string used_cpu_capability() {
 case native::CPUCapability::AVX2:
 ss << "AVX2";
 break;
-#endif
+#endif
 default:
 break;
 }

aten/src/ATen/VmapTransforms.h (+1 -1)

@@ -47,7 +47,7 @@ using VmapDimVector = SmallVector<int64_t, kVmapStaticDimVecSize>;
 // argument.

 // VmapTransform for operators that take tensors with multiple batch dims.
-// Given one or more logical views on Tensors, `logicalToPhysical`
+// Given one or more logical views on Tensors, `logicalToPhysical`
 // permutes all of the batch dims to the front of the tensor, aligns
 // and expands the batch dims to match each other (according to their `level`),
 // and returns a VmapPhysicalView on the tensor(s).

aten/src/ATen/core/Generator.h (+1 -1)

@@ -143,7 +143,7 @@ namespace detail {
 /**
 * Helper function for checking the validity of new random generator
 * state. Right now following conditions are checked:
-*
+*
 * - The new state tensor must be a torch.ByteTensor
 * - Data of the new state tensor must be contiguous
 */

aten/src/ATen/core/PhiloxRNGEngine.h (+7 -7)

@@ -40,13 +40,13 @@ typedef at::detail::Array<float, 2> FLOAT2;
 * Note that currently this implementation of the philox engine is not used
 * anywhere except for tests in cpu_generator_test.cpp. However, this engine
 * will replace curandStatePhilox4_32_10_t in the future.
-*
+*
 * The philox engine takes a seed value, a subsequeunce
 * for starting the generation and an offset for the subsequence.
-* Think of this engine as an algorithm producing a huge array. We are
-* parallelizing this array by partitioning the huge array and assigning
-* a thread index to each partition. In other words, each seed value
-* (there are 2^64 possible seed values) gives a sub array of size
+* Think of this engine as an algorithm producing a huge array. We are
+* parallelizing this array by partitioning the huge array and assigning
+* a thread index to each partition. In other words, each seed value
+* (there are 2^64 possible seed values) gives a sub array of size
 * 2^128 (each element in that array is a 128 bit number). Reasoning
 * behind the array being of size 2^128 is, there are 2^64 possible
 * thread index value and there is an array of size 2^64 for each of
@@ -59,9 +59,9 @@ typedef at::detail::Array<float, 2> FLOAT2;
 * seed: Seed values could be any number from 0 to 2^64-1.
 * subsequence: Subsequence is just the cuda thread indexing with:
 * - blockIdx.x * blockDim.x + threadIdx.x
-* offset: The offset variable in PhiloxEngine decides how many 128-bit
+* offset: The offset variable in PhiloxEngine decides how many 128-bit
 * random numbers to skip (i.e. how many groups of 4, 32-bit numbers to skip)
-* and hence really decides the total number of randoms that can be achieved
+* and hence really decides the total number of randoms that can be achieved
 * for the given subsequence.
 */

aten/src/ATen/core/op_registration/README.md (-2)

@@ -254,5 +254,3 @@ Also, there's some requirements on the operator schema for it to be callable fro
 * Except for `Tensor` or `Tensor[]`, only arguments of type `int`, `double` and `bool` are supported. These can be in any position in the argument list and will be read from the caffe2 operator arguments, based on the argument name in the operator schema.
 * We do not support lists (`int[]`, `double[]` or `bool[]`) or optionals (`int?`, `double?`, `bool?`) yet.
 * The operator must return a single `Tensor` or multiple tensors as in `(Tensor, Tensor, Tensor)`. It cannot return a list `Tensor[]`, optional `Tensor?` or any primitive types.
-
-

aten/src/ATen/core/type.cpp (+24 -24)

@@ -1124,12 +1124,12 @@ std::string ClassType::getForwardPreHookErrorMessage(int pre_hook_idx) const {
 const FunctionSchema& forward_schema = getMethod("forward").getSchema();
 std::string input_types = getSchemaInputTypesString(forward_schema);
 const std::vector<Argument>& forward_args = forward_schema.arguments();
-
+
 std::string single_output = "";
 if (forward_args.size() == 2 &&
 forward_args[1].type()->cast<TupleType>() == nullptr) {
 // if the output type is a single tuple, it needs to be wrapped in an outer tuple
-// to match eager's behavior
+// to match eager's behavior
 single_output = ", '" + forward_args[1].type()->annotation_str() + "',";
 }
 std::string pre_hook_schema =
@@ -1138,17 +1138,17 @@ std::string ClassType::getForwardPreHookErrorMessage(int pre_hook_idx) const {
 "This error occured while scripting the forward pre-hook '" +
 pre_hook_name + "' on module '" + name()->name() +
 "'. If you did not want to script this pre-hook remove it from the "
-"original NN module before scripting. Pre-hooks for module '" +
-name()->name() + "' are expected to have the following signature: "
-+ pre_hook_schema + " with a return type of either 'None'" +
+"original NN module before scripting. Pre-hooks for module '" +
+name()->name() + "' are expected to have the following signature: "
++ pre_hook_schema + " with a return type of either 'None'" +
 single_output + " or 'Tuple[" + input_types + "]'.";
 return return_string;
 }

 std::string ClassType::getForwardHookErrorMessage(int hook_idx) const {
 const std::string& hook_name = forward_hooks_[hook_idx]->name();
 const FunctionSchema& forward_schema = getMethod("forward").getSchema();
-std::string input_types = getSchemaInputTypesString(forward_schema);
+std::string input_types = getSchemaInputTypesString(forward_schema);

 // create expected output types string
 const Argument& pre_output =
@@ -1160,33 +1160,33 @@ std::string ClassType::getForwardHookErrorMessage(int hook_idx) const {
 std::string hook_schema = hook_name + "(self, input: Tuple[" +
 input_types + "], output: " + output_types + ")";
 std::string return_string =
-"This error occured while scripting the forward hook '"
+"This error occured while scripting the forward hook '"
 + hook_name + "' on module " + name()->name() +
 ". If you did not want to script this hook remove it from" +
 " the original NN module before scripting. This hook was" +
 " expected to have the following signature: " + hook_schema +
-". The type of the output arg is the returned type from" +
-" either the forward method or the previous hook if it exists. " +
-"Note that hooks can return anything, but if the hook is " +
+". The type of the output arg is the returned type from" +
+" either the forward method or the previous hook if it exists. " +
+"Note that hooks can return anything, but if the hook is " +
 "on a submodule the outer module is expecting" +
 " the same return type as the submodule's forward.";
 return return_string;
 }

 void checkForwardHookInputArguments(
-const FunctionSchema& forward_schema,
-const FunctionSchema& hook_schema,
-const std::string& hook_id,
+const FunctionSchema& forward_schema,
+const FunctionSchema& hook_schema,
+const std::string& hook_id,
 const std::string& hook_err_msg) {
 // check for proper tuple input types
 const std::vector<Argument>& forward_args = forward_schema.arguments();
 const Argument input_arg = hook_schema.arguments()[1];
 TORCH_CHECK(
-input_arg.type()->cast<TupleType>() != nullptr,
+input_arg.type()->cast<TupleType>() != nullptr,
 hook_id,
 "expected the input argument to be typed as a Tuple but found type: '",
-input_arg.type()->annotation_str(),
-"' instead.\n",
+input_arg.type()->annotation_str(),
+"' instead.\n",
 hook_err_msg
 );

@@ -1229,7 +1229,7 @@ void checkForwardHookInputArguments(
 }

 void ClassType::checkForwardPreHookSchema(
-int pre_hook_idx,
+int pre_hook_idx,
 const FunctionSchema& pre_hook_schema) const {
 const torch::jit::Function* pre_hook = forward_pre_hooks_[pre_hook_idx];
 std::string hook_id =
@@ -1261,17 +1261,17 @@ void ClassType::checkForwardPreHookSchema(
 pre_hook_err_msg
 );
 const Argument return_arg = pre_hook_schema.returns()[0];
-std::string wrong_type_returned_err_msg = hook_id +
+std::string wrong_type_returned_err_msg = hook_id +
 "returned the wrong type of: '" +
 return_arg.type()->annotation_str() + "'.";

 if (return_arg.type()->kind() == NoneType::get()->kind()) {
 return;
 }
 if (forward_args.size() == 2 && *forward_args[1].type() == *return_arg.type()) {
-// TORCH_CHECK below is for the edge case where forward's input is a tuple and the
+// TORCH_CHECK below is for the edge case where forward's input is a tuple and the
 // pre-hook returns a matching tuple. Eager doesn't support this- the working eager return
-// for a tuple type is the forward's input tuple wrapped inside of another tuple.
+// for a tuple type is the forward's input tuple wrapped inside of another tuple.
 TORCH_CHECK(
 return_arg.type()->cast<TupleType>() == nullptr,
 wrong_type_returned_err_msg,
@@ -1316,7 +1316,7 @@ void ClassType::checkForwardPreHookSchema(
 for (int i = 1; i < forward_args.size(); ++i) {
 if (*forward_args[i].type() != *return_tuple_types[i - 1]) {
 TORCH_CHECK(
-false,
+false,
 wrong_type_returned_err_msg,
 " The returned tuple contains the wrong inner types.\n",
 pre_hook_err_msg);
@@ -1325,7 +1325,7 @@ void ClassType::checkForwardPreHookSchema(
 }

 void ClassType::checkForwardHookSchema(
-int hook_idx,
+int hook_idx,
 const FunctionSchema& hook_schema) const {
 const torch::jit::Function* hook = forward_hooks_[hook_idx];
 std::string hook_id =
@@ -1388,8 +1388,8 @@ torch::jit::Function& ClassType::getMethod(const std::string& name) const {
 torch::jit::Function* ClassType::findHook(const std::string& name) const {
 auto hook = findForwardHook(name);
 if (hook == nullptr) {
-hook = findForwardPreHook(name);
-}
+hook = findForwardPreHook(name);
+}
 return hook;
 }

aten/src/ATen/cpu/vec256/vec256_double.h (+1 -1)

@@ -113,7 +113,7 @@ template <> class Vec256<double> {
 const auto not_nan_mask = _mm256_cmp_pd(values, values, _CMP_EQ_OQ);
 const auto nan_mask = _mm256_cmp_pd(not_nan_mask, zero_vec, _CMP_EQ_OQ);
 const auto pi = _mm256_set1_pd(c10::pi<double>);
-
+
 const auto neg_mask = _mm256_cmp_pd(values, zero_vec, _CMP_LT_OQ);
 auto angle = _mm256_blendv_pd(zero_vec, pi, neg_mask);
 angle = _mm256_blendv_pd(angle, nan_vec, nan_mask);

aten/src/ATen/cpu/vec256/vec256_float.h (+1 -1)

@@ -120,7 +120,7 @@ template <> class Vec256<float> {
 const auto not_nan_mask = _mm256_cmp_ps(values, values, _CMP_EQ_OQ);
 const auto nan_mask = _mm256_cmp_ps(not_nan_mask, zero_vec, _CMP_EQ_OQ);
 const auto pi = _mm256_set1_ps(c10::pi<float>);
-
+
 const auto neg_mask = _mm256_cmp_ps(values, zero_vec, _CMP_LT_OQ);
 auto angle = _mm256_blendv_ps(zero_vec, pi, neg_mask);
 angle = _mm256_blendv_ps(angle, nan_vec, nan_mask);

aten/src/ATen/cpu/vec256/vsx/vec256_complex_double_vsx.h (+1 -1)

@@ -364,7 +364,7 @@ class Vec256<ComplexDbl> {
 }

 Vec256<ComplexDbl> sqrt() const {
-return map(std::sqrt);
+return map(std::sqrt);
 }

 Vec256<ComplexDbl> reciprocal() const {

aten/src/ATen/cpu/vec256/vsx/vec256_complex_float_vsx.h (+1 -1)

@@ -417,7 +417,7 @@ class Vec256<ComplexFlt> {
 }

 Vec256<ComplexFlt> sqrt() const {
-return map(std::sqrt);
+return map(std::sqrt);
 }

 Vec256<ComplexFlt> reciprocal() const {

aten/src/ATen/cpu/vec256/vsx/vec256_double_vsx.h (+3 -3)

@@ -82,7 +82,7 @@ class Vec256<double> {
 blend(const Vec256<double>& a, const Vec256<double>& b) {
 return { a._vec0, b._vec1 };
 }
-
+

 template <int64_t mask>
 static std::enable_if_t<blendChoiceDbl(mask) == 4, Vec256<double>> C10_ALWAYS_INLINE
@@ -206,7 +206,7 @@ class Vec256<double> {
 for (int i = 0; i < size()/2; i++) {
 ret._vec0[i] = f(_vec0[i], other._vec0[i]);
 }
-for (int i = 0; i < size()/2; i++) {
+for (int i = 0; i < size()/2; i++) {
 ret._vec1[i] = f(_vec1[i], other._vec1[i]);
 }
 return ret;
@@ -314,7 +314,7 @@ class Vec256<double> {
 Vec256<double> C10_ALWAYS_INLINE sqrt() const {
 return {vec_sqrt(_vec0), vec_sqrt(_vec1)};
 }
-Vec256<double> C10_ALWAYS_INLINE reciprocal() const {
+Vec256<double> C10_ALWAYS_INLINE reciprocal() const {
 return {
 vec_div(vd_one, _vec0), // vec_re(_vec0) is estimated one.
 vec_div(vd_one, _vec1)};
