
Commit 90d6421

caisq authored and tensorflower-gardener committed
Merge changes from github.
END_PUBLIC

--- Commit d0f53f7 authored by Penghao Cen<[email protected]>
Committed by Shanqing Cai<[email protected]>:
Minor fix typo (tensorflow#11323)

--- Commit 02fcf56 authored by Chris Song<[email protected]>
Committed by Chris Song<[email protected]>:
Fix misspells.

--- Commit 764c9b6 authored by Louis Tiao<[email protected]>
Committed by GitHub<[email protected]>:
Fixed typo in docstring

--- Commit f8cd128 authored by Shanqing Cai<[email protected]>
Committed by Shanqing Cai<[email protected]>:
Chaser

--- Commit 01383b9 authored by Shanqing Cai<[email protected]>
Committed by Shanqing Cai<[email protected]>:
Adapt TensorFlowTestCase.setUp() to new reset_default_graph() semantics

Avoid calling reset_default_graph() directly to prevent exceptions in cases where test methods error out from within nested graph contexts, which can leave _default_graph_stack non-empty in certain Python versions.

--- Commit 0ffc378 authored by Amit Patankar<[email protected]>
Committed by Amit Patankar<[email protected]>:
Removing second declaration of functions.

--- Commit f9c9cac authored by A. Unique TensorFlower<[email protected]>
Committed by TensorFlower Gardener<[email protected]>:
Refactor ElementalIrEmitter's slice index finding code into IrArray::Index::SourceIndexOfSlice().

PiperOrigin-RevId: 161140653

--- Commit ba297ae authored by A. Unique TensorFlower<[email protected]>
Committed by TensorFlower Gardener<[email protected]>:
Update ops-related pbtxt files.

PiperOrigin-RevId: 161138258

--- Commit 68d6667 authored by Alexandre Passos<[email protected]>
Committed by TensorFlower Gardener<[email protected]>:
Fixes a reentrant lock issue with tensors using ndarray memory which uses tensor memory.

PiperOrigin-RevId: 161137788

--- Commit a2ee8bc authored by A. Unique TensorFlower<[email protected]>
Committed by TensorFlower Gardener<[email protected]>:
Add support for int8 x int8 -> int32 matrix multiplication via cublasGemmEx to stream_executor.

PiperOrigin-RevId: 161137741

--- Commit 755fa7b authored by Mark Daoust<[email protected]>
Committed by TensorFlower Gardener<[email protected]>:
Block generate_test, and docs generating from running in python3.
- Doc generation is currently unsupported in python3
- These both end in errors in python 3.5.1+

PiperOrigin-RevId: 161137467

--- Commit 97cbcac authored by Peter Hawkins<[email protected]>
Committed by TensorFlower Gardener<[email protected]>:
[TF:XLA] Fix failure in functionalize_control_flow rewrite for Enter nodes that are unused. Make sure we ignore such nodes without producing an error.

PiperOrigin-RevId: 161136545

--- Commit dabcb60 authored by A. Unique TensorFlower<[email protected]>
Committed by TensorFlower Gardener<[email protected]>:
[XLA] Add reasonable error messages to Builder::Build for bad parameter numbers.

PiperOrigin-RevId: 161136262

--- Commit 0cbd249 authored by A. Unique TensorFlower<[email protected]>
Committed by TensorFlower Gardener<[email protected]>:
Add complex tensors support to `matrix_determinant`.

PiperOrigin-RevId: 161132422

--- Commit 335f1f1 authored by A. Unique TensorFlower<[email protected]>
Committed by TensorFlower Gardener<[email protected]>:
Extend static shape inference for SparseTensors with dense_shapes constructed using slicing.

PiperOrigin-RevId: 161132391

--- Commit 5360491 authored by Jianwei Xie<[email protected]>
Committed by TensorFlower Gardener<[email protected]>:
Fixed the missing labels test in TPUEstimator.

PiperOrigin-RevId: 161131282
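Editor's aside on the int8 matmul entry above (commit a2ee8bc): that change wires cuBLAS's cublasGemmEx into stream_executor, and the wrapper itself is not shown on this page. A rough, hedged sketch of the underlying cuBLAS call for an int8 x int8 -> int32 GEMM follows (CUDA 8-era API, column-major layout, device pointers, error handling omitted; the function and variable names here are illustrative, not from the commit):

#include <cublas_v2.h>
#include <cstdint>

// Sketch only: C (m x n, int32) = A (m x k, int8) * B (k x n, int8),
// all buffers column-major and already resident on the GPU, no transposes.
// alpha/beta are int32 because the compute type is CUDA_R_32I. cuBLAS also
// imposes alignment restrictions on int8 GEMM dimensions; see the cuBLAS docs.
void Int8GemmSketch(int m, int n, int k,
                    const int8_t* d_a, const int8_t* d_b, int32_t* d_c) {
  cublasHandle_t handle;
  cublasCreate(&handle);
  const int32_t alpha = 1;
  const int32_t beta = 0;
  cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
               &alpha,
               d_a, CUDA_R_8I, /*lda=*/m,
               d_b, CUDA_R_8I, /*ldb=*/k,
               &beta,
               d_c, CUDA_R_32I, /*ldc=*/m,
               CUDA_R_32I, CUBLAS_GEMM_DFALT);
  cublasDestroy(handle);
}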
--- Commit 9f57dc8 authored by Bruno Rosa<[email protected]>
Committed by Bruno Rosa<[email protected]>:
Use mcpu instead of march for ppc64le
march is not supported by gcc on ppc64le

--- Commit 7d5c74a authored by Skye Wanderman-Milne<[email protected]>
Committed by TensorFlower Gardener<[email protected]>:
Move duplicate detection logic from Graph to FunctionLibraryDefinition

Turns out this is more useful, since there are many function libraries that don't belong to a graph. This will be used in a future change. Note that this maintains the current behavior of Graph.

In addition, updates FunctionDefsEqual() to handle unset attr entries (I ran into this when using this in said future change).

PiperOrigin-RevId: 161126628

--- Commit 2caec3a authored by Shanqing Cai<[email protected]>
Committed by TensorFlower Gardener<[email protected]>:
Disable more timeseries py tests failing in OSS PIP GPU builds

PiperOrigin-RevId: 161124799

--- Commit 0b5cce3 authored by Eugene Brevdo<[email protected]>
Committed by TensorFlower Gardener<[email protected]>:
Get TopK op working on GPU again. Extend using cub's radix sort.
1. Undo rollback of Andreas Kirsch's initial implementation.
2. Use cub segmented radix sort instead of Andreas' heap-based impl for large k and small num_cols (thresholds of k=100, n=1000 determined empirically).
3. Use cub segmented radix sort if k == num_cols (this case is always faster).
4. Added benchmarks.

Benchmarks show that the GPU implementation is up to 3x slower for small k but can be 10x faster for large num_cols and k.

Benchmarks (m = 128 rows; wall_time in seconds; throughput in GB/s; CPU = use_gpu_False, GPU = use_gpu_True):

  n        k        CPU wall_time  CPU throughput  GPU wall_time  GPU throughput
  10       5        0.000166       0.0077          0.000796       0.00161
  10       9        0.00017        0.00751         0.000796       0.00161
  10       10       0.00017        0.00753         0.000775       0.00165
  100      1        0.000155       0.0826          0.000796       0.0161
  100      50       0.000247       0.0519          0.0008         0.016
  100      99       0.000261       0.049           0.000794       0.0161
  100      100      0.000239       0.0536          0.000777       0.0165
  1000     1        0.000324       0.395           0.000916       0.14
  1000     10       0.00042        0.305           0.000902       0.142
  1000     500      0.0011         0.116           0.00097        0.132
  1000     990      0.00133        0.0962          0.000993       0.129
  1000     1000     0.00102        0.126           0.000964       0.133
  10000    10       0.002          0.64            0.00288        0.445
  10000    100      0.00233        0.549           0.00325        0.394
  10000    5000     0.0127         0.101           0.00381        0.336
  10000    9900     0.015          0.0853          0.00438        0.292
  10000    10000    0.0104         0.123           0.00427        0.3
  100000   100      0.0148         0.865           0.0262         0.488
  100000   1000     0.0201         0.636           0.0263         0.486
  100000   50000    0.214          0.0599          0.0322         0.398
  100000   99000    0.262          0.0489          0.0377         0.34
  100000   100000   0.118          0.108           0.0365         0.351
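Editor's aside: a minimal sketch of the dispatch heuristic described in the TopK entry above, assuming the quoted thresholds (k = 100, n = 1000) are applied literally; the actual TensorFlow GPU kernel may combine them differently.

// Hypothetical reading of the TopK GPU dispatch rule quoted above: fall back
// to cub's segmented radix sort when sorting every element is always cheaper
// (k == num_cols), or when k is large relative to a small row width.
// The function name is illustrative, not from the commit.
bool UseSegmentedRadixSortForTopK(int num_cols, int k) {
  if (k == num_cols) return true;       // full per-row sort is always faster
  return k >= 100 && num_cols <= 1000;  // "large k and small num_cols"
}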
END_PUBLIC

BEGIN_PUBLIC
BEGIN_PUBLIC
Automated g4 rollback of changelist 157169178

PiperOrigin-RevId: 161476569
1 parent aa23952 commit 90d6421

File tree: 243 files changed, +2707 / -814 lines


ISSUE_TEMPLATE.md

Lines changed: 1 addition & 0 deletions
@@ -17,6 +17,7 @@ If you open a GitHub issue, here is our policy:
 - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**:
 - **TensorFlow installed from (source or binary)**:
 - **TensorFlow version (use command below)**:
+- **Python version**:
 - **Bazel version (if compiling from source)**:
 - **CUDA/cuDNN version**:
 - **GPU model and memory**:

README.md

Lines changed: 8 additions & 8 deletions
@@ -16,7 +16,7 @@ or more CPUs or GPUs in a desktop, server, or mobile device without rewriting
 code. TensorFlow also includes TensorBoard, a data visualization toolkit.
 
 TensorFlow was originally developed by researchers and engineers
-working on the Google Brain team within Google's Machine Intelligence research
+working on the Google Brain team within Google's Machine Intelligence Research
 organization for the purposes of conducting machine learning and deep neural
 networks research. The system is general enough to be applicable in a wide
 variety of other domains, as well.
@@ -34,13 +34,13 @@ and discussion.**
 
 People who are a little more adventurous can also try our nightly binaries:
 
-* Linux CPU-only: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.0-cp27-none-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave)) / [Python 3.4](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.0-cp34-cp34m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/)) / [Python 3.5](https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.0-cp35-cp35m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/))
-* Linux GPU: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.2.0-cp27-none-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-linux/)) / [Python 3.4](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.2.0-cp34-cp34m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-linux/)) / [Python 3.5](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3.5,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.2.0-cp35-cp35m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3.5,label=gpu-linux/))
-* Mac CPU-only: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.0-py2-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac-slave/)) / [Python 3](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.0-py3-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac-slave/))
-* Mac GPU: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-mac/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.2.0-py2-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-mac/)) / [Python 3](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-mac/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.2.0-py3-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-mac/))
-* Windows CPU-only: [Python 3.5 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=35/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow-1.2.0-cp35-cp35m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=35/)) / [Python 3.6 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=36/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow-1.2.0-cp36-cp36m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=36/))
-* Windows GPU: [Python 3.5 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=35/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow_gpu-1.2.0-cp35-cp35m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=35/)) / [Python 3.6 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=36/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow_gpu-1.2.0-cp36-cp36m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=36/))
-* Android: [demo APK](https://ci.tensorflow.org/view/Nightly/job/nightly-android/lastSuccessfulBuild/artifact/out/tensorflow_demo.apk), [native libs](https://ci.tensorflow.org/view/Nightly/job/nightly-android/lastSuccessfulBuild/artifact/out/native/)
+* Linux CPU-only: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.1-cp27-none-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave)) / [Python 3.4](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.1-cp34-cp34m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/)) / [Python 3.5](https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.1-cp35-cp35m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/))
+* Linux GPU: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.2.1-cp27-none-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-linux/)) / [Python 3.4](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.2.1-cp34-cp34m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-linux/)) / [Python 3.5](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3.5,label=gpu-linux/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.2.1-cp35-cp35m-linux_x86_64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-linux-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3.5,label=gpu-linux/))
+* Mac CPU-only: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.1-py2-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac-slave/)) / [Python 3](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.1-py3-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac-slave/))
+* Mac GPU: [Python 2](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-mac/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.2.1-py2-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-mac/)) / [Python 3](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-mac/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow_gpu-1.2.1-py3-none-any.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-mac-gpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-mac/))
+* Windows CPU-only: [Python 3.5 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=35/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow-1.2.1-cp35-cp35m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=35/)) / [Python 3.6 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=36/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow-1.2.1-cp36-cp36m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows,PY=36/))
+* Windows GPU: [Python 3.5 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=35/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow_gpu-1.2.1-cp35-cp35m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=35/)) / [Python 3.6 64-bit](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=36/lastSuccessfulBuild/artifact/cmake_build/tf_python/dist/tensorflow_gpu-1.2.1-cp36-cp36m-win_amd64.whl) ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-win/M=windows-gpu,PY=36/))
+* Android: [demo APK](https://ci.tensorflow.org/view/Nightly/job/nightly-android/lastSuccessfulBuild/artifact/out/tensorflow_demo.apk), [native libs](http://ci.tensorflow.org/view/Nightly/job/nightly-android/lastSuccessfulBuild/artifact/out/native/)
 ([build history](https://ci.tensorflow.org/view/Nightly/job/nightly-android/))
 
 #### *Try your first TensorFlow program*

RELEASE.md

Lines changed: 6 additions & 0 deletions
@@ -1,3 +1,9 @@
+# Release 1.2.1
+
+## Bug Fixes and Other Changes
+* Updating markdown version required to >= 2.6.8.
+* Support tensors as dropout rates again, by removing the min(max(..))
+
 # Release 1.2.0
 
 ## Major Features and Improvements

configure

Lines changed: 10 additions & 1 deletion
@@ -25,6 +25,10 @@ function is_windows() {
   [[ "${PLATFORM}" =~ msys_nt*|mingw*|cygwin*|uwin* ]]
 }
 
+function is_ppc64le() {
+  [[ "${uname -m}" == "ppc64le" ]]
+}
+
 function sed_in_place() {
   sed -e $1 $2 > "$2.bak"
   mv "$2.bak" $2
@@ -294,7 +298,12 @@ fi # TF_NEED_MKL
 
 ## Set up architecture-dependent optimization flags.
 if [ -z "$CC_OPT_FLAGS" ]; then
-  default_cc_opt_flags="-march=native"
+  if [ is_ppc64le ]; then
+    # gcc on ppc64le does not support -march, use mcpu instead
+    default_cc_opt_flags="-mcpu=native"
+  else
+    default_cc_opt_flags="-march=native"
+  fi
   read -p "Please specify optimization flags to use during compilation when bazel option "\
 "\"--config=opt\" is specified [Default is $default_cc_opt_flags]: " CC_OPT_FLAGS
   if [ -z "$CC_OPT_FLAGS" ]; then

tensorflow/c/c_api_test.cc

Lines changed: 2 additions & 0 deletions
@@ -912,11 +912,13 @@ class CSession {
     for (TF_Operation* o : outputs) {
       outputs_.emplace_back(TF_Output{o, 0});
     }
+    output_values_.resize(outputs_.size());
   }
 
   void SetOutputs(const std::vector<TF_Output>& outputs) {
     ResetOutputValues();
     outputs_ = outputs;
+    output_values_.resize(outputs_.size());
   }
 
   void SetTargets(std::initializer_list<TF_Operation*> targets) {

tensorflow/cc/framework/gradients.cc

Lines changed: 3 additions & 3 deletions
@@ -152,12 +152,12 @@ Status SymbolicGradientBuilder::Initialize() {
   grad_outputs_->resize(inputs_.size());
   // Populate `output_nodes_` from node ids in `outputs_`.
   output_nodes_.reserve(outputs_.size());
-  for (int i = 0; i < outputs_.size(); ++i) {
+  for (size_t i = 0; i < outputs_.size(); ++i) {
     output_nodes_.insert(outputs_[i].node()->id());
   }
   // Populate `input_nodes_` from Outputs in `inputs_`.
   input_nodes_.reserve(inputs_.size());
-  for (int i = 0; i < inputs_.size(); ++i) {
+  for (size_t i = 0; i < inputs_.size(); ++i) {
     input_nodes_.insert({inputs_[i], i});
   }
 
@@ -341,7 +341,7 @@ Status SymbolicGradientBuilder::AddGradients() {
   // gradient function to the src node/output to which it should be
   // backproped. Maybe grad functions can return a vector of Output pairs to
   // make this association explicit.
-  int dx_index = 0;
+  size_t dx_index = 0;
   for (const Edge* e : n->in_edges()) {
     if (e->IsControlEdge()) continue;
     if (dx_index == dx.size()) {
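Editor's aside on the int -> size_t change above: a small standalone illustration (not from this commit) of why indices compared against std::vector::size() are usually declared size_t.

#include <cstdio>
#include <vector>

int main() {
  std::vector<int> values = {1, 2, 3};
  // values.size() returns std::size_t (unsigned). An int loop index here
  // draws -Wsign-compare warnings and, in principle, misbehaves once the
  // container grows past INT_MAX; a size_t index matches the comparison type.
  for (size_t i = 0; i < values.size(); ++i) {
    std::printf("values[%zu] = %d\n", i, values[i]);
  }
  return 0;
}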

tensorflow/cc/gradients/math_grad.cc

Lines changed: 40 additions & 0 deletions
@@ -203,6 +203,46 @@ Status TanhGrad(const Scope& scope, const Operation& op,
 }
 REGISTER_GRADIENT_OP("Tanh", TanhGrad);
 
+Status AsinhGrad(const Scope& scope, const Operation& op,
+                 const std::vector<Output>& grad_inputs,
+                 std::vector<Output>* grad_outputs) {
+  // y = asinh(x)
+  // dy/dx = 1 / cosh(y)
+  auto dydx = Reciprocal(scope, Cosh(scope, op.output(0)));
+  // grad(x) = grad(y) * conj(dy/dx)
+  grad_outputs->push_back(
+      Mul(scope, grad_inputs[0], ConjugateHelper(scope, dydx)));
+  return scope.status();
+}
+REGISTER_GRADIENT_OP("Asinh", AsinhGrad);
+
+Status AcoshGrad(const Scope& scope, const Operation& op,
+                 const std::vector<Output>& grad_inputs,
+                 std::vector<Output>* grad_outputs) {
+  // y = acosh(x)
+  // dy/dx = 1 / sinh(y)
+  auto dydx = Reciprocal(scope, Sinh(scope, op.output(0)));
+  // grad(x) = grad(y) * conj(dy/dx)
+  grad_outputs->push_back(
+      Mul(scope, grad_inputs[0], ConjugateHelper(scope, dydx)));
+  return scope.status();
+}
+REGISTER_GRADIENT_OP("Acosh", AcoshGrad);
+
+Status AtanhGrad(const Scope& scope, const Operation& op,
+                 const std::vector<Output>& grad_inputs,
+                 std::vector<Output>* grad_outputs) {
+  // y = atanh(x)
+  // dy/dx = 1 / (1 - x^2)
+  auto one = Cast(scope, Const(scope, 1.0), op.input(0).type());
+  auto dydx = Reciprocal(scope, Sub(scope, one, Square(scope, op.input(0))));
+  // grad(x) = grad(y) * conj(dy/dx)
+  grad_outputs->push_back(
+      Mul(scope, grad_inputs[0], ConjugateHelper(scope, dydx)));
+  return scope.status();
+}
+REGISTER_GRADIENT_OP("Atanh", AtanhGrad);
+
 Status SigmoidGrad(const Scope& scope, const Operation& op,
                    const std::vector<Output>& grad_inputs,
                    std::vector<Output>* grad_outputs) {
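As a quick check on the dy/dx comments in the added gradient functions above, differentiating the inverse relations gives (standard identities, not part of the commit):

y = \operatorname{asinh}(x) \;\Rightarrow\; x = \sinh y \;\Rightarrow\; \frac{dy}{dx} = \frac{1}{\cosh y} = \frac{1}{\sqrt{1 + x^2}}
y = \operatorname{acosh}(x) \;\Rightarrow\; x = \cosh y \;\Rightarrow\; \frac{dy}{dx} = \frac{1}{\sinh y} = \frac{1}{\sqrt{x^2 - 1}}
y = \operatorname{atanh}(x) \;\Rightarrow\; x = \tanh y \;\Rightarrow\; \frac{dy}{dx} = \frac{1}{1 - \tanh^2 y} = \frac{1}{1 - x^2}

This is why AsinhGrad and AcoshGrad evaluate cosh/sinh at the op's output y while AtanhGrad works directly with the input x, and the complex case multiplies by the conjugate of dy/dx, mirroring the grad(x) = grad(y) * conj(dy/dx) comments in the code.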

tensorflow/cc/gradients/math_grad_test.cc

Lines changed: 82 additions & 0 deletions
@@ -48,6 +48,9 @@ class CWiseUnaryGradTest : public ::testing::Test {
     SINH,
     COSH,
     TANH,
+    ASINH,
+    ACOSH,
+    ATANH,
     SIGMOID,
     SIGN,
     SIN,
@@ -122,6 +125,15 @@
       case TANH:
         y = Tanh(scope_, x);
        break;
+      case ASINH:
+        y = Asinh(scope_, x);
+        break;
+      case ACOSH:
+        y = Acosh(scope_, x);
+        break;
+      case ATANH:
+        y = Atanh(scope_, x);
+        break;
       case SIGMOID:
         y = Sigmoid(scope_, x);
         break;
@@ -413,6 +425,76 @@ TEST_F(CWiseUnaryGradTest, Tanh_Complex) {
   TestCWiseGrad<complex64>(TANH, x_fn, dy_fn, dx_fn);
 }
 
+TEST_F(CWiseUnaryGradTest, Asinh) {
+  auto x_fn = [this](const int i) { return RV({0, -1, 1, -2, 2, -3, 3}); };
+  auto dy_fn = [this](const float x) { return x + RV({-2, 2, -3, 3, -4, 4}); };
+  auto dx_fn = [this](const float x, const float dy) {
+    auto y = std::asinh(x);
+    return dy / std::cosh(y);
+  };
+  TestCWiseGrad<float>(ASINH, x_fn, dy_fn, dx_fn);
+}
+
+TEST_F(CWiseUnaryGradTest, Asinh_Complex) {
+  auto x_fn = [this](const int i) {
+    return CRV({{1, 0}, {0, 1}, {2, -1}, {1, 2}, {3, 4}});
+  };
+  auto dy_fn = [this](const complex64& x) {
+    return x + CRV({{-2, 2}, {-3, 3}, {1, -4}});
+  };
+  auto dx_fn = [this](const complex64& x, const complex64& dy) {
+    auto y = std::asinh(x);
+    return dy / conjugate(std::cosh(y));
+  };
+  TestCWiseGrad<complex64>(ASINH, x_fn, dy_fn, dx_fn);
+}
+
+TEST_F(CWiseUnaryGradTest, Acosh) {
+  auto x_fn = [this](const int i) { return RV({1, 2, 3, 4, 5, 6, 7}); };
+  auto dy_fn = [this](const float x) { return x + RV({8, 9, 10, 11, 12, 13, 14}); };
+  auto dx_fn = [this](const float x, const float dy) {
+    auto y = std::acosh(x);
+    return dy / std::sinh(y);
+  };
+  TestCWiseGrad<float>(ACOSH, x_fn, dy_fn, dx_fn);
+}
+
+TEST_F(CWiseUnaryGradTest, Acosh_Complex) {
+  auto x_fn = [this](const int i) {
+    return CRV({{1, 1}, {2, 1}, {1, 4}, {1, 2}, {3, 4}});
+  };
+  auto dy_fn = [this](const complex64& x) {
+    return x + CRV({{2, 2}, {3, 3}, {1, 4}});
+  };
+  auto dx_fn = [this](const complex64& x, const complex64& dy) {
+    auto y = std::acosh(x);
+    return dy / conjugate(std::sinh(y));
+  };
+  TestCWiseGrad<complex64>(ACOSH, x_fn, dy_fn, dx_fn);
+}
+
+TEST_F(CWiseUnaryGradTest, Atanh) {
+  auto x_fn = [this](const int i) { return RV({0, -0.5, 0.5, -0.1, 0.1}); };
+  auto dy_fn = [this](const float x) { return x + RV({-2, 2, -3, 3, -4, 4}); };
+  auto dx_fn = [this](const float x, const float dy) {
+    return dy * (1. / (1. - x * x));
+  };
+  TestCWiseGrad<float>(ATANH, x_fn, dy_fn, dx_fn);
+}
+
+TEST_F(CWiseUnaryGradTest, Atanh_Complex) {
+  auto x_fn = [this](const int i) {
+    return CRV({{0.1, 0}, {0, 0.1}, {0.2, -0.1}, {0.1, 0.2}, {0.3, 0.4}});
+  };
+  auto dy_fn = [this](const complex64& x) {
+    return x + CRV({{-2, 2}, {-3, 3}, {1, -4}});
+  };
+  auto dx_fn = [this](const complex64& x, const complex64& dy) {
+    return dy / conjugate(one_ - x * x);
+  };
+  TestCWiseGrad<complex64>(ATANH, x_fn, dy_fn, dx_fn);
+}
+
 TEST_F(CWiseUnaryGradTest, Sigmoid) {
   auto x_fn = [this](const int i) { return RV({0, -1, 1, -2, 2, -3, 3}); };
   auto dy_fn = [this](const float x) { return x + RV({-2, 2, -3, 3, -4, 4}); };

tensorflow/compiler/tf2xla/functionalize_control_flow.cc

Lines changed: 1 addition & 1 deletion
@@ -383,7 +383,7 @@ Status FunctionalizeLoop(Graph* graph, Frame* frame,
     }
   }
   if (arg.exit == nullptr) {
-    return errors::InvalidArgument("Mising Exit successor to ",
+    return errors::InvalidArgument("Missing Exit successor to ",
                                    arg.switch_node->name());
   }
 }
