From cf29ac5eacc34357eb8e5318fab497e48906ade2 Mon Sep 17 00:00:00 2001 From: TensorFlow Release Automation Date: Wed, 21 Aug 2019 13:32:47 -0700 Subject: [PATCH 01/15] Insert release notes place-fill --- RELEASE.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/RELEASE.md b/RELEASE.md index 801b9c8a2c8e5e..22ca021f64f085 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -1,3 +1,7 @@ +# Release 1.15.0 + + + # Release 1.14.0 ## Major Features and Improvements From 7d1a98d91c0a567fd7c47b7e69b2f82879f196ba Mon Sep 17 00:00:00 2001 From: Goldie Gadde Date: Tue, 27 Aug 2019 14:04:29 -0700 Subject: [PATCH 02/15] Update RELEASE.md --- RELEASE.md | 78 +++++++++++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 77 insertions(+), 1 deletion(-) diff --git a/RELEASE.md b/RELEASE.md index 22ca021f64f085..1832510151110a 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -1,6 +1,82 @@ # Release 1.15.0 - +## Major Features and Improvements + +## Breaking Changes + +## Bug Fixes and Other Changes + +* Promoting `unbatch` from experimental to core API. +* Adds `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()`. +* EagerTensor now support buffer interface for tensors. +* This change bumps the version number of the FullyConnected Op to 5. +* tensorflow : crash when pointer become nullptr. +* `tf.keras.estimator.model_to_estimator` now supports exporting to tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with `model.load_weights`. +* Add support for defaulting the value of `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores. +* parallel_for: Add converter for `MatrixDiag`. +* Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function. +* Add `narrow_range` attribute to `QuantizeAndDequantizeV2` and V3. +* `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use `tf.config.threading` APIs. +* Added new op: `tf.strings.unsorted_segment_join`. +* `tf.keras.model.save_model` and `model.save` now defaults to saving a TensorFlow SavedModel. A SavedModel code is only applicable to `tf.keras`. +* Tensorflow code now produces 2 different pip packages: tensorflow_core containing all the code (in the future it will contain only the private implementation) and tensorflow which is a virtual pip package doing forwarding to tensorflow_core (and in the future will contain only the public API of tensorflow) +* Adding support for datasets as inputs to `from_tensors` and `from_tensor_slices` and batching and unbatching of nested datasets. +* Add HW acceleration support for topK_v2 +* Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead. +* Add new `TypeSpec` classes +* CloudBigtable version updated to v0.10.0 BEGIN_PUBLIC CloudBigtable version updated to v0.10.0 +* Deprecated the use of `constraint=` and `.constraint` with ResourceVariable. +* Expose Head as public API. +* AutoGraph is now applied automatically to user functions passed to APIs of `tf.data` and `tf.distribute`. If AutoGraph is disabled in the the calling code, it will also be disabled in the user functions. +* Update docstring for gather to properly describe the non-empty batch_dims case. +* Added `tf.sparse.from_dense` utility function. 
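A brief, hedged illustration of the `tf.sparse.from_dense` utility named in the item just above; the eager setup and sample values are assumptions for the sketch, not part of the release notes:

```python
import tensorflow.compat.v1 as tf

tf.enable_eager_execution()  # assumption: eager mode, so results print directly

dense = tf.constant([[0, 1, 0],
                     [2, 0, 0]])
sp = tf.sparse.from_dense(dense)           # SparseTensor keeping only the non-zero entries
print(sp.indices.numpy(), sp.values.numpy())
print(tf.sparse.to_dense(sp).numpy())      # round-trips back to the dense tensor
```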
+* Add `GATHER` support to NN API delegate +* AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIs. +* Improved ragged tensor support in `TensorFlowTestCase`. +* Makes the a-normal form transformation in Pyct configurable as to which nodes are converted to variables and which are not. +* `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed, a bug in the resizing implementation was fixed. +* `ResizeInputTensor` now works for all delegates +* Start of open development of TF, TFLite, XLA MLIR dialects. +* Add `EXPAND_DIMS` support to NN API delegate TEST: expand_dims_test +* `tf.cond` emits a StatelessIf op if the branch functions are stateless and do not touch any resources. +* Add support of local soft device placement for eager op. +* Pass partial_pivoting to the `_TridiagonalSolveGrad`. +* Add HW acceleration support for `LogSoftMax`. +* Added a function nested_value_rowids for ragged tensors. +* fixed a bug in histogram_op.cc. +* Add guard to avoid acceleration of L2 Normalization with input rank != 4 +* Added evaluation script for COCO minival +* Add delegate support for QUANTIZE +* tflite object detection script has a debug mode +* Add `tf.math.cumulative_logsumexp operation`. +* Add `tf.ragged.stack`. +* Add delegate support for `QUANTIZED_16BIT_LSTM`. +* Add an `implementation=3` mode for `tf.keras.layers.LocallyConnected2D` and `tf.keras.layers.LocallyConnected1D` layers using `tf.SparseTensor` to store weights, allowing a dramatic speedup for large sparse models. +* `tf.while_loop` emits a StatelessWhile op if the cond and body functions are stateless and do not touch any resources. +* Refactors code in Quant8 LSTM support to reduce TFLite binary size. +* Layers now default to float32, and automatically cast their inputs to the layer's dtype. If you had a model that used float64, it will probably silently use float32 in TensorFlow2, and a warning will be issued that starts with Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information. +* Fix memory allocation problem when calling `AddNewInputConstantTensor`. +* Delegate application failure leaves interpreter in valid state. +* Enable the Keras compile API `experimental_run_tf_function` flag by default. This flag enables single training/eval/predict execution path. With this 1. All input types are converted to `Dataset`. 2. When distribution strategy is not specified this goes through the no-op distribution strategy path. 3. Execution is wrapped in tf.function unless `run_eagerly=True` is set in compile. +* `tf.cond`, `tf.while` and if and while in AutoGraph now accept a nonscalar predicate if has a single element. This does not affec non-V2 control flow. +* Enables v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`. +* Add check for correct memory alignment to `MemoryAllocation::MemoryAllocation()`. +* Extracts `NNAPIDelegateKernel` from nnapi_delegate.cc +* Added support for `FusedBatchNormV3` in converter. +* A ragged to dense op for directly calculating tensors. +* Converts hardswish subgraphs into atomic ops. 
+* The equality operation on Tensors & Variables now compares on value instead of id(). As a result, both Tensors & Variables are no longer hashable types. +* Raise error if `batch_size` argument is used when input is dataset/generator/keras sequence. +* The equality operation on Tensors & Variables now compares on value instead of id(). As a result, both Tensors & Variables are no longer hashable types. +* Some `tf.assert_*` methods now raise assertions at operation creation time (i.e. when this Python line executes) if the input tensors' values are known at that time, not during the session.run(). When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in `feed_dict` argument to session.run(), an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often). +* The equality operation on Tensors & Variables now compares on value instead of id(). As a result, both Tensors & Variables are no longer hashable types. +* Fix accidental quadratic graph construction cost in graph-mode `tf.gradients()`. + +## Thanks to our Contributors + +This release contains contributions from many people at Google, as well as: + +a6802739, Aaron Ma, Abdullah Selek, Abolfazl Shahbazi, Ag Ramesh, Albert Z. Guo, Albin Joy, Alex Itkes, Alex Sergeev, Alexander Pivovarov, Alexey Romanov, alhkad, Amit Srivastava, amoitra, Andrew Lihonosov, Andrii Prymostka, Anuj Rawat, Astropeak, Ayush Agrawal, Bairen Yi, Bas Aarts, Bastian Eichenberger, Ben Barsdell, Benjamin Peterson, bhack, Bharat Raghunathan, Bhavani Subramanian, Bryan Cutler, candy.dc, Cao Zongyan, Captain-Pool, Casper Da Costa-Luis, Chen Guoyin, Cheng Chang, chengchingwen, Chong Yan, Choong Yin Thong, Christopher Yeh, Clayne Robison, Coady, Patrick, Dan Ganea, David Norman, Denis Khalikov, Deven Desai, Diego Caballero, Duncan Dean, Duncan Riach, Dwight J Lyle, Eamon Ito-Fisher, eashtian3, EFanZh, ejot, Elroy Ashtian Jr, Eric Schweitz, Fangjun Kuang, Fei Hu, fo40225, formath, Fred Reiss, Frederic Bastien, Fredrik Knutsson, G. Hussain Chinoy, Gabriel, gehring, George Grzegorz Pawelczak, Gianluca Varisco, Gleb Popov, Greg Peatfield, Guillaume Klein, Gurpreet Singh, Gustavo Lima Chaves, haison, Haraldur TóMas HallgríMsson, HarikrishnanBalagopal, HåKon Sandsmark, I-Hong, Ilham Firdausi Putra, Imran Salam, Jason Zaman, Jason Zavaglia, jayhpark530, jefby, Jeff Daily, Jeffrey Poznanovic, Jekyll Lai, Jeroen BéDorf, Jerry Shih, jerryyin, jiakai, JiangXIAO, Joe Bowser, Joel Shapiro, Johan Gunnarsson, Jojimon Varghese, Joon, Josh Beal, Julian Niedermeier, Jun Wan, Junqin Zhang, Junyuan Xie, Justin Tunis, Kaixi Hou, Karl Lessard, Karthik Muthuraman, Kbhute-Ibm, khanhlvg, Koock Yoon, kstuedem, Kyuwon Kim, Lakshay Tokas, leike666666, leonard951, Leslie-Fang, Leslie-Fang-Intel, Li, Guizi, Lukas Folle, Lukas Geiger, Mahmoud Abuzaina, Manraj Singh Grover, Margaret Maynard-Reid, Mark Ryan, Matt Conley, Matthew Bentham, Matthew Denton, mbhuiyan, mdfaijul, Mei Jie, merturl, MichaelKonobeev, Michal W. Tarnowski, minds, mpppk, musikisomorphie, Nagy Mostafa, Nayana Thorat, Neil, Niels Ole Salscheider, Niklas SilfverströM, Niranjan Hasabnis, ocjosen, olramde, Pariksheet Pinjari, Patrick J. 
Lopresti, Patrik Gustavsson, per1234, PeterLee, Phan Van Nguyen Duc, Phillip Kravtsov, Pooya Davoodi, Pranav Marathe, Putra Manggala, Qingqing Cao, Rajeshwar Reddy T, Ramon ViñAs, Rasmus Diederichsen, Reuben Morais, richardbrks, robert, RonLek, Ryan Jiang, saishruthi, Saket Khandelwal, Saleem Abdulrasool, Sami Kama, Sana-Damani, Sergii Khomenko, Severen Redwood, Shubham Goyal, Sigrid Keydana, Siju Samuel, sleighsoft, smilu97, Son Tran, Srini511, srinivasan.narayanamoorthy, Sumesh Udayakumaran, Sungmann Cho, Tae-Hwan Jung, Taehoon Lee, Takeshi Watanabe, TengLu, terryky, TheMindVirus, ThisIsIsaac, Till Hoffmann, Timothy Liu, Tomer Gafner, Tongxuan Liu, Trent Lo, Trevor Morris, Uday Bondhugula, Vasileios Lioutas, vbvg2008, Vishnuvardhan Janapati, Vivek Suryamurthy, Wei Wang, Wen-Heng (Jack) Chung, wenxizhu, William D. Irons, winstonq, wyzhao, Xiaoming (Jason) Cui, Xinan Jiang, Xinping Wang, Yann-Yy, Yasir Modak, Yong Tang, Yongfeng Gu, Yuchen Ying, Yuxin Wu, zyeric, 王振华 (Zhenhua Wang) # Release 1.14.0 From 38c19bf781b89839b73386ec0a6966930dc0150b Mon Sep 17 00:00:00 2001 From: Goldie Gadde Date: Thu, 5 Sep 2019 21:19:12 -0700 Subject: [PATCH 03/15] Update RELEASE.md --- RELEASE.md | 31 ++++++++++++++----------------- 1 file changed, 14 insertions(+), 17 deletions(-) diff --git a/RELEASE.md b/RELEASE.md index 1832510151110a..440407c39c95d3 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -1,28 +1,29 @@ # Release 1.15.0 - -## Major Features and Improvements - -## Breaking Changes +This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year. ## Bug Fixes and Other Changes - +* `tf.keras`: + * `tf.keras.estimator.model_to_estimator` now supports exporting to tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with `model.load_weights`. + * Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function. + * `tf.keras.model.save_model` and `model.save` now defaults to saving a TensorFlow SavedModel. A SavedModel code is only applicable to `tf.keras`. + * Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead. + * `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed, a bug in the resizing implementation was fixed. + * Layers now default to float32, and automatically cast their inputs to the layer's dtype. If you had a model that used float64, it will probably silently use float32 in TensorFlow2, and a warning will be issued that starts with Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information. + * Add an `implementation=3` mode for `tf.keras.layers.LocallyConnected2D` and `tf.keras.layers.LocallyConnected1D` layers using `tf.SparseTensor` to store weights, allowing a dramatic speedup for large sparse models. + * Enable the Keras compile API `experimental_run_tf_function` flag by default. This flag enables single training/eval/predict execution path. With this 1. All input types are converted to `Dataset`. 2. 
When distribution strategy is not specified this goes through the no-op distribution strategy path. 3. Execution is wrapped in tf.function unless `run_eagerly=True` is set in compile. + * `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use `tf.config.threading` APIs. + * Promoting `unbatch` from experimental to core API. -* Adds `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()`. +* Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow. * EagerTensor now support buffer interface for tensors. * This change bumps the version number of the FullyConnected Op to 5. -* tensorflow : crash when pointer become nullptr. -* `tf.keras.estimator.model_to_estimator` now supports exporting to tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with `model.load_weights`. * Add support for defaulting the value of `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores. * parallel_for: Add converter for `MatrixDiag`. -* Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function. * Add `narrow_range` attribute to `QuantizeAndDequantizeV2` and V3. -* `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use `tf.config.threading` APIs. * Added new op: `tf.strings.unsorted_segment_join`. -* `tf.keras.model.save_model` and `model.save` now defaults to saving a TensorFlow SavedModel. A SavedModel code is only applicable to `tf.keras`. * Tensorflow code now produces 2 different pip packages: tensorflow_core containing all the code (in the future it will contain only the private implementation) and tensorflow which is a virtual pip package doing forwarding to tensorflow_core (and in the future will contain only the public API of tensorflow) * Adding support for datasets as inputs to `from_tensors` and `from_tensor_slices` and batching and unbatching of nested datasets. -* Add HW acceleration support for topK_v2 -* Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead. +* Add HW acceleration support for `topK_v2`. * Add new `TypeSpec` classes * CloudBigtable version updated to v0.10.0 BEGIN_PUBLIC CloudBigtable version updated to v0.10.0 * Deprecated the use of `constraint=` and `.constraint` with ResourceVariable. @@ -34,7 +35,6 @@ * AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIs. * Improved ragged tensor support in `TensorFlowTestCase`. * Makes the a-normal form transformation in Pyct configurable as to which nodes are converted to variables and which are not. -* `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed, a bug in the resizing implementation was fixed. * `ResizeInputTensor` now works for all delegates * Start of open development of TF, TFLite, XLA MLIR dialects. * Add `EXPAND_DIMS` support to NN API delegate TEST: expand_dims_test @@ -51,13 +51,10 @@ * Add `tf.math.cumulative_logsumexp operation`. * Add `tf.ragged.stack`. * Add delegate support for `QUANTIZED_16BIT_LSTM`. 
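For the `tf.keras` saving notes earlier in this hunk (the deprecation of `tf.keras.experimental.export_saved_model` in favour of `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model`), a minimal sketch assuming TensorFlow 1.15; the model and the `/tmp` path are illustrative assumptions:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer='sgd', loss='mse')

# Replaces the deprecated tf.keras.experimental.export_saved_model:
tf.keras.models.save_model(model, '/tmp/example_saved_model', save_format='tf')
restored = tf.keras.models.load_model('/tmp/example_saved_model')
```

Only the `save_format='tf'` argument comes from the deprecation note itself; everything else here is an assumption made for the sketch.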
-* Add an `implementation=3` mode for `tf.keras.layers.LocallyConnected2D` and `tf.keras.layers.LocallyConnected1D` layers using `tf.SparseTensor` to store weights, allowing a dramatic speedup for large sparse models. * `tf.while_loop` emits a StatelessWhile op if the cond and body functions are stateless and do not touch any resources. * Refactors code in Quant8 LSTM support to reduce TFLite binary size. -* Layers now default to float32, and automatically cast their inputs to the layer's dtype. If you had a model that used float64, it will probably silently use float32 in TensorFlow2, and a warning will be issued that starts with Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information. * Fix memory allocation problem when calling `AddNewInputConstantTensor`. * Delegate application failure leaves interpreter in valid state. -* Enable the Keras compile API `experimental_run_tf_function` flag by default. This flag enables single training/eval/predict execution path. With this 1. All input types are converted to `Dataset`. 2. When distribution strategy is not specified this goes through the no-op distribution strategy path. 3. Execution is wrapped in tf.function unless `run_eagerly=True` is set in compile. * `tf.cond`, `tf.while` and if and while in AutoGraph now accept a nonscalar predicate if has a single element. This does not affec non-V2 control flow. * Enables v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`. * Add check for correct memory alignment to `MemoryAllocation::MemoryAllocation()`. From c6c50019437edcff0948db0f9b2fea98604a80df Mon Sep 17 00:00:00 2001 From: Goldie Gadde Date: Thu, 5 Sep 2019 21:39:16 -0700 Subject: [PATCH 04/15] Update RELEASE.md --- RELEASE.md | 50 ++++++++++++++++++++++++++++---------------------- 1 file changed, 28 insertions(+), 22 deletions(-) diff --git a/RELEASE.md b/RELEASE.md index 440407c39c95d3..ddeee7bcc9f3fc 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -1,37 +1,52 @@ # Release 1.15.0 -This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year. +This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year. + +## Major Features and Improvements +* TensorFlow 1.15 contains a complete implementation of the 2.0 API in its compat.v2 module. It contains a copy of the 1.15 main module (without contrib) in the compat.v1 module. TensorFlow 1.15 is able to emulate 2.0 behavior using the enable_v2_behavior() function. +This enables writing forward compatible code: by explicitly importing either tensorflow.compat.v1 or tensorflow.compat.v2, you can ensure that your code works without modifications against an installation of 1.15 or 2.0. + +* EagerTensor now supports buffer interface for tensors. +* Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow. +* Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`. 
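A minimal sketch of the forward-compatible import pattern and the v2 control-flow toggles listed in the Major Features items above, assuming a TensorFlow 1.15 installation; the placeholder graph is illustrative only:

```python
import tensorflow.compat.v1 as tf

tf.enable_control_flow_v2()   # opt in to v2 control flow on 1.15; tf.disable_control_flow_v2() opts back out

x = tf.placeholder(tf.float32, shape=[])
y = tf.cond(x > 0.0, lambda: x * 2.0, lambda: -x)   # built with the v2 (functional) control-flow ops

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: 3.0}))          # 6.0
```

Per the note above, the explicit `tensorflow.compat.v1` import is what keeps code like this importable against a 2.0 installation as well.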
+ +## Breaking Changes +* Tensorflow code now produces 2 different pip packages: tensorflow_core containing all the code (in the future it will contain only the private implementation) and tensorflow which is a virtual pip package doing forwarding to tensorflow_core (and in the future will contain only the public API of tensorflow). We don't expect this to be breaking, unless you were importing directly from the implementation. +* `tf.keras`: + * `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use `tf.config.threading` APIs. + * `tf.keras.model.save_model` and `model.save` now defaults to saving a TensorFlow SavedModel. ## Bug Fixes and Other Changes +* `tf.data`: + * Promoting `unbatch` from experimental to core API. + * Adding support for datasets as inputs to `from_tensors` and `from_tensor_slices` and batching and unbatching of nested datasets. * `tf.keras`: * `tf.keras.estimator.model_to_estimator` now supports exporting to tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with `model.load_weights`. * Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function. - * `tf.keras.model.save_model` and `model.save` now defaults to saving a TensorFlow SavedModel. A SavedModel code is only applicable to `tf.keras`. * Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead. * `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed, a bug in the resizing implementation was fixed. * Layers now default to float32, and automatically cast their inputs to the layer's dtype. If you had a model that used float64, it will probably silently use float32 in TensorFlow2, and a warning will be issued that starts with Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information. * Add an `implementation=3` mode for `tf.keras.layers.LocallyConnected2D` and `tf.keras.layers.LocallyConnected1D` layers using `tf.SparseTensor` to store weights, allowing a dramatic speedup for large sparse models. * Enable the Keras compile API `experimental_run_tf_function` flag by default. This flag enables single training/eval/predict execution path. With this 1. All input types are converted to `Dataset`. 2. When distribution strategy is not specified this goes through the no-op distribution strategy path. 3. Execution is wrapped in tf.function unless `run_eagerly=True` is set in compile. - * `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use `tf.config.threading` APIs. - -* Promoting `unbatch` from experimental to core API. -* Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow. -* EagerTensor now support buffer interface for tensors. -* This change bumps the version number of the FullyConnected Op to 5. + * Raise error if `batch_size` argument is used when input is dataset/generator/keras sequence. +* `tf.lite` + * Add `GATHER` support to NN API delegate. + * tflite object detection script has a debug mode. 
+ * Add delegate support for QUANTIZE. + * Added evaluation script for COCO minival. + * Add delegate support for `QUANTIZED_16BIT_LSTM`. + * Converts hardswish subgraphs into atomic ops. * Add support for defaulting the value of `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores. -* parallel_for: Add converter for `MatrixDiag`. +* `parallel_for`: Add converter for `MatrixDiag`. * Add `narrow_range` attribute to `QuantizeAndDequantizeV2` and V3. * Added new op: `tf.strings.unsorted_segment_join`. -* Tensorflow code now produces 2 different pip packages: tensorflow_core containing all the code (in the future it will contain only the private implementation) and tensorflow which is a virtual pip package doing forwarding to tensorflow_core (and in the future will contain only the public API of tensorflow) -* Adding support for datasets as inputs to `from_tensors` and `from_tensor_slices` and batching and unbatching of nested datasets. * Add HW acceleration support for `topK_v2`. -* Add new `TypeSpec` classes +* Add new `TypeSpec` classes. * CloudBigtable version updated to v0.10.0 BEGIN_PUBLIC CloudBigtable version updated to v0.10.0 * Deprecated the use of `constraint=` and `.constraint` with ResourceVariable. * Expose Head as public API. * AutoGraph is now applied automatically to user functions passed to APIs of `tf.data` and `tf.distribute`. If AutoGraph is disabled in the the calling code, it will also be disabled in the user functions. * Update docstring for gather to properly describe the non-empty batch_dims case. * Added `tf.sparse.from_dense` utility function. -* Add `GATHER` support to NN API delegate * AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIs. * Improved ragged tensor support in `TensorFlowTestCase`. * Makes the a-normal form transformation in Pyct configurable as to which nodes are converted to variables and which are not. @@ -45,28 +60,19 @@ This is the last 1.x release for TensorFlow. We do not expect to update the 1.x * Added a function nested_value_rowids for ragged tensors. * fixed a bug in histogram_op.cc. * Add guard to avoid acceleration of L2 Normalization with input rank != 4 -* Added evaluation script for COCO minival -* Add delegate support for QUANTIZE -* tflite object detection script has a debug mode * Add `tf.math.cumulative_logsumexp operation`. * Add `tf.ragged.stack`. -* Add delegate support for `QUANTIZED_16BIT_LSTM`. * `tf.while_loop` emits a StatelessWhile op if the cond and body functions are stateless and do not touch any resources. * Refactors code in Quant8 LSTM support to reduce TFLite binary size. * Fix memory allocation problem when calling `AddNewInputConstantTensor`. * Delegate application failure leaves interpreter in valid state. * `tf.cond`, `tf.while` and if and while in AutoGraph now accept a nonscalar predicate if has a single element. This does not affec non-V2 control flow. -* Enables v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`. * Add check for correct memory alignment to `MemoryAllocation::MemoryAllocation()`. * Extracts `NNAPIDelegateKernel` from nnapi_delegate.cc * Added support for `FusedBatchNormV3` in converter. * A ragged to dense op for directly calculating tensors. -* Converts hardswish subgraphs into atomic ops. 
-* The equality operation on Tensors & Variables now compares on value instead of id(). As a result, both Tensors & Variables are no longer hashable types. -* Raise error if `batch_size` argument is used when input is dataset/generator/keras sequence. * The equality operation on Tensors & Variables now compares on value instead of id(). As a result, both Tensors & Variables are no longer hashable types. * Some `tf.assert_*` methods now raise assertions at operation creation time (i.e. when this Python line executes) if the input tensors' values are known at that time, not during the session.run(). When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in `feed_dict` argument to session.run(), an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often). -* The equality operation on Tensors & Variables now compares on value instead of id(). As a result, both Tensors & Variables are no longer hashable types. * Fix accidental quadratic graph construction cost in graph-mode `tf.gradients()`. ## Thanks to our Contributors From 9c4f2e95468da1d2cc8c8ccd3674ff1cca5a24f8 Mon Sep 17 00:00:00 2001 From: Goldie Gadde Date: Thu, 5 Sep 2019 21:40:19 -0700 Subject: [PATCH 05/15] Update RELEASE.md --- RELEASE.md | 1 - 1 file changed, 1 deletion(-) diff --git a/RELEASE.md b/RELEASE.md index ddeee7bcc9f3fc..342eb6fd2b55c1 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -4,7 +4,6 @@ This is the last 1.x release for TensorFlow. We do not expect to update the 1.x ## Major Features and Improvements * TensorFlow 1.15 contains a complete implementation of the 2.0 API in its compat.v2 module. It contains a copy of the 1.15 main module (without contrib) in the compat.v1 module. TensorFlow 1.15 is able to emulate 2.0 behavior using the enable_v2_behavior() function. This enables writing forward compatible code: by explicitly importing either tensorflow.compat.v1 or tensorflow.compat.v2, you can ensure that your code works without modifications against an installation of 1.15 or 2.0. - * EagerTensor now supports buffer interface for tensors. * Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow. * Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`. From 3c79bb4a38de0ecb10baf488b9464336c6e14d5d Mon Sep 17 00:00:00 2001 From: Goldie Gadde Date: Thu, 5 Sep 2019 21:50:59 -0700 Subject: [PATCH 06/15] Update RELEASE.md --- RELEASE.md | 27 ++++++++++++++------------- 1 file changed, 14 insertions(+), 13 deletions(-) diff --git a/RELEASE.md b/RELEASE.md index 342eb6fd2b55c1..6f2b1c6a18c466 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -7,12 +7,21 @@ This enables writing forward compatible code: by explicitly importing either ten * EagerTensor now supports buffer interface for tensors. * Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow. * Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`. +* AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIs. 
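A hedged sketch of the AutoGraph behavior described in the item just above; the function and inputs are invented for illustration and assume 2.0 behavior is enabled on a 1.15 install:

```python
import tensorflow.compat.v1 as tf1
import tensorflow.compat.v2 as tf

tf1.enable_v2_behavior()   # assumption: eager execution and v2 semantics at program start

@tf.function
def sum_positive(values):
    total = tf.constant(0.0)
    for v in values:        # plain Python loop, rewritten by AutoGraph
        if v > 0.0:         # plain Python conditional, rewritten to graph control flow
            total += v
    return total

print(sum_positive(tf.constant([1.0, -2.0, 3.0])))  # tf.Tensor(4.0, ...)
```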
+* Adds enable_tensor_equality(), which switches the behavior such that: + Tensors are no longer hashable + Tensors can be compared with == and !=, yielding a Boolean Tensor with element-wise comparison results. This will be the default behavior in 2.0 ## Breaking Changes * Tensorflow code now produces 2 different pip packages: tensorflow_core containing all the code (in the future it will contain only the private implementation) and tensorflow which is a virtual pip package doing forwarding to tensorflow_core (and in the future will contain only the public API of tensorflow). We don't expect this to be breaking, unless you were importing directly from the implementation. +* TensorFlow 1.15 is built using devtoolset7 on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow. +* Deprecated the use of `constraint=` and `.constraint` with ResourceVariable. * `tf.keras`: * `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use `tf.config.threading` APIs. * `tf.keras.model.save_model` and `model.save` now defaults to saving a TensorFlow SavedModel. + * `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed, a bug in the resizing implementation was fixed. + * Layers now default to float32, and automatically cast their inputs to the layer's dtype. If you had a model that used float64, it will probably silently use float32 in TensorFlow2, and a warning will be issued that starts with Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information. + * Some `tf.assert_*` methods now raise assertions at operation creation time (i.e. when this Python line executes) if the input tensors' values are known at that time, not during the session.run(). When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in `feed_dict` argument to session.run(), an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often). ## Bug Fixes and Other Changes * `tf.data`: @@ -22,8 +31,6 @@ This enables writing forward compatible code: by explicitly importing either ten * `tf.keras.estimator.model_to_estimator` now supports exporting to tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with `model.load_weights`. * Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function. * Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead. - * `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed, a bug in the resizing implementation was fixed. - * Layers now default to float32, and automatically cast their inputs to the layer's dtype. 
If you had a model that used float64, it will probably silently use float32 in TensorFlow2, and a warning will be issued that starts with Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information. * Add an `implementation=3` mode for `tf.keras.layers.LocallyConnected2D` and `tf.keras.layers.LocallyConnected1D` layers using `tf.SparseTensor` to store weights, allowing a dramatic speedup for large sparse models. * Enable the Keras compile API `experimental_run_tf_function` flag by default. This flag enables single training/eval/predict execution path. With this 1. All input types are converted to `Dataset`. 2. When distribution strategy is not specified this goes through the no-op distribution strategy path. 3. Execution is wrapped in tf.function unless `run_eagerly=True` is set in compile. * Raise error if `batch_size` argument is used when input is dataset/generator/keras sequence. @@ -41,18 +48,17 @@ This enables writing forward compatible code: by explicitly importing either ten * Add HW acceleration support for `topK_v2`. * Add new `TypeSpec` classes. * CloudBigtable version updated to v0.10.0 BEGIN_PUBLIC CloudBigtable version updated to v0.10.0 -* Deprecated the use of `constraint=` and `.constraint` with ResourceVariable. -* Expose Head as public API. -* AutoGraph is now applied automatically to user functions passed to APIs of `tf.data` and `tf.distribute`. If AutoGraph is disabled in the the calling code, it will also be disabled in the user functions. +* Expose `Head` as public API. * Update docstring for gather to properly describe the non-empty batch_dims case. * Added `tf.sparse.from_dense` utility function. -* AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIs. * Improved ragged tensor support in `TensorFlowTestCase`. * Makes the a-normal form transformation in Pyct configurable as to which nodes are converted to variables and which are not. -* `ResizeInputTensor` now works for all delegates -* Start of open development of TF, TFLite, XLA MLIR dialects. +* `ResizeInputTensor` now works for all delegates. * Add `EXPAND_DIMS` support to NN API delegate TEST: expand_dims_test * `tf.cond` emits a StatelessIf op if the branch functions are stateless and do not touch any resources. +* `tf.cond`, `tf.while` and `if` and `while` in AutoGraph now accept a nonscalar predicate if has a single element. This does not affect non-V2 control flow. +* `tf.while_loop` emits a StatelessWhile op if the cond and body functions are stateless and do not touch any resources. +* Refactors code in Quant8 LSTM support to reduce TFLite binary size. * Add support of local soft device placement for eager op. * Pass partial_pivoting to the `_TridiagonalSolveGrad`. * Add HW acceleration support for `LogSoftMax`. @@ -61,17 +67,12 @@ This enables writing forward compatible code: by explicitly importing either ten * Add guard to avoid acceleration of L2 Normalization with input rank != 4 * Add `tf.math.cumulative_logsumexp operation`. * Add `tf.ragged.stack`. -* `tf.while_loop` emits a StatelessWhile op if the cond and body functions are stateless and do not touch any resources. 
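A short sketch of the two fixes suggested in the `tf.keras` float64 note above; the model, input shape, and sample data are assumptions made for illustration:

```python
import numpy as np
import tensorflow as tf

# Option 1: make float64 the default Keras dtype globally.
tf.keras.backend.set_floatx('float64')

# Option 2: request float64 on individual layers.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', dtype='float64', input_shape=(4,)),
    tf.keras.layers.Dense(1, dtype='float64'),
])

x = np.random.rand(2, 4)       # float64 input
print(model(x).dtype)          # float64, rather than a silent cast to float32
```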
-* Refactors code in Quant8 LSTM support to reduce TFLite binary size. * Fix memory allocation problem when calling `AddNewInputConstantTensor`. * Delegate application failure leaves interpreter in valid state. -* `tf.cond`, `tf.while` and if and while in AutoGraph now accept a nonscalar predicate if has a single element. This does not affec non-V2 control flow. * Add check for correct memory alignment to `MemoryAllocation::MemoryAllocation()`. * Extracts `NNAPIDelegateKernel` from nnapi_delegate.cc * Added support for `FusedBatchNormV3` in converter. * A ragged to dense op for directly calculating tensors. -* The equality operation on Tensors & Variables now compares on value instead of id(). As a result, both Tensors & Variables are no longer hashable types. -* Some `tf.assert_*` methods now raise assertions at operation creation time (i.e. when this Python line executes) if the input tensors' values are known at that time, not during the session.run(). When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in `feed_dict` argument to session.run(), an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often). * Fix accidental quadratic graph construction cost in graph-mode `tf.gradients()`. ## Thanks to our Contributors From e00661134891cafffda9788447f4be29f0ee8710 Mon Sep 17 00:00:00 2001 From: Goldie Gadde Date: Fri, 6 Sep 2019 14:09:34 -0700 Subject: [PATCH 07/15] Update RELEASE.md --- RELEASE.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/RELEASE.md b/RELEASE.md index 6f2b1c6a18c466..101279feb94309 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -14,7 +14,7 @@ This enables writing forward compatible code: by explicitly importing either ten ## Breaking Changes * Tensorflow code now produces 2 different pip packages: tensorflow_core containing all the code (in the future it will contain only the private implementation) and tensorflow which is a virtual pip package doing forwarding to tensorflow_core (and in the future will contain only the public API of tensorflow). We don't expect this to be breaking, unless you were importing directly from the implementation. -* TensorFlow 1.15 is built using devtoolset7 on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow. +* TensorFlow 1.15 is built using devtoolset7(GCC7) on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow. * Deprecated the use of `constraint=` and `.constraint` with ResourceVariable. * `tf.keras`: * `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use `tf.config.threading` APIs. From 70a0177b7a3cb17c5a037d5e1f0ec656de01f72d Mon Sep 17 00:00:00 2001 From: Goldie Gadde Date: Fri, 6 Sep 2019 14:32:23 -0700 Subject: [PATCH 08/15] Update RELEASE.md --- RELEASE.md | 22 ++++++++++------------ 1 file changed, 10 insertions(+), 12 deletions(-) diff --git a/RELEASE.md b/RELEASE.md index 101279feb94309..7d7868c7597715 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -2,25 +2,25 @@ This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year. 
## Major Features and Improvements -* TensorFlow 1.15 contains a complete implementation of the 2.0 API in its compat.v2 module. It contains a copy of the 1.15 main module (without contrib) in the compat.v1 module. TensorFlow 1.15 is able to emulate 2.0 behavior using the enable_v2_behavior() function. -This enables writing forward compatible code: by explicitly importing either tensorflow.compat.v1 or tensorflow.compat.v2, you can ensure that your code works without modifications against an installation of 1.15 or 2.0. +* TensorFlow 1.15 contains a complete implementation of the 2.0 API in its `compat.v2 module`. It contains a copy of the 1.15 main module (without `contrib`) in the `compat.v1 module`. TensorFlow 1.15 is able to emulate 2.0 behavior using the `enable_v2_behavior()` function. +This enables writing forward compatible code: by explicitly importing either `tensorflow.compat.v1` or `tensorflow.compat.v2`, you can ensure that your code works without modifications against an installation of 1.15 or 2.0. * EagerTensor now supports buffer interface for tensors. * Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow. * Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`. -* AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIs. +* AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIS. * Adds enable_tensor_equality(), which switches the behavior such that: - Tensors are no longer hashable - Tensors can be compared with == and !=, yielding a Boolean Tensor with element-wise comparison results. This will be the default behavior in 2.0 + * Tensors are no longer hashable. + * Tensors can be compared with == and !=, yielding a Boolean Tensor with element-wise comparison results. This will be the default behavior in 2.0. ## Breaking Changes * Tensorflow code now produces 2 different pip packages: tensorflow_core containing all the code (in the future it will contain only the private implementation) and tensorflow which is a virtual pip package doing forwarding to tensorflow_core (and in the future will contain only the public API of tensorflow). We don't expect this to be breaking, unless you were importing directly from the implementation. -* TensorFlow 1.15 is built using devtoolset7(GCC7) on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow. +* TensorFlow 1.15 is built using devtoolset7 (GCC7) on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow. * Deprecated the use of `constraint=` and `.constraint` with ResourceVariable. * `tf.keras`: * `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use `tf.config.threading` APIs. * `tf.keras.model.save_model` and `model.save` now defaults to saving a TensorFlow SavedModel. * `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed, a bug in the resizing implementation was fixed. 
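Before the remaining `tf.keras` notes, a hedged sketch of the `enable_tensor_equality()` behavior listed under Major Features above, assuming TensorFlow 1.15 with eager execution enabled:

```python
import tensorflow.compat.v1 as tf

tf.enable_eager_execution()   # assumption: eager mode for readable output
tf.enable_tensor_equality()   # opt in on 1.15; this becomes the default in 2.0

a = tf.constant([1, 2, 3])
b = tf.constant([1, 5, 3])
print(a == b)                 # element-wise result: [ True False  True]
# hash(a)                     # would now raise, since Tensors are no longer hashable
```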
- * Layers now default to float32, and automatically cast their inputs to the layer's dtype. If you had a model that used float64, it will probably silently use float32 in TensorFlow2, and a warning will be issued that starts with Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information. + * Layers now default to `float32`, and automatically cast their inputs to the layer's dtype. If you had a model that used `float64`, it will probably silently use `float32` in TensorFlow2, and a warning will be issued that starts with Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information. * Some `tf.assert_*` methods now raise assertions at operation creation time (i.e. when this Python line executes) if the input tensors' values are known at that time, not during the session.run(). When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in `feed_dict` argument to session.run(), an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often). ## Bug Fixes and Other Changes @@ -47,9 +47,9 @@ This enables writing forward compatible code: by explicitly importing either ten * Added new op: `tf.strings.unsorted_segment_join`. * Add HW acceleration support for `topK_v2`. * Add new `TypeSpec` classes. -* CloudBigtable version updated to v0.10.0 BEGIN_PUBLIC CloudBigtable version updated to v0.10.0 +* CloudBigtable version updated to v0.10.0. * Expose `Head` as public API. -* Update docstring for gather to properly describe the non-empty batch_dims case. +* Update docstring for gather to properly describe the non-empty `batch_dims` case. * Added `tf.sparse.from_dense` utility function. * Improved ragged tensor support in `TensorFlowTestCase`. * Makes the a-normal form transformation in Pyct configurable as to which nodes are converted to variables and which are not. @@ -60,10 +60,8 @@ This enables writing forward compatible code: by explicitly importing either ten * `tf.while_loop` emits a StatelessWhile op if the cond and body functions are stateless and do not touch any resources. * Refactors code in Quant8 LSTM support to reduce TFLite binary size. * Add support of local soft device placement for eager op. -* Pass partial_pivoting to the `_TridiagonalSolveGrad`. * Add HW acceleration support for `LogSoftMax`. -* Added a function nested_value_rowids for ragged tensors. -* fixed a bug in histogram_op.cc. +* Added a function `nested_value_rowids` for ragged tensors. * Add guard to avoid acceleration of L2 Normalization with input rank != 4 * Add `tf.math.cumulative_logsumexp operation`. * Add `tf.ragged.stack`. 
From 33810eedb854cb14650806bc1bd0c8f3f34dbaac Mon Sep 17 00:00:00 2001 From: Goldie Gadde Date: Fri, 6 Sep 2019 14:34:41 -0700 Subject: [PATCH 09/15] Update RELEASE.md --- RELEASE.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/RELEASE.md b/RELEASE.md index 7d7868c7597715..a829f73c7acae0 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -4,7 +4,7 @@ This is the last 1.x release for TensorFlow. We do not expect to update the 1.x ## Major Features and Improvements * TensorFlow 1.15 contains a complete implementation of the 2.0 API in its `compat.v2 module`. It contains a copy of the 1.15 main module (without `contrib`) in the `compat.v1 module`. TensorFlow 1.15 is able to emulate 2.0 behavior using the `enable_v2_behavior()` function. This enables writing forward compatible code: by explicitly importing either `tensorflow.compat.v1` or `tensorflow.compat.v2`, you can ensure that your code works without modifications against an installation of 1.15 or 2.0. -* EagerTensor now supports buffer interface for tensors. +* EagerTensor now supports numpy buffer interface for tensors. * Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow. * Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`. * AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIS. From 4da95958a1e3ec8a87b1f7308bfb3ace6d60f2c5 Mon Sep 17 00:00:00 2001 From: Goldie Gadde Date: Fri, 6 Sep 2019 14:35:50 -0700 Subject: [PATCH 10/15] Update RELEASE.md --- RELEASE.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/RELEASE.md b/RELEASE.md index a829f73c7acae0..a449a47bdb1e38 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -8,7 +8,7 @@ This enables writing forward compatible code: by explicitly importing either `te * Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow. * Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`. * AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIS. -* Adds enable_tensor_equality(), which switches the behavior such that: +* Adds `enable_tensor_equality()`, which switches the behavior such that: * Tensors are no longer hashable. * Tensors can be compared with == and !=, yielding a Boolean Tensor with element-wise comparison results. This will be the default behavior in 2.0. From e46dd501ed9d1d47a6cc77d49558503aff4d0799 Mon Sep 17 00:00:00 2001 From: Goldie Gadde Date: Fri, 6 Sep 2019 14:36:46 -0700 Subject: [PATCH 11/15] Update RELEASE.md --- RELEASE.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/RELEASE.md b/RELEASE.md index a449a47bdb1e38..dfc8d59b7a76f6 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -21,7 +21,7 @@ This enables writing forward compatible code: by explicitly importing either `te * `tf.keras.model.save_model` and `model.save` now defaults to saving a TensorFlow SavedModel. * `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed, a bug in the resizing implementation was fixed. 
* Layers now default to `float32`, and automatically cast their inputs to the layer's dtype. If you had a model that used `float64`, it will probably silently use `float32` in TensorFlow2, and a warning will be issued that starts with Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information. - * Some `tf.assert_*` methods now raise assertions at operation creation time (i.e. when this Python line executes) if the input tensors' values are known at that time, not during the session.run(). When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in `feed_dict` argument to session.run(), an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often). + * Some `tf.assert_*` methods now raise assertions at operation creation time (i.e. when this Python line executes) if the input tensors' values are known at that time, not during the session.run(). When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in `feed_dict` argument to `session.run()`, an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often). ## Bug Fixes and Other Changes * `tf.data`: From e0a34456e28e31095a6a350cee275a97f68749e6 Mon Sep 17 00:00:00 2001 From: Goldie Gadde Date: Fri, 6 Sep 2019 14:37:32 -0700 Subject: [PATCH 12/15] Update RELEASE.md --- RELEASE.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/RELEASE.md b/RELEASE.md index dfc8d59b7a76f6..7eb85212ebbc73 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -37,7 +37,7 @@ This enables writing forward compatible code: by explicitly importing either `te * `tf.lite` * Add `GATHER` support to NN API delegate. * tflite object detection script has a debug mode. - * Add delegate support for QUANTIZE. + * Add delegate support for `QUANTIZE`. * Added evaluation script for COCO minival. * Add delegate support for `QUANTIZED_16BIT_LSTM`. * Converts hardswish subgraphs into atomic ops. From 9b3f5f76214027ab9484939a9595d30ed9afb5dd Mon Sep 17 00:00:00 2001 From: Yifei Feng <1192265+yifeif@users.noreply.github.com> Date: Fri, 6 Sep 2019 15:32:52 -0700 Subject: [PATCH 13/15] Add pip consolidation to 1.15.0 release note. --- RELEASE.md | 1 + 1 file changed, 1 insertion(+) diff --git a/RELEASE.md b/RELEASE.md index 7eb85212ebbc73..b61196126fad24 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -2,6 +2,7 @@ This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year. ## Major Features and Improvements +* As [announced](https://groups.google.com/a/tensorflow.org/forum/#!topic/developers/iRCt5m4qUz0), `tensorflow` pip package will by default include GPU support (same as `tensorflow-gpu` now) for the platforms we currently have GPU support (Linux and Windows). It will work on machines with and without Nvidia GPUs. 
`tensorflow-gpu` will still be available, and CPU-only packages can be downloaded at `tensorflow-cpu` for users who are concerned about package size. * TensorFlow 1.15 contains a complete implementation of the 2.0 API in its `compat.v2 module`. It contains a copy of the 1.15 main module (without `contrib`) in the `compat.v1 module`. TensorFlow 1.15 is able to emulate 2.0 behavior using the `enable_v2_behavior()` function. This enables writing forward compatible code: by explicitly importing either `tensorflow.compat.v1` or `tensorflow.compat.v2`, you can ensure that your code works without modifications against an installation of 1.15 or 2.0. * EagerTensor now supports numpy buffer interface for tensors. From b511266506ec0b0aff0e01a3bd4ac3b8500f192f Mon Sep 17 00:00:00 2001 From: Mihai Maruseac Date: Tue, 10 Sep 2019 13:21:28 -0700 Subject: [PATCH 14/15] Add `tf.estimator` release notes At one point in the future we might want to not add those notes into TF's ones to properly separate the release cycles of the two projects. But for now they are still entangled --- RELEASE.md | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/RELEASE.md b/RELEASE.md index b61196126fad24..914ff15e364790 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -25,6 +25,11 @@ This enables writing forward compatible code: by explicitly importing either `te * Some `tf.assert_*` methods now raise assertions at operation creation time (i.e. when this Python line executes) if the input tensors' values are known at that time, not during the session.run(). When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in `feed_dict` argument to `session.run()`, an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often). ## Bug Fixes and Other Changes +* `tf.estimator`: + * `tf.keras.estimator.model_to_estimator` now supports exporting to `tf.train.Checkpoint` format, which allows the saved checkpoints to be compatible with `model.load_weights`. + * Fix tests in canned estimators. + * Expose Head as public API. + * Fixes critical bugs that help with `DenseFeatures` usability in TF2 * `tf.data`: * Promoting `unbatch` from experimental to core API. * Adding support for datasets as inputs to `from_tensors` and `from_tensor_slices` and batching and unbatching of nested datasets. From b27ac431aa37cfeb9d5c35cc50081cdb6763a40e Mon Sep 17 00:00:00 2001 From: Goldie Gadde Date: Mon, 14 Oct 2019 13:00:19 -0700 Subject: [PATCH 15/15] Update RELEASE.md --- RELEASE.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/RELEASE.md b/RELEASE.md index 914ff15e364790..24b99c2f35445e 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -3,7 +3,7 @@ This is the last 1.x release for TensorFlow. We do not expect to update the 1.x ## Major Features and Improvements * As [announced](https://groups.google.com/a/tensorflow.org/forum/#!topic/developers/iRCt5m4qUz0), `tensorflow` pip package will by default include GPU support (same as `tensorflow-gpu` now) for the platforms we currently have GPU support (Linux and Windows). It will work on machines with and without Nvidia GPUs. `tensorflow-gpu` will still be available, and CPU-only packages can be downloaded at `tensorflow-cpu` for users who are concerned about package size. 
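A hedged sketch of the `tf.estimator` items added in the patch above; the model, `model_dir`, and the load-back call are illustrative assumptions, not part of the release notes:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer='sgd', loss='mse')

est = tf.keras.estimator.model_to_estimator(keras_model=model, model_dir='/tmp/est_dir')
# With the tf.train.Checkpoint export described above, checkpoints written while
# training `est` are intended to be loadable back into the Keras model, e.g. (assumption):
# model.load_weights(tf.train.latest_checkpoint('/tmp/est_dir'))
```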
-* TensorFlow 1.15 contains a complete implementation of the 2.0 API in its `compat.v2 module`. It contains a copy of the 1.15 main module (without `contrib`) in the `compat.v1 module`. TensorFlow 1.15 is able to emulate 2.0 behavior using the `enable_v2_behavior()` function. +* TensorFlow 1.15 contains a complete implementation of the 2.0 API in its `compat.v2` module. It contains a copy of the 1.15 main module (without `contrib`) in the `compat.v1` module. TensorFlow 1.15 is able to emulate 2.0 behavior using the `enable_v2_behavior()` function. This enables writing forward compatible code: by explicitly importing either `tensorflow.compat.v1` or `tensorflow.compat.v2`, you can ensure that your code works without modifications against an installation of 1.15 or 2.0. * EagerTensor now supports numpy buffer interface for tensors. * Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow. @@ -11,10 +11,10 @@ This enables writing forward compatible code: by explicitly importing either `te * AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIS. * Adds `enable_tensor_equality()`, which switches the behavior such that: * Tensors are no longer hashable. - * Tensors can be compared with == and !=, yielding a Boolean Tensor with element-wise comparison results. This will be the default behavior in 2.0. + * Tensors can be compared with `==` and `!=`, yielding a Boolean Tensor with element-wise comparison results. This will be the default behavior in 2.0. ## Breaking Changes -* Tensorflow code now produces 2 different pip packages: tensorflow_core containing all the code (in the future it will contain only the private implementation) and tensorflow which is a virtual pip package doing forwarding to tensorflow_core (and in the future will contain only the public API of tensorflow). We don't expect this to be breaking, unless you were importing directly from the implementation. +* Tensorflow code now produces 2 different pip packages: `tensorflow_core` containing all the code (in the future it will contain only the private implementation) and `tensorflow` which is a virtual pip package doing forwarding to `tensorflow_core` (and in the future will contain only the public API of tensorflow). We don't expect this to be breaking, unless you were importing directly from the implementation. * TensorFlow 1.15 is built using devtoolset7 (GCC7) on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow. * Deprecated the use of `constraint=` and `.constraint` with ResourceVariable. * `tf.keras`: