Update RELEASE.md
Remove oneDNN CPU feature guard mention
penpornk committed May 9, 2022
1 parent 0cbfd51 commit a09f743
Showing 1 changed file with 11 additions and 11 deletions.
RELEASE.md
@@ -64,19 +64,19 @@

* `tf.experimental.dtensor`: Added DTensor, an extension to TensorFlow for large-scale modeling with minimal changes to user code. You are welcome to try it out, though be aware that the DTensor API is experimental and subject to backward-incompatible changes. DTensor and Keras integration is published under `tf.keras.dtensor` in this release (refer to the `tf.keras` entry). The tutorial and guide for DTensor will be published on https://www.tensorflow.org/. Please stay tuned.

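As a quick, unofficial illustration of the kind of usage the forthcoming DTensor guide covers, the sketch below splits the CPU into a few logical devices, builds a one-dimensional mesh, and creates a tensor sharded across it. This is only a sketch: the API is experimental, and names such as `create_mesh`, `Layout`, and `call_with_layout` reflect the `tf.experimental.dtensor` module at the time of this release and should be checked against the published guide.

```python
import tensorflow as tf
from tensorflow.experimental import dtensor

# Split the single physical CPU into several logical devices so a mesh can
# span more than one device on an ordinary machine.
phys_cpu = tf.config.list_physical_devices("CPU")[0]
tf.config.set_logical_device_configuration(
    phys_cpu, [tf.config.LogicalDeviceConfiguration()] * 4)

# A 1-D mesh with a "batch" dimension over four logical CPU devices.
mesh = dtensor.create_mesh([("batch", 4)],
                           devices=[f"CPU:{i}" for i in range(4)])

# Shard the first tensor dimension across the "batch" mesh dimension;
# leave the second dimension replicated.
layout = dtensor.Layout(["batch", dtensor.UNSHARDED], mesh)

# Create a tensor directly with that layout and inspect it.
x = dtensor.call_with_layout(tf.ones, layout, shape=[8, 8])
print(dtensor.fetch_layout(x))
```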
- * [oneDNN CPU custom operations and performance optimizations](https://github.com/tensorflow/community/blob/master/rfcs/20210930-enable-onednn-ops.md) are available in Linux x86, Windows x86, and Linux aarch64 packages.
-   * **Linux and Windows x86 packages:**
-     * **Linux x86:** oneDNN custom ops are *enabled by default* on CPUs with neural-network-focused hardware features such as AVX512_VNNI, AVX512_BF16, AMX, etc., which are found on [Intel Cascade Lake](https://www.intel.com/content/www/us/en/products/platforms/details/cascade-lake.html) and newer CPUs.
+ * [oneDNN CPU performance optimizations](https://github.com/tensorflow/community/blob/master/rfcs/20210930-enable-onednn-ops.md) are available in Linux x86, Windows x86, and Linux aarch64 packages.
+   * **Linux x86 packages:**
+     * oneDNN optimizations are *enabled by default* on CPUs with neural-network-focused hardware features such as AVX512_VNNI, AVX512_BF16, AMX, etc. ([Intel Cascade Lake](https://www.intel.com/content/www/us/en/products/platforms/details/cascade-lake.html) and newer CPUs.)
        * [Example performance speedups.](https://medium.com/intel-analytics-software/leverage-intel-deep-learning-optimizations-in-tensorflow-129faa80ee07)
-     * For older CPUs, oneDNN custom ops are disabled by default.
-   * **Windows x86:** oneDNN custom ops are disabled by default.
-   * These custom ops can yield slightly different numerical results from when they are off due to floating-point round-off errors from different computation approaches and orders.
+     * For older CPUs, oneDNN optimizations are disabled by default.
+     * These optimizations can yield slightly different numerical results from when they are off due to floating-point round-off errors from different computation approaches and orders.
+   * **Windows x86 package:** oneDNN optimizations are disabled by default.
    * **Linux aarch64 (`--config=mkl_aarch64`) package:**
-     * Experimental oneDNN custom ops are disabled by default.
-     * If you experience issues with oneDNN custom ops on, we recommend turning them off.
-   * To explicitly enable or disable oneDNN custom ops and optimizations, set the environment variable `TF_ENABLE_ONEDNN_OPTS` to `1` (enable) or `0` (disable) before running TensorFlow. (The variable is checked during `import tensorflow`.) To fall back to default settings, unset the environment variable.
-   * To verify that the custom ops are on, look for a message with *"oneDNN custom operations are on"* in the log. If the message is not there, it means they are off.
-     * This is not to be confused with the log message beginning with *"This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)..."*. The message can be printed regardless of whether oneDNN custom ops are enabled, because TensorFlow Linux x86's default matrix multiplication and convolution ops also call oneDNN matrix multiplication routines as a basic building block. (See Figure 2 of [TensorFlow RFC #400](https://github.com/tensorflow/community/blob/master/rfcs/20210930-enable-onednn-ops.md).)
+     * Experimental oneDNN optimizations are disabled by default.
+     * If you experience issues with oneDNN optimizations on, we recommend turning them off.
+   * To explicitly enable or disable oneDNN optimizations, set the environment variable `TF_ENABLE_ONEDNN_OPTS` to `1` (enable) or `0` (disable) before running TensorFlow. (The variable is checked during `import tensorflow`.) To fall back to default settings, unset the environment variable.
+   * To verify that the optimizations are on, look for a message with *"oneDNN custom operations are on"* in the log. If the exact phrase is not there, it means they are off.

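A minimal sketch of the `TF_ENABLE_ONEDNN_OPTS` toggle described in the notes above. The variable name and the log phrase come from the release notes; the key point is that the variable must be set before TensorFlow is imported, because it is read during `import tensorflow`.

```python
import os

# Must be set before `import tensorflow`; the value is read at import time.
# "1" forces the oneDNN optimizations on, "0" forces them off; unsetting the
# variable falls back to the platform default described in the release notes.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import tensorflow as tf

# When the optimizations are active, TensorFlow's startup log contains a line
# with "oneDNN custom operations are on"; if that phrase never appears, they
# are off.
print(tf.__version__)
```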

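Because the optimizations can reorder floating-point accumulation, tests that compare results against a reference are best written with a tolerance rather than exact equality. A small sketch of that practice follows; the tolerances here are illustrative and not prescribed by the release notes.

```python
import numpy as np
import tensorflow as tf

x = tf.random.normal([64, 256])
w = tf.random.normal([256, 128])

# Result computed by TensorFlow, which may use oneDNN kernels on this platform.
y = tf.matmul(x, w).numpy()

# Reference computed by NumPy; its accumulation order can differ slightly.
ref = x.numpy() @ w.numpy()

# Allow small round-off differences instead of asserting bitwise equality.
np.testing.assert_allclose(y, ref, rtol=1e-5, atol=1e-5)
```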
# Bug Fixes and Other Changes

