Releases: onnx/onnx-tensorflow

v1.10.0

17 Mar 15:41
e2a8a71

Major changes and updates since v1.9.0 release:

Support ONNX opset 15

All Opset 15 operators are fully or partially supported; please refer to support_status_v1_10_0.md for details.

TensorFlow update

Supported TensorFlow version is updated to 2.8.0.

Bug fixes

Several bug fixes are included.

v1.9.0

24 Aug 01:08
72c8144

Change Log

Major changes and updates since v1.8.0 release:

Support ONNX opset 14

  • All Opset 14 operators are fully or partially supported; please refer to support_status_v1_9_0.md for details.

TensorFlow update

  • Supported TensorFlow version is updated to 2.6.0.

Python update

  • Supported Python versions are updated to 3.8 and 3.9.

Support Loading ONNX Model with External Data
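
As an illustration only (the file names are placeholders), loading a model whose large initializers are stored as external data works through the standard onnx.load path, which pulls external tensors in by default when they sit next to the model file:

    import onnx
    from onnx_tf.backend import prepare

    # Assumes "big_model.onnx" keeps its large initializers in external files
    # in the same directory; onnx.load loads the external data by default.
    model = onnx.load("big_model.onnx")
    tf_rep = prepare(model)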

Bug fixes

  • Several bug fixes are included.

v1.8.0

13 Apr 21:29
db9a0e8

Change Log

Major changes and updates since v1.7.0 release:

Support ONNX opset 13

  • All Opset 13 operators are supported except training ops; please refer to support_status_v1_8_0.md for details.

Model zoo verification

Inference graph based training and examples

Update CI

  • Moved CI from Travis to GitHub Actions
  • Added a new automated test for all ONNX model zoo models

Special notes

  • TensorFlow 1.x is no longer supported starting with ONNX-TF 1.8.0.

v1.7.0

24 Nov 18:06
a4beb66

Change Log

Major changes and updates since v1.6.0 release:

Export model in SavedModel format

  • API: "onnx_tf.backend_rep.TensorflowRep.export_graph" and CLI: "convert" create a TensorFlow SavedModel that users can deploy in TensorFlow.
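
A minimal sketch of the API path, with placeholder file paths; the CLI equivalent is roughly "onnx-tf convert -i model.onnx -o exported_model":

    import onnx
    from onnx_tf.backend import prepare

    # "model.onnx" and "exported_model" are placeholder paths for illustration.
    model = onnx.load("model.onnx")
    tf_rep = prepare(model)                # returns an onnx_tf.backend_rep.TensorflowRep
    tf_rep.export_graph("exported_model")  # writes a TensorFlow SavedModel directory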

Auto data type cast to support data types that are not natively supported by TensorFlow

  • Users can set auto_cast=True in the API "onnx_tf.backend.prepare" or the CLI "convert" to enable this auto-cast feature.
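
For example (the model file name is hypothetical), enabling the cast at prepare time:

    import onnx
    from onnx_tf.backend import prepare

    # Hypothetical model containing tensors of a dtype TensorFlow does not
    # natively support; auto_cast=True asks the converter to cast them.
    model = onnx.load("model_with_uint_tensors.onnx")
    tf_rep = prepare(model, auto_cast=True)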

Convert the model to run in a GPU or CPU environment based on user input

  • Users can set device='CPU' (default) or device='CUDA' in the API "onnx_tf.backend.prepare" or the CLI "convert" to select the inference environment.
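
A short sketch of both options (paths are placeholders):

    import onnx
    from onnx_tf.backend import prepare

    model = onnx.load("model.onnx")             # placeholder path
    tf_rep_cpu = prepare(model, device="CPU")   # default: run inference on CPU
    tf_rep_gpu = prepare(model, device="CUDA")  # run inference on a CUDA GPU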

Support Opset 12 operators

  • All Opset 12 operators are supported except training ops; please refer to support_status_v1_7_0.md for details.

Create the graph using tf.function (recommended in TF 2.x) instead of tf.Graph (deprecated in TF 2.x)

  • Used tf.Module as the base class of the converted model
  • Used tf.function to generate the graph automatically
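
A minimal sketch of this tf.Module + tf.function pattern; the class and attribute names are hypothetical and not the converter's actual implementation:

    import tensorflow as tf

    class ConvertedModule(tf.Module):    # hypothetical name for illustration
        def __init__(self):
            super().__init__()
            self.weight = tf.Variable(tf.ones([4, 2]))

        @tf.function
        def __call__(self, x):
            # tf.function traces this Python function into a TensorFlow graph
            return tf.matmul(x, self.weight)

    module = ConvertedModule()
    y = module(tf.ones([1, 4]))          # first call triggers tracing and graph creation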

Define a template to compare inference results with other backends

  • Added a model stepping test for the MNIST model to compare inference results with ONNX Runtime.

Update CI

  • Migrated Travis CI from travis.org to travis.com.
  • Updated CI to skip unsupported operators and allow failures against the latest ONNX master branch.

v1.6.0

23 Jul 22:44
c60a32c

Change Log

Major changes and updates since v1.5.0 release:

Support TensorFlow 2.x and 1.15

  • For production, please use the onnx-tf PyPI package for TensorFlow 2.x conversion, and use tag v1.6.0-tf-1.15 to build a package from source for TensorFlow 1.15 conversion.
  • For development, please use the master branch for TensorFlow 2.x conversion, and the tf-1.x branch for TensorFlow 1.15 conversion.
  • The tf-1.x branch is for users who cannot upgrade to TensorFlow 2.x yet. This branch will only support operators up to ONNX OpSet 12. Users who need operators from OpSet 13 or above should upgrade to TensorFlow 2.x and use the master branch of this repo. By January 1st, 2021 this branch will switch to maintenance mode only; no new development will be added to it from then on.

Support dynamic shape inputs 

  • Added test_dynamic_shape.py to verify the handlers can process dynamic shape inputs.
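
As an illustration (the model file name and input shapes are hypothetical), a model whose input has a dynamic batch dimension can be converted once and run with different batch sizes:

    import numpy as np
    import onnx
    from onnx_tf.backend import prepare

    # Hypothetical model whose input has a dynamic (unknown) batch dimension.
    model = onnx.load("dynamic_batch_model.onnx")
    tf_rep = prepare(model)

    # The same converted representation accepts different batch sizes at run time.
    out_1 = tf_rep.run(np.random.randn(1, 3, 224, 224).astype(np.float32))
    out_8 = tf_rep.run(np.random.randn(8, 3, 224, 224).astype(np.float32))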

Support OpSet 11 operators

Python 2.7 support is deprecated

v1.2.0

01 Oct 04:49

This is an incremental release.

a575d14 Update VERSION_NUMBER (#263)
3ec6374 Change a debug "error" message to warning message (#260)
08cf38f add ConstantLike (#255)
b2ee580 add Less v9 (#256)
416a16a Frontend model test (#252)
2ae9223 Fix python3 issue and disconnected nodes (#248)
a365d26 Update data_type.py (#245)
8cb530f Change onnx-tf behavior regarding a corner case ONNX spec does not address (#247)
7162525 bias_add boradcasting fix (#243)
16bc0b2 add mvn to backend (#241)
b91cba3 add pack unpack to frontend (#236)
53e3bbb Update data_type.py (#238)
4d3836f add expand support (#234)
e1bca54 support max_pool_with_argmax v8 (#228)
2e1e72a rnn_cell_bw is already an array (#225)
dfc6f01 fix conv transpose backend (#223)
1e3209b support min/max/mean/sum v8 backend (#221)
317d70b fix unsqueeze backend (#220)
fa6d4e6 add yapf to README.md (#218)
aa02ddb add imagescaler support to backend (#215)
1a07332 Fix dropout (#217)
8d19f27 force to use test mode in bn v7 (#216)
ce795a6 add slice support to frontend (#214)
0006132 enable non-strict mode (#212)
af772aa some legacy fix (#210)
486d989 Refactor backend (#205)
1c05197 multi outputs support (#204)
5690957 usr defs.ONNX_DOMAIN instead (#206)
26189e9 unimplemented exception domain support (#198)
d20d3d7 Add instance norm (#202)
b9b8014 fix breakage of backward compatibility (#201)
c5819fa Update batch_normalization.py (#200)
d277098 Update math_mixin.py (#199)
d6ac8b8 add optimizer arg to api (#196)
5a21983 New make node api (#197)
516e986 Support ver 7 (#193)
cd8a48b simplify get_tf_pad and use native pool when count_include_pad is 1 (#186)
4cd611a Fix conv and identity (#195)
de00b46 add cast v6 in frontend (#191)
89db14e Fix lstm and add rnn, gru (#156)
e882298 bug fix and some improvements (#192)
871d867 Refactor frontend (#184)

v1.1.2

22 May 16:54

This release is a hotfix for the pip installation problem described in #189.

91740b7 include VERSION_NUMBER, ONNX_VERSION_NUMBER in pypi distribution (#189)
cd3768e Add SpaceToDepth in frontend (#183)
02f5001 Handle dim_size==0 in reshape's shape param (#187)
3e63cdc remove maxpool v7 (#182)
2695db6 Support count include pad of pool ops (#172)
3a4deb9 use native pool as much as possible (#171)

v1.1.1

17 May 19:48
2ac0530

This release contains an important patch to ensure that an existing ONNX release version can be used with ONNX-TF. Prior to this release, there may have been no released version of ONNX that worked with ONNX-TF.

2ac0530 Fix opset with type long not recognized as integer in Python 2 issue (#181)
8400226 keep proto when it is an output in graph (#177)
01d4f43 Update README.md (#180)
593851c add shape test case and refactor (#178)
e00b2be remove out_type attr (#176)
5537985 fix value_info incompatibility problem (#179)
10f2f23 add non-master version of onnx to travis (#174)
4ff9572 add a utility for creating bug report (#170)
30a1092 Update issue templates (#169)
121e360 add fill frontend support (#166)
d2ac24d add infer shapes to GraphDef (#162)
288bfb2 add ceil, exp, floor, log, logsoftmax to frontend (#165)