Liqun/fasttrackmain 9.26 (#5625)
### Description
fasttrack to current main as of 9.26 to get all PRs for 1.15.0 release

### Motivation and Context
There are many PRs in the main branch that are needed for the 1.15.0 release, so this is a fast-track merge.

---------

Signed-off-by: Swopper050 <bram@witoos.nl>
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Signed-off-by: Justin Chu <justinchu@microsoft.com>
Signed-off-by: Justin Chu <justinchuby@users.noreply.github.com>
Signed-off-by: Chun-Wei Chen <jacky82226@gmail.com>
Signed-off-by: Liqun Fu <liqfu@microsoft.com>
Signed-off-by: Ganesan Ramalingam <grama@microsoft.com>
Signed-off-by: liqun Fu <liqun.fu@microsoft.com>
Co-authored-by: Bram <b.dewit@applyai.nl>
Co-authored-by: Swopper050 <bram@witoos.nl>
Co-authored-by: jcwchen <jcwchen@users.noreply.github.com>
Co-authored-by: Chun-Wei Chen <jacky82226@gmail.com>
Co-authored-by: Justin Chu <justinchuby@users.noreply.github.com>
Co-authored-by: G. Ramalingam <grama@microsoft.com>
7 people committed Sep 27, 2023
1 parent 14303de commit 0c29608
Showing 74 changed files with 663 additions and 505 deletions.
10 changes: 8 additions & 2 deletions .azure-pipelines/MacOS-CI.yml
@@ -82,7 +82,7 @@ jobs:
python -m pip install onnxruntime
export ORT_MAX_IR_SUPPORTED_VERSION=8
export ORT_MAX_ML_OPSET_SUPPORTED_VERSION=3
export ORT_MAX_ONNX_OPSET_SUPPORTED_VERSION=18
export ORT_MAX_ONNX_OPSET_SUPPORTED_VERSION=19
pytest
if [ $? -ne 0 ]; then
echo "pytest failed when testing onnx with onnxruntime"
@@ -118,5 +118,11 @@ jobs:
echo "git diff for test generation without pillow returned failures. Please check updated node test files"
exit 1
fi
# Internal Protobuf won't have other untracked files like protobuf/
if [ '$(protobuf_type)' == 'Internal' ]; then
if [[ $(git ls-files --others --exclude-standard) ]]; then
echo "Some test-generated files are not included in the PR. Did you forget to add them?"
exit 1
fi
fi
displayName: 'Run ONNX Tests'
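The new untracked-files guard in the CI step above can be sketched as a standalone helper (a hypothetical function, not part of the repo): the step fails whenever `git ls-files --others --exclude-standard` reports anything.

```python
def check_no_untracked(ls_files_output: str) -> bool:
    """Return True when `git ls-files --others --exclude-standard`
    reported no untracked files, mirroring the CI guard above."""
    return not ls_files_output.strip()

# A clean tree passes; a stray generated test file fails the check.
assert check_no_untracked("\n")
assert not check_no_untracked("onnx/backend/test/data/node/new_test/model.onnx\n")
```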
2 changes: 1 addition & 1 deletion .github/workflows/release_win.yml
@@ -72,7 +72,7 @@ jobs:
if ('${{ github.event_name }}' -eq 'schedule') {
echo "Build weekly PyPI package"
(Get-Content -Path 'pyproject.toml') | ForEach-Object { $_ -replace 'name = "onnx"', 'name = "onnx-weekly"' } | Set-Content -Path 'pyproject.toml'
set ONNX_PREVIEW_BUILD=1
$Env:ONNX_PREVIEW_BUILD=1
}
python -m build --wheel
Get-ChildItem -Path dist/*.whl | foreach {python -m pip install --upgrade $_.fullname}
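The one-line fix above matters because in PowerShell `set ONNX_PREVIEW_BUILD=1` is cmd.exe syntax and does not export an environment variable, while `$Env:ONNX_PREVIEW_BUILD=1` does, so child processes such as `python -m build` actually see it. A minimal Python analogue of that distinction:

```python
import os
import subprocess
import sys

# Writing to os.environ (like PowerShell's $Env:) is inherited by
# child processes; a plain local variable (like cmd-style `set`
# inside PowerShell) would not be.
os.environ["ONNX_PREVIEW_BUILD"] = "1"
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['ONNX_PREVIEW_BUILD'])"],
    capture_output=True, text=True, check=True,
).stdout.strip()
assert out == "1"
```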
5 changes: 4 additions & 1 deletion docs/AddNewOp.md
@@ -74,7 +74,10 @@ Once the criteria of proposing new operator/function has been satisfied, you wil
1. The testing examples will be extracted to the doc.
2. We also generate binary data for it.
3. Example: [onnx/backend/test/case/node/abs.py](/onnx/backend/test/case/node/abs.py)
5. Add at least one automatic upgrade test for your operator in [onnx/test/automatic_upgrade_test.py](/onnx/test/automatic_upgrade_test.py) using `_test_op_upgrade`. These tests create a given operator at a given opset version (usually the version the operator was introduced in) and test that the version converter is able to convert them to the highest available version. So for a new operator `_test_op_upgrade` will not test anything, but as soon as the operator gets updated in a future opset the test will automatically become nontrivial.
5. Write upgrade and downgrade tests:
1. Add at least one automatic upgrade test for your operator in [onnx/test/automatic_upgrade_test.py](/onnx/test/automatic_upgrade_test.py) using `_test_op_upgrade`. These tests create a given operator at a given opset version (usually the version the operator was introduced in) and test that the version converter is able to convert them to the highest available version. So for a new operator `_test_op_upgrade` will not test anything, but as soon as the operator gets updated in a future opset the test will automatically become nontrivial.
    2. Similarly, add at least one automatic downgrade test for your operator in [onnx/test/automatic_downgrade_test.py](/onnx/test/automatic_downgrade_test.py) using `_test_op_downgrade`. Specify the current version so that once the op is updated at a higher opset version, the test will validate the downward conversion.

6. Update the documentation and generate the test data.
1. Running [the script](/tools/update_doc.sh). If you have files under `onnx/backend/test/data/node` which cannot be generated by the scripts from `onnx/backend/test/case/node`, please further use `python onnx/backend/test/cmd_tools.py generate-data --clean` to cleanup the directory and only preserve needed test data.
to update the doc and generate the test data.
14 changes: 7 additions & 7 deletions docs/Changelog.md
@@ -1579,7 +1579,7 @@ This version of the operator has been available since version 1 of the default O

<dl>
<dt><tt>cond</tt> : B</dt>
<dd>Condition for the if</dd>
<dd>Condition for the if. The tensor must contain a single element.</dd>
</dl>
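The tightened wording can be illustrated with a minimal sketch (a hypothetical validator, independent of any ONNX runtime): a condition tensor is acceptable only when it holds exactly one element, whatever its rank.

```python
import math

def validate_if_cond(shape: tuple) -> bool:
    """An If condition tensor must contain a single element:
    shape () or (1,) or (1, 1) all qualify; (2,) does not."""
    if math.prod(shape) != 1:
        raise ValueError(f"If cond must contain a single element, got shape {shape}")
    return True

assert validate_if_cond(())      # scalar
assert validate_if_cond((1, 1))  # single-element 2-D tensor
```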

#### Outputs (1 - &#8734;)
@@ -10985,10 +10985,10 @@ This version of the operator has been available since version 11 of the default
Outputs Scale, ZeroPoint and Quantized Input for a given FP32 Input.
Scale is calculated as:
```
y_scale = (max(x) - min(x))/(qmax - qmin)
y_scale = (maximum(0, max(x)) - minimum(0, min(x))) / (qmax - qmin)
```

* where qmax and qmin are max and min values for quantization range .i.e [0, 255] in case of uint8
* where qmax and qmin are max and min values for quantization range i.e. [0, 255] in case of uint8
* data range is adjusted to include 0.
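The corrected scale formula above can be sketched in plain Python (a hypothetical helper named here for illustration); note how anchoring the range at 0 changes the result for all-positive data.

```python
def dynamic_quant_scale(x, qmin=0, qmax=255):
    """Scale per the corrected formula: the data range is first
    adjusted to include 0, then divided by the quantization range."""
    adjusted_max = max(0.0, max(x))
    adjusted_min = min(0.0, min(x))
    return (adjusted_max - adjusted_min) / (qmax - qmin)

# All-positive data: the old formula (max - min) would give 2/255;
# the corrected one anchors the range at 0 and gives 3/255.
assert abs(dynamic_quant_scale([1.0, 3.0]) - 3.0 / 255) < 1e-12
```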

Zero point is calculated as:
@@ -11525,7 +11525,7 @@ This version of the operator has been available since version 11 of the default

<dl>
<dt><tt>cond</tt> : B</dt>
<dd>Condition for the if</dd>
<dd>Condition for the if. The tensor must contain a single element.</dd>
</dl>

#### Outputs (1 - &#8734;)
@@ -16162,7 +16162,7 @@ This version of the operator has been available since version 13 of the default

<dl>
<dt><tt>cond</tt> : B</dt>
<dd>Condition for the if</dd>
<dd>Condition for the if. The tensor must contain a single element.</dd>
</dl>

#### Outputs (1 - &#8734;)
@@ -20057,7 +20057,7 @@ This version of the operator has been available since version 16 of the default

<dl>
<dt><tt>cond</tt> : B</dt>
<dd>Condition for the if</dd>
<dd>Condition for the if. The tensor must contain a single element.</dd>
</dl>

#### Outputs (1 - &#8734;)
@@ -23069,7 +23069,7 @@ This version of the operator has been available since version 19 of the default

<dl>
<dt><tt>cond</tt> : B</dt>
<dd>Condition for the if</dd>
<dd>Condition for the if. The tensor must contain a single element.</dd>
</dl>

#### Outputs (1 - &#8734;)
6 changes: 3 additions & 3 deletions docs/Operators.md
@@ -7977,10 +7977,10 @@ expect(
Outputs Scale, ZeroPoint and Quantized Input for a given FP32 Input.
Scale is calculated as:
```
y_scale = (max(x) - min(x))/(qmax - qmin)
y_scale = (maximum(0, max(x)) - minimum(0, min(x))) / (qmax - qmin)
```

* where qmax and qmin are max and min values for quantization range .i.e [0, 255] in case of uint8
* where qmax and qmin are max and min values for quantization range i.e. [0, 255] in case of uint8
* data range is adjusted to include 0.

Zero point is calculated as:
@@ -11905,7 +11905,7 @@ Other versions of this operator: <a href="Changelog.md#If-1">1</a>, <a href="Cha

<dl>
<dt><tt>cond</tt> : B</dt>
<dd>Condition for the if</dd>
<dd>Condition for the if. The tensor must contain a single element.</dd>
</dl>

#### Outputs (1 - &#8734;)
Binary file not shown.
2 changes: 1 addition & 1 deletion onnx/defs/controlflow/defs.cc
@@ -26,7 +26,7 @@ ONNX_OPERATOR_SET_SCHEMA(
19,
OpSchema()
.SetDoc("If conditional")
.Input(0, "cond", "Condition for the if", "B")
.Input(0, "cond", "Condition for the if. The tensor must contain a single element.", "B")
.Output(
0,
"outputs",
8 changes: 4 additions & 4 deletions onnx/defs/controlflow/old.cc
@@ -25,7 +25,7 @@ ONNX_OPERATOR_SET_SCHEMA(
16,
OpSchema()
.SetDoc("If conditional")
.Input(0, "cond", "Condition for the if", "B")
.Input(0, "cond", "Condition for the if. The tensor must contain a single element.", "B")
.Output(
0,
"outputs",
@@ -1747,7 +1747,7 @@ ONNX_OPERATOR_SET_SCHEMA(
1,
OpSchema()
.SetDoc("If conditional")
.Input(0, "cond", "Condition for the if", "B")
.Input(0, "cond", "Condition for the if. The tensor must contain a single element.", "B")
.Output(
0,
"outputs",
@@ -1841,7 +1841,7 @@ ONNX_OPERATOR_SET_SCHEMA(
11,
OpSchema()
.SetDoc("If conditional")
.Input(0, "cond", "Condition for the if", "B")
.Input(0, "cond", "Condition for the if. The tensor must contain a single element.", "B")
.Output(
0,
"outputs",
@@ -1933,7 +1933,7 @@ ONNX_OPERATOR_SET_SCHEMA(
13,
OpSchema()
.SetDoc("If conditional")
.Input(0, "cond", "Condition for the if", "B")
.Input(0, "cond", "Condition for the if. The tensor must contain a single element.", "B")
.Output(
0,
"outputs",
111 changes: 2 additions & 109 deletions onnx/defs/generator/defs.cc
@@ -6,6 +6,7 @@
#include <cmath>

#include "onnx/defs/function.h"
#include "onnx/defs/generator/utils.h"
#include "onnx/defs/schema.h"

namespace ONNX_NAMESPACE {
@@ -57,115 +58,7 @@ ONNX_OPERATOR_SET_SCHEMA(
false)
.Output(0, "output", "Output tensor containing the same value of the provided tensor.", "T")
.TypeConstraint("T", OpSchema::all_tensor_types_ir9(), "Constrain input and output types to all tensor types.")
.TypeAndShapeInferenceFunction([](InferenceContext& ctx) {
auto* value = ctx.getAttribute("value");
auto* sparse_value = ctx.getAttribute("sparse_value");
auto* value_int = ctx.getAttribute("value_int");
auto* value_ints = ctx.getAttribute("value_ints");
auto* value_float = ctx.getAttribute("value_float");
auto* value_floats = ctx.getAttribute("value_floats");
auto* value_string = ctx.getAttribute("value_string");
auto* value_strings = ctx.getAttribute("value_strings");

std::vector<bool> non_null_attr = {
(nullptr != value),
(nullptr != sparse_value),
(nullptr != value_int),
(nullptr != value_ints),
(nullptr != value_float),
(nullptr != value_floats),
(nullptr != value_string),
(nullptr != value_strings)};
if (std::count(non_null_attr.begin(), non_null_attr.end(), true) != 1) {
fail_shape_inference(
"One and only one of the attributes 'value', 'value_*' or 'sparse_value' must be specified for a Constant node.");
}

if (nullptr != value) {
// OpSchema::Verify check ensures that the attribute value has_t():
const TensorProto& tensor_proto = value->t();
updateOutputElemType(ctx, 0, tensor_proto.data_type());
updateOutputShape(ctx, 0, tensor_proto);
return;
}

if (nullptr != value_int) {
// OpSchema::Verify check ensures that the attribute value has_i():
if (!value_int->has_i()) {
fail_shape_inference("Attribute 'value_int' expect an integer.")
}
updateOutputElemType(ctx, 0, TensorProto::INT64);
updateOutputShape(ctx, 0, TensorShapeProto());
return;
}

if (nullptr != value_ints) {
// OpSchema::Verify check ensures that the attribute value has ints.
if (value_ints->ints_size() < 1) {
fail_shape_inference("Attribute 'value_ints' expect a list of integers.");
}
updateOutputElemType(ctx, 0, TensorProto::INT64);
appendDim(getOutputShape(ctx, 0), value_ints->ints_size());
return;
}

if (nullptr != value_float) {
// OpSchema::Verify check ensures that the attribute value has_i():
if (!value_float->has_f()) {
fail_shape_inference("Attribute 'value_float' expect a float.");
}
updateOutputElemType(ctx, 0, TensorProto::FLOAT);
updateOutputShape(ctx, 0, TensorShapeProto());
return;
}

if (nullptr != value_floats) {
// OpSchema::Verify check ensures that the attribute value has ints.
if (value_floats->floats_size() < 1) {
fail_shape_inference("Attribute 'value_floats' expect a list of floats.");
}
updateOutputElemType(ctx, 0, TensorProto::FLOAT);
appendDim(getOutputShape(ctx, 0), value_floats->floats_size());
return;
}

if (nullptr != value_string) {
// OpSchema::Verify check ensures that the attribute value has_i():
if (!value_string->has_s()) {
fail_shape_inference("Attribute 'value_string' expect a string.");
}
updateOutputElemType(ctx, 0, TensorProto::STRING);
updateOutputShape(ctx, 0, TensorShapeProto());
return;
}

if (nullptr != value_strings) {
// OpSchema::Verify check ensures that the attribute value has ints.
if (value_strings->strings_size() < 1) {
fail_shape_inference("Attribute 'value_strings' expect a list of strings.");
}
updateOutputElemType(ctx, 0, TensorProto::STRING);
appendDim(getOutputShape(ctx, 0), value_strings->strings_size());
return;
}

if (nullptr != sparse_value) {
// OpSchema::Verify check ensures that the attribute value
// has_sparse_tensor():
const SparseTensorProto& sparse = sparse_value->sparse_tensor();
// checker.cc::check_sparse_tensor checks that the sparse-value is
// well-formed
updateOutputElemType(ctx, 0, sparse.values().data_type());
auto* output_shape = getOutputShape(ctx, 0);
for (int i = 0; i < sparse.dims_size(); ++i)
appendDim(output_shape, sparse.dims(i));
return;
}

fail_shape_inference(
"TypeAndShapeInferenceFunction implementation incomplete: "
"this line should never be reached.");
}));
.TypeAndShapeInferenceFunction(ConstantOpInference));
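The refactor moves the inference body into `ConstantOpInference` (declared in `onnx/defs/generator/utils.h`) without changing behavior. Its first step, requiring exactly one value attribute, can be sketched in Python (hypothetical helper and attribute dict, for illustration only):

```python
def check_constant_attrs(attrs: dict) -> str:
    """Exactly one of the Constant value attributes must be set,
    mirroring the exclusivity check in the C++ code above."""
    value_names = ("value", "sparse_value", "value_int", "value_ints",
                   "value_float", "value_floats", "value_string", "value_strings")
    present = [n for n in value_names if attrs.get(n) is not None]
    if len(present) != 1:
        raise ValueError(
            "One and only one of the attributes 'value', 'value_*' or "
            "'sparse_value' must be specified for a Constant node.")
    return present[0]

assert check_constant_attrs({"value_int": 7}) == "value_int"
```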

static const char* ConstantOfShape_ver20_doc = R"DOC(
Generate a tensor with given value and shape.
