add pnp domain with one op: LinalgSVD #5821

Status: Open — wants to merge 1 commit into main
42 changes: 42 additions & 0 deletions docs/Changelog.md
@@ -24693,6 +24693,48 @@ This version of the operator has been available since version 21 of the default
<dd>The type of the input 'y_zero_point' and the output 'y'.</dd>
</dl>

# ai.onnx.pnp
## Version 1 of the 'ai.onnx.pnp' operator set
### <a name="ai.onnx.pnp.LinalgSVD-1"></a>**ai.onnx.pnp.LinalgSVD-1**</a>

For internal use.

#### Version

This version of the operator has been available since version 1 of the 'ai.onnx.pnp' operator set.

#### Attributes

<dl>
<dt><tt>full_matrices</tt> : int (default is 1)</dt>
<dd></dd>
</dl>

#### Inputs

<dl>
<dt><tt>A</tt> : T</dt>
<dd></dd>
</dl>

#### Outputs

<dl>
<dt><tt>U</tt> : T</dt>
<dd></dd>
<dt><tt>S</tt> : T</dt>
<dd></dd>
<dt><tt>Vh</tt> : T</dt>
<dd></dd>
</dl>

#### Type Constraints

<dl>
<dt><tt>T</tt> : tensor(float), tensor(double)</dt>
<dd></dd>
</dl>

# ai.onnx.preview.training
## Version 1 of the 'ai.onnx.preview.training' operator set
### <a name="ai.onnx.preview.training.Adagrad-1"></a>**ai.onnx.preview.training.Adagrad-1**</a>
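Note: the spec above leaves the attribute and I/O descriptions empty. Assuming LinalgSVD matches numpy's `np.linalg.svd` — which the test case added later in this PR implies, though the spec does not say so — `full_matrices` would control the output shapes like this:

```python
import numpy as np

# Sketch only: assumes LinalgSVD mirrors np.linalg.svd.
A = np.zeros((2, 3, 4))  # batch B=2, M=3, N=4, so K = min(M, N) = 3

# full_matrices=1 (the default): U is (B, M, M), S is (B, K), Vh is (B, N, N).
U, S, Vh = np.linalg.svd(A, full_matrices=True)
assert U.shape == (2, 3, 3) and S.shape == (2, 3) and Vh.shape == (2, 4, 4)

# full_matrices=0 (reduced SVD): U is (B, M, K), Vh is (B, K, N).
U, S, Vh = np.linalg.svd(A, full_matrices=False)
assert U.shape == (2, 3, 3) and S.shape == (2, 3) and Vh.shape == (2, 3, 4)
```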
81 changes: 81 additions & 0 deletions docs/Operators.md
@@ -206,6 +206,11 @@ For an operator input/output's differentiability, it can be differentiable,
|<a href="#Softsign">Softsign</a>|<a href="Changelog.md#Softsign-1">1</a>|18|
|<a href="#ThresholdedRelu">ThresholdedRelu</a>|<a href="Changelog.md#ThresholdedRelu-10">10</a>|18|

### ai.onnx.pnp
|**Operator**|**Since version**||
|-|-|-|
|<a href="#ai.onnx.pnp.LinalgSVD">ai.onnx.pnp.LinalgSVD</a>|<a href="Changelog.md#ai.onnx.pnp.LinalgSVD-1">1</a>|

### ai.onnx.preview.training
|**Operator**|**Since version**||
|-|-|-|
@@ -34682,6 +34687,82 @@ expect(node, inputs=[x, y], outputs=[z], name="test_xor_bcast4v4d")
</details>


## ai.onnx.pnp
### <a name="ai.onnx.pnp.LinalgSVD"></a><a name="ai.onnx.pnp.linalgsvd">**ai.onnx.pnp.LinalgSVD**</a>

For internal use.

#### Version

This version of the operator has been available since version 1 of the 'ai.onnx.pnp' operator set.

#### Attributes

<dl>
<dt><tt>full_matrices</tt> : int (default is 1)</dt>
<dd></dd>
</dl>

#### Inputs

<dl>
<dt><tt>A</tt> : T</dt>
<dd></dd>
</dl>

#### Outputs

<dl>
<dt><tt>U</tt> : T</dt>
<dd></dd>
<dt><tt>S</tt> : T</dt>
<dd></dd>
<dt><tt>Vh</tt> : T</dt>
<dd></dd>
</dl>

#### Type Constraints

<dl>
<dt><tt>T</tt> : tensor(float), tensor(double)</dt>
<dd></dd>
</dl>


#### Examples

<details>
<summary>linalgsvd</summary>

```python
threshold = 1.0
node = onnx.helper.make_node(
    "LinalgSVD",
    inputs=["A"],
    outputs=["U", "S", "Vh"],
    threshold=threshold,
    domain="ai.onnx.pnp",
)

A = np.array([
    [
        [-1.125840, -1.152360, -0.250579, -0.433879],
        [0.848710, 0.692009, -0.316013, -2.115219],
        [0.468096, -0.157712, 1.443660, 0.266049],
    ],
    [
        [0.166455, 0.874382, -0.143474, -0.111609],
        [0.931827, 1.259009, 2.004981, 0.053737],
        [0.618057, -0.412802, -0.841065, -2.316042],
    ],
])
U, S, Vh = np.linalg.svd(A, full_matrices=True)
expect(node, inputs=[A], outputs=[U, S, Vh], name="test_ai_onnx_pnp_linalg_svd")
```

</details>


## ai.onnx.preview.training
### <a name="ai.onnx.preview.training.Adagrad"></a><a name="ai.onnx.preview.training.adagrad">**ai.onnx.preview.training.Adagrad**</a>

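As a sanity check on the example above (a sketch, not part of the PR), the three outputs reconstruct `A`; with `full_matrices=True` only the leading K = min(M, N) columns of `U` and rows of `Vh` participate:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3, 4))
U, S, Vh = np.linalg.svd(A, full_matrices=True)

# A == U[..., :K] @ diag(S) @ Vh[..., :K, :], batched over the leading axis.
K = min(A.shape[-2], A.shape[-1])
A_rebuilt = np.einsum("...ik,...k,...kj->...ij", U[..., :K], S, Vh[..., :K, :])
np.testing.assert_allclose(A_rebuilt, A, atol=1e-12)
```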
36 changes: 35 additions & 1 deletion docs/TestCoverage.md
@@ -6,7 +6,7 @@
* [Overall Test Coverage](#overall-test-coverage)
# Node Test Coverage
## Summary
Node tests have covered 179/192 (93.23%, 5 generators excluded) common operators.
Node tests have covered 180/193 (93.26%, 5 generators excluded) common operators.

Node tests have covered 0/0 (N/A) experimental operators.

@@ -9109,6 +9109,40 @@ expect(node, inputs=[x, y], outputs=[z], name="test_less_equal_bcast")
</details>


### LinalgSVD
There is 1 test case, listed as follows:
<details>
<summary>linalgsvd</summary>

```python
threshold = 1.0
node = onnx.helper.make_node(
    "LinalgSVD",
    inputs=["A"],
    outputs=["U", "S", "Vh"],
    threshold=threshold,
    domain="ai.onnx.pnp",
)

A = np.array([
    [
        [-1.125840, -1.152360, -0.250579, -0.433879],
        [0.848710, 0.692009, -0.316013, -2.115219],
        [0.468096, -0.157712, 1.443660, 0.266049],
    ],
    [
        [0.166455, 0.874382, -0.143474, -0.111609],
        [0.931827, 1.259009, 2.004981, 0.053737],
        [0.618057, -0.412802, -0.841065, -2.316042],
    ],
])
U, S, Vh = np.linalg.svd(A, full_matrices=True)
expect(node, inputs=[A], outputs=[U, S, Vh], name="test_ai_onnx_pnp_linalg_svd")
```

</details>


### Log
There is 1 test case, listed as follows:
<details>
Empty file.
37 changes: 37 additions & 0 deletions onnx/backend/test/case/node/ai_onnx_pnp/linalg_svd.py
@@ -0,0 +1,37 @@
# Copyright (c) ONNX Project Contributors

**Check warning** (Code scanning / lintrunner): RUFF/format — run `lintrunner -a` to apply this patch.

**Check warning** (Code scanning / lintrunner): BLACK-ISORT/format — run `lintrunner -a` to apply this patch.

# SPDX-License-Identifier: Apache-2.0

import numpy as np

import onnx
from onnx.backend.test.case.base import Base
from onnx.backend.test.case.node import expect


class LinalgSVD(Base):
    @staticmethod
    def export() -> None:
        threshold = 1.0
        node = onnx.helper.make_node(
            "LinalgSVD",
            inputs=["A"],
            outputs=["U", "S", "Vh"],
            threshold=threshold,
**Contributor:** What is threshold?

domain="ai.onnx.pnp",
)

A = np.array([
[
[-1.125840, -1.152360, -0.250579, -0.433879],
[0.848710, 0.692009, -0.316013, -2.115219],
[0.468096, -0.157712, 1.443660, 0.266049],
],
[
[0.166455, 0.874382, -0.143474, -0.111609],
[0.931827, 1.259009, 2.004981, 0.053737],
[0.618057, -0.412802, -0.841065, -2.316042]
]
])
U, S, Vh = np.linalg.svd(A, full_matrices=True)

**Check warning** (Code scanning / lintrunner): EDITORCONFIG-CHECKER/editorconfig — trailing whitespace.

**Check warning** (Code scanning / lintrunner): RUFF/W291 — trailing whitespace.

        expect(node, inputs=[A], outputs=[U, S, Vh], name="test_ai_onnx_pnp_linalg_svd")
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
2 changes: 1 addition & 1 deletion onnx/checker.cc
@@ -656,7 +656,7 @@ void check_node(const NodeProto& node, const CheckerContext& ctx, const LexicalS
const auto* schema = ctx.get_schema_registry()->GetSchema(node.op_type(), domain_version, node.domain());
if (!schema) {
if (node.domain() == ONNX_DOMAIN || node.domain() == AI_ONNX_ML_DOMAIN || node.domain() == "ai.onnx" ||
node.domain() == AI_ONNX_TRAINING_DOMAIN) {
node.domain() == AI_ONNX_TRAINING_DOMAIN || node.domain() == AI_ONNX_PNP_DOMAIN) {
// fail the checker if op in built-in domains has no schema
fail_check(
"No Op registered for " + node.op_type() + " with domain_version of " +
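The practical effect of adding AI_ONNX_PNP_DOMAIN here is that an op in ai.onnx.pnp with no registered schema now fails the checker instead of being passed over. A hypothetical repro (`NoSuchOp` is an illustrative name, not a real op):

```python
import onnx
from onnx import TensorProto, helper

node = helper.make_node("NoSuchOp", ["X"], ["Y"], domain="ai.onnx.pnp")
graph = helper.make_graph(
    [node],
    "g",
    [helper.make_tensor_value_info("X", TensorProto.FLOAT, [2, 2])],
    [helper.make_tensor_value_info("Y", TensorProto.FLOAT, [2, 2])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("ai.onnx.pnp", 1)])

# With this change the checker should raise "No Op registered for NoSuchOp ...".
onnx.checker.check_model(model)
```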
1 change: 1 addition & 0 deletions onnx/common/constants.h
@@ -13,6 +13,7 @@ namespace ONNX_NAMESPACE {
constexpr const char* AI_ONNX_ML_DOMAIN = "ai.onnx.ml";
constexpr const char* AI_ONNX_TRAINING_DOMAIN = "ai.onnx.training";
constexpr const char* AI_ONNX_PREVIEW_TRAINING_DOMAIN = "ai.onnx.preview.training";
constexpr const char* AI_ONNX_PNP_DOMAIN = "ai.onnx.pnp";
// The following two are equivalent in an onnx proto representation.
constexpr const char* ONNX_DOMAIN = "";
constexpr const char* AI_ONNX_DOMAIN = "ai.onnx";
2 changes: 2 additions & 0 deletions onnx/defs/__init__.py
@@ -7,6 +7,7 @@
"ONNX_DOMAIN",
"ONNX_ML_DOMAIN",
"AI_ONNX_PREVIEW_TRAINING_DOMAIN",
"AI_ONNX_PNP_DOMAIN",
"has",
"get_schema",
"get_all_schemas",
@@ -25,6 +26,7 @@
ONNX_DOMAIN = ""
ONNX_ML_DOMAIN = "ai.onnx.ml"
AI_ONNX_PREVIEW_TRAINING_DOMAIN = "ai.onnx.preview.training"
AI_ONNX_PNP_DOMAIN = "ai.onnx.pnp"
**Contributor:** Just thinking aloud: I wonder if "ai.onnx.linalg" would be better?

**Contributor:** I would think ai.onnx.linalg is a clearer domain name.



has = C.has_schema
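Once the constant is exported and the schemas registered, the new domain should be queryable from Python. A minimal sketch, assuming the existing `onnx.defs` helpers behave for this domain as they do for the others:

```python
import onnx.defs

# Assumes the LinalgSVD schema added in this PR is registered.
schema = onnx.defs.get_schema("LinalgSVD", domain=onnx.defs.AI_ONNX_PNP_DOMAIN)
print(schema.name, schema.domain, schema.since_version)  # LinalgSVD ai.onnx.pnp 1

# List every op registered under the new domain.
for s in onnx.defs.get_all_schemas():
    if s.domain == onnx.defs.AI_ONNX_PNP_DOMAIN:
        print(s.name)
```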
31 changes: 31 additions & 0 deletions onnx/defs/operator_sets_pnp.h
@@ -0,0 +1,31 @@
/*
* SPDX-License-Identifier: Apache-2.0
*/

#pragma once

#include "onnx/defs/schema.h"

namespace ONNX_NAMESPACE {

// Declare PNP operators.

class ONNX_PNP_OPERATOR_SET_SCHEMA_CLASS_NAME(1, LinalgSVD);

// Iterate over schema from ai.onnx.pnp version 1
class OpSet_OnnxPNP_ver1 {
 public:
  static void ForEachSchema(std::function<void(OpSchema&&)> fn) {

**Check warning** (GitHub Actions / clang-tidy-review, line 18 in onnx/defs/operator_sets_pnp.h): parameter name 'fn' is too short, expected at least 3 characters [readability-identifier-length].
    fn(GetOpSchema<ONNX_PNP_OPERATOR_SET_SCHEMA_CLASS_NAME(1, LinalgSVD)>());
  }
};

// Register PNP operators.
inline void RegisterOnnxPNPOperatorSetSchema() {
  // PNP operators should have only one version.
  // If changes are needed for a specific PNP operator,
  // its spec should be modified without increasing its version.
  RegisterOpSetSchema<OpSet_OnnxPNP_ver1>();
}
}

} // namespace ONNX_NAMESPACE

**Check warning** (GitHub Actions / Optional Lint, line 31 in onnx/defs/operator_sets_pnp.h): cpplint — at least two spaces is best between code and comments [whitespace/comments].
75 changes: 75 additions & 0 deletions onnx/defs/pnp/def.cc
@@ -0,0 +1,75 @@
**Check warning** (Code scanning / lintrunner): CLANGFORMAT/format — see https://clang.llvm.org/docs/ClangFormat.html; run `lintrunner -a` to apply this patch.

**Check warning** (Code scanning / lintrunner): EDITORCONFIG-CHECKER/editorconfig — final newline expected.

/*
 * SPDX-License-Identifier: Apache-2.0
 */

#include <algorithm>
#include <cmath>
#include <numeric>

#include "onnx/defs/schema.h"

namespace ONNX_NAMESPACE {

ONNX_PNP_OPERATOR_SET_SCHEMA(LinalgSVD, 1,

**Check warning** (GitHub Actions / clang-tidy-review, line 13 in onnx/defs/pnp/def.cc): conditional operator with identical true and false expressions [bugprone-branch-clone], expanded from the ONNX_PNP_OPERATOR_SET_SCHEMA / ONNX_OPERATOR_SET_SCHEMA_EX macros in onnx/defs/schema.h.

**Check warning** (GitHub Actions / clang-tidy-review, line 13 in onnx/defs/pnp/def.cc): variable 'dbg_count_check_LinalgSVD_OnnxPNP_ver1' is non-const and globally accessible, consider making it const [cppcoreguidelines-avoid-non-const-global-variables].
**Contributor:** We could just call it "SVD". Especially if we change the domain to "ai.onnx.linalg" as discussed elsewhere.

    OpSchema()
        .SetDoc(R"DOC(For internal use.)DOC")
**Contributor:** That would be great to have a longer description. onnx runtimes do not need to declare an operator in onnx to implement a kernel. Is it needed for onnx?

**Contributor:** +1. I don't understand what "internal use" means here.

        .Attr(
            "full_matrices",
            "",
            AttributeProto::INT,
            static_cast<int64_t>(1))
        .Input(
            0,
            "A",
            "",
            "T")
        .Output(
            0,
            "U",
            "",
            "T")
        .Output(
            1,
            "S",
            "",
            "T")
        .Output(
            2,
            "Vh",
            "",
            "T")
        .TypeConstraint(
            "T",
            {"tensor(float)", "tensor(double)"},
            "")
        .TypeAndShapeInferenceFunction([](ONNX_NAMESPACE::InferenceContext& ctx) {
          ONNX_NAMESPACE::propagateElemTypeFromInputToOutput(ctx, 0, 0);
          ONNX_NAMESPACE::propagateElemTypeFromInputToOutput(ctx, 0, 1);
          ONNX_NAMESPACE::propagateElemTypeFromInputToOutput(ctx, 0, 2);
          int64_t full_matrices = ctx.getAttribute("full_matrices")->i();

          const TensorShapeProto& A_shape = ctx.getInputType(0)->tensor_type().shape();
**Contributor:** These lines should be protected by the usual hasNInputShape or hasInputShape.

          const auto& M = A_shape.dim(A_shape.dim_size() - 2);
          const auto& N = A_shape.dim(A_shape.dim_size() - 1);
          if (!M.has_dim_value() || !N.has_dim_value()) {
            // Cannot do shape inference without knowing the dimension values.
            return;
          }
          const auto& K = M.dim_value() < N.dim_value() ? M : N;
          auto* u_shape = ctx.getOutputType(0)->mutable_tensor_type()->mutable_shape();
          auto* s_shape = ctx.getOutputType(1)->mutable_tensor_type()->mutable_shape();
          auto* v_shape = ctx.getOutputType(2)->mutable_tensor_type()->mutable_shape();
          if (A_shape.dim_size() == 3) {
**Contributor:** These shape aspects should be explained in the operator documentation: I guess the input is required to be either 2-dim or 3-dim?

            const auto& batch_dim = A_shape.dim(0);
            *u_shape->add_dim() = batch_dim;
            *s_shape->add_dim() = batch_dim;
            *v_shape->add_dim() = batch_dim;
          }
          *u_shape->add_dim() = M;
          *u_shape->add_dim() = full_matrices ? M : K;
          *s_shape->add_dim() = K;
          *v_shape->add_dim() = full_matrices ? N : K;
          *v_shape->add_dim() = N;
        }));

} // namespace ONNX_NAMESPACE

**Check warning** (GitHub Actions / Optional Lint, line 75 in onnx/defs/pnp/def.cc): cpplint — at least two spaces is best between code and comments [whitespace/comments].

**Check warning** (GitHub Actions / Optional Lint, line 75 in onnx/defs/pnp/def.cc): cpplint — could not find a newline character at the end of the file [whitespace/ending_newline].
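The inference function above encodes the shape rules U: [B?, M, full_matrices ? M : K], S: [B?, K], Vh: [B?, full_matrices ? N : K, N], with K = min(M, N) and an optional batch dimension when the input is 3-D. A small Python mirror of that logic (a sketch, not part of the PR) checks out against numpy:

```python
import numpy as np

def linalg_svd_shapes(a_shape, full_matrices=True):
    """Mirror of the C++ shape inference; a_shape is (M, N) or (B, M, N)."""
    *batch, m, n = a_shape
    k = min(m, n)
    u = (*batch, m, m if full_matrices else k)
    s = (*batch, k)
    vh = (*batch, n if full_matrices else k, n)
    return u, s, vh

a = np.zeros((2, 3, 4))
for full in (True, False):
    u, s, vh = np.linalg.svd(a, full_matrices=full)
    assert (u.shape, s.shape, vh.shape) == linalg_svd_shapes(a.shape, full)
```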
4 changes: 4 additions & 0 deletions onnx/defs/schema.cc
@@ -12,6 +12,7 @@
#include "onnx/defs/operator_sets.h"
#include "onnx/defs/operator_sets_preview.h"
#include "onnx/defs/operator_sets_training.h"
#include "onnx/defs/operator_sets_pnp.h"

#ifdef ONNX_ML
#include "onnx/defs/operator_sets_ml.h"
@@ -1078,6 +1079,9 @@ OpName_Domain_Version_Schema_Map& OpSchemaRegistry::map() {
// Invoke register of experimental operators.
RegisterOnnxPreviewOperatorSetSchema();

// Invoke register of PNP operators.
RegisterOnnxPNPOperatorSetSchema();

#ifndef NDEBUG
size_t dbg_registered_schema_count = GetRegisteredSchemaCount() - dbg_initial_schema_count;
// Check enabled only if schemas for all opset versions are loaded