
Inconsistent results for wasi-nn different backends #8391

Open
jianjunz opened this issue Apr 17, 2024 · 4 comments · May be fixed by #8442
Labels
bug Incorrect behavior in the current implementation that needs fixing

Comments

@jianjunz
Contributor

Test Case

The test cases nn_image_classification, nn_image_classification_named, and nn_image_classification_onnx produce different inference results. The first two use the OpenVINO backend, while the third uses the ONNX Runtime backend.

Steps to Reproduce

Run the cases above locally or check the output of GitHub Actions.

An example output:
https://github.com/bytecodealliance/wasmtime/actions/runs/8716252474/job/23909494323#step:17:3914

Expected Results

Since all of these cases use MobileNet v2 and the same input tensor data, they should produce similar results. We don't expect the results to be exactly the same because the model formats differ.
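One way to make "similar results" concrete in a test is to compare the class IDs in each backend's top-k list rather than the raw scores. A minimal sketch, assuming the results are (class_id, score) pairs sorted by descending score, mirroring the InferenceResult output shown below (the helper name is hypothetical, not from the wasmtime test code):

```python
def topk_overlap(a, b, k=5):
    """Fraction of class IDs shared between the top-k results of two backends.

    `a` and `b` are lists of (class_id, score) pairs, assumed sorted by
    descending score, like the InferenceResult output in the tests.
    """
    ids_a = {class_id for class_id, _ in a[:k]}
    ids_b = {class_id for class_id, _ in b[:k]}
    return len(ids_a & ids_b) / k
```

With the outputs reported in this issue, the overlap is 0/5, which is what signals the bug; with consistent preprocessing, the top-k class IDs should mostly agree even if the scores differ.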

Actual Results

The result of the wasi-nn OpenVINO backend is [InferenceResult(963, 0.7113049), InferenceResult(762, 0.07070768), InferenceResult(909, 0.036356032), InferenceResult(926, 0.015456118), InferenceResult(567, 0.015344023)].

The result of the ONNX backend is [InferenceResult(470, 479.08182), InferenceResult(862, 378.7252), InferenceResult(626, 364.8759), InferenceResult(644, 334.28488), InferenceResult(556, 288.65884)].

CI for the WinML backend is not enabled yet, but it produces the same result as the ONNX backend.

Versions and Environment

Wasmtime version or commit: 19.0.1

Operating system: Windows

Architecture: x86_64

Extra Info

Although all of these tests use MobileNet v2, the OpenVINO model requires input data in BGR format with mean values [127.5, 127.5, 127.5] (OpenVINO model description). The ONNX model (used by both the ONNX Runtime backend and the WinML backend) requires input data in RGB format, scaled to the range [0, 1] and then normalized with mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225] (ONNX model description).

I'm not sure whether we missed the input data preprocessing, or it was done somewhere else.
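For reference, the two preprocessing schemes described in the model documentation can be sketched as follows. This is a minimal illustration in numpy; the NCHW layout, dtype, and helper names are assumptions for clarity, not taken from the wasmtime test code:

```python
import numpy as np

def preprocess_openvino(rgb):
    """MobileNet v2 (OpenVINO IR): BGR channel order, per-channel mean 127.5."""
    bgr = rgb[..., ::-1].astype(np.float32)          # RGB -> BGR
    bgr -= 127.5                                     # subtract mean
    return bgr.transpose(2, 0, 1)[np.newaxis, ...]   # HWC -> NCHW

def preprocess_onnx(rgb):
    """MobileNet v2 (ONNX): RGB in [0, 1], normalized with ImageNet stats."""
    x = rgb.astype(np.float32) / 255.0               # scale to [0, 1]
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = (x - mean) / std                             # normalize per channel
    return x.transpose(2, 0, 1)[np.newaxis, ...]     # HWC -> NCHW
```

Feeding one model's preprocessed tensor to the other model would explain results that differ this drastically.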

The test case for the WinML backend currently uses a different input. I'm trying to unify the inputs for all backends so we can double-check correctness.

@jianjunz jianjunz added the bug Incorrect behavior in the current implementation that needs fixing label Apr 17, 2024
@abrown
Collaborator

abrown commented Apr 18, 2024

cc: @devigned

@jianjunz
Contributor Author

The wasi-nn example classification-component-onnx pre-processes its images here. Applying the same processing to the test inputs should fix this issue for the ONNX Runtime and WinML backends.

@devigned
Contributor

devigned commented Apr 22, 2024

Nice catch, @jianjunz! Thank you for opening this issue.

I'm happy to open a PR for this, but if you are interested, I'd gladly review a PR from you. Are you interested in contributing?

@jianjunz jianjunz linked a pull request Apr 23, 2024 that will close this issue
@jianjunz
Contributor Author

Thanks, David. #8442 is opened to fix this issue. It also makes the ONNX Runtime backend and the WinML backend share the same test code, since both use ONNX models.
