Enable unit test for wasi-nn WinML backend. #8442

Merged 1 commit into bytecodealliance:main on Jun 11, 2024

Conversation

jianjunz (Contributor)

This test was disabled because the GitHub Actions Windows Server image doesn't include the desktop experience. However, a standalone WinML binary can be downloaded from the ONNX Runtime project.

The wasi-nn WinML backend and the ONNX Runtime backend now share the same test code, since they accept the same input and are expected to produce the same result. Pre-processing and post-processing are added to nn_image_classification_onnx to improve its accuracy.

This change also makes the wasi-nn WinML backend a default feature, as it is now covered by tests.

Fixes #8391.

@jianjunz jianjunz requested review from a team as code owners April 23, 2024 09:46
@jianjunz jianjunz requested review from fitzgen and removed request for a team April 23, 2024 09:46
@fitzgen fitzgen requested review from abrown and removed request for fitzgen April 23, 2024 16:53
@devigned devigned (Contributor) left a comment

Before a detailed review, we need to determine whether adding these dependencies is OK.

@@ -20,3 +20,6 @@ futures = { workspace = true, default-features = false, features = ['alloc'] }
url = { workspace = true }
sha2 = "0.10.2"
base64 = "0.21.0"
# image and ndarray are used by nn_image_classification_onnx for image preprocessing.
image = { version = "0.24.6", default-features = false, features = ["jpeg"] }
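
For context, here is a minimal sketch of the kind of preprocessing the new image dependency enables; the 224x224 input size and the ImageNet mean/std constants below are illustrative assumptions, not values taken from this PR:

use image::imageops::FilterType;

// Decode a JPEG, resize it, and emit an NCHW f32 tensor.
// The 224x224 size and the mean/std constants are assumptions for illustration.
fn preprocess(path: &str) -> Result<Vec<f32>, image::ImageError> {
    let rgb = image::open(path)?
        .resize_exact(224, 224, FilterType::Triangle)
        .to_rgb8();
    let (mean, std) = ([0.485f32, 0.456, 0.406], [0.229f32, 0.224, 0.225]);
    let mut tensor = vec![0.0f32; 3 * 224 * 224];
    for (x, y, pixel) in rgb.enumerate_pixels() {
        for c in 0..3 {
            // Channel-major (NCHW) layout, scaled to [0, 1] and normalized.
            tensor[c * 224 * 224 + y as usize * 224 + x as usize] =
                (pixel.0[c] as f32 / 255.0 - mean[c]) / std[c];
        }
    }
    Ok(tensor)
}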
Contributor

We had avoided adding image and ndarray dependencies in favor of using the raw image used in the openvino test. What you did with the dog image is pretty close to what I had originally. @abrown, WDYT?

Contributor Author

The image pre-processing and post-processing are copied from your classification-component-onnx example. If new dependencies are not allowed, we can do the preprocessing offline and just put an RGB file here.

Contributor

Personally, I like showing the image prep / processing as it provides guidance to folks learning how to get started. In fact, it'd probably be helpful to add more comments / documentation to samples to explain what is happening. I think we, as practitioners, may be too accustomed to the transformations we regularly do to realize how odd they may appear to the naive observer.

Contributor Author

My original idea was to use an image as input for all backends, so we can cross-check the results for correctness.

To avoid introducing extra dependencies, the image is replaced with preprocessed tensor data (000000062808.rgb). I believe this is the same image used for the openvino backend (tensorization for openvino).
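
For readers following along, consuming such a file could look like the sketch below, assuming the .rgb file stores the tensor as raw little-endian f32 bytes (the actual on-disk layout is not spelled out in this thread):

// Hypothetical loader: reinterpret raw bytes as f32 tensor values.
fn load_tensor(bytes: &[u8]) -> Vec<f32> {
    bytes
        .chunks_exact(4)
        .map(|b| f32::from_le_bytes([b[0], b[1], b[2], b[3]]))
        .collect()
}

A test could then embed the fixture with something like load_tensor(include_bytes!("000000062808.rgb")), with the path adjusted to wherever the file actually lives.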

@jianjunz jianjunz force-pushed the winml-ci branch 4 times, most recently from cd8785a to f1b030c on April 29, 2024 08:29
@elliottt elliottt removed the request for review from a team May 2, 2024 20:13
@jianjunz jianjunz force-pushed the winml-ci branch 7 times, most recently from 7e52717 to 78f0450 on June 5, 2024 08:14
This test was disabled because the GitHub Actions Windows Server image
doesn't include the desktop experience. But it looks like we can download
a standalone WinML binary from the ONNX Runtime project.

Wasi-nn WinML backend and ONNX Runtime backend now share the same test
code as they accept the same input, and they are expected to produce the
same result.

This change also makes the wasi-nn WinML backend a default feature.

prtest:full
let mut results: Vec<InferenceResult> = probabilities
    .iter()
    .skip(1)
Contributor Author

Removing this line because it's likely a workaround for this specific openvino model only. If mobilenet-v1-0.25-128 is the model used in the openvino test, it may have an additional class 0 for background. The output shape (1, 1001) also shows it has one more value than the ONNX model's (1, 1000).
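
For reference, the surrounding ranking logic amounts to something like the sketch below, with a plain (index, probability) tuple standing in for InferenceResult, whose definition is not shown here. Without the .skip(1), index i lines up directly with class i of the 1000-class ONNX output; the openvino model's extra background class is what made the offset necessary.

// Pair each class index with its probability and sort descending,
// so results[0] is the top-1 prediction. (A tuple stands in for the
// InferenceResult type used by the actual test.)
fn rank(probabilities: &[f32]) -> Vec<(usize, f32)> {
    let mut results: Vec<(usize, f32)> =
        probabilities.iter().copied().enumerate().collect();
    results.sort_by(|a, b| b.1.total_cmp(&a.1));
    results
}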

@devigned devigned (Contributor) left a comment

LGTM! Test results are the same across the impls.

Thank you for documenting how the raw image was created. Great work!

@abrown tag, you're it

@abrown abrown (Collaborator) left a comment

LGTM!

@abrown abrown added this pull request to the merge queue Jun 11, 2024
Merged via the queue into bytecodealliance:main with commit 881191a Jun 11, 2024
119 checks passed
Linked issue: #8391, "Inconsistent results for wasi-nn different backends"