
Task overlapped with armnn delegate #15

Open
ek9852 opened this issue Jan 20, 2021 · 5 comments

Comments

@ek9852

ek9852 commented Jan 20, 2021

We should have a CMake build for this project that allows it to be built and run on pure Linux, producing a libneuralnetworks.so directly. TensorFlow Lite's NNAPI delegate will dlopen libneuralnetworks.so in a pure Linux (non-Android) environment.

There is then no need for a duplicated, separate delegate for TensorFlow Lite:
https://github.com/ARM-software/armnn/tree/branches/armnn_20_11/delegate

@MatthewARM
Collaborator

I think for non-Android, using a direct delegate is quite a bit simpler to set up than getting NNAPI and the NN HAL driver service to work.

Using the direct delegate will also work on older Android devices where NNAPI is not present, or where only an old version is, so it can have advantages even on Android, especially for third-party applications which have no control over which versions of the NN HAL drivers are installed.

What advantages do you see NNAPI having over using a direct delegate?

@ek9852
Author

ek9852 commented Jan 20, 2021

1.) We don't need to modify the application to use NNAPI. For example, I can just download the prebuilt TensorFlow Lite benchmark tool and test performance without any application code changes.

2.) We don't need an NN HAL driver service on non-Android.
All that is needed is for android-nn-driver to produce a libneuralnetworks.so for TensorFlow Lite to dlopen; TensorFlow Lite already supports NNAPI on native Linux.

3.) I don't think the setup is harder with NNAPI than with the direct delegate. We only need to build and install libneuralnetworks.so into /usr/lib; TensorFlow Lite will dlopen it automatically if it is found.

4.) There are still unimplemented layers in both android-nn-driver and the direct delegate, though the android-nn-driver implementation seems to be more complete.
Right now, if a layer cannot run on the GPU it falls back to the CPU, and a single model can be broken into multiple partitions. The model can still run, but ends up with poor performance because of the latency and data transfer between GPU and CPU for those CPU-fallback layers. I think we should focus development time on implementing those missing layers first. I am currently benchmarking with the direct delegate, and performance is poor for models with unimplemented accelerated layers. These are the models I am testing with:
https://ai-benchmark.com/tests.html

5.) For older Android support, applications can call Arm NN directly.

@ek9852
Author

ek9852 commented Jan 20, 2021

There is nothing wrong with having both a direct delegate and an NNAPI backend for the NNAPI delegate.
But then we have to maintain two different codebases, which leaves users confused about which one to use. I just want to have a constructive discussion here.

Some work is needed to build android-nn-driver for pure Linux:
since pure Linux does not have libneuralnetworks_common, we need to write (or copy) the ANeuralNetworksModel_* APIs from libneuralnetworks_common for a pure Linux implementation.
We also need a wrapper for the Android logging mechanism in a pure Linux environment.

@ek9852
Author

ek9852 commented Mar 26, 2021

@MatthewARM FYI, Google already has an out-of-Android port of NNAPI (Chrome OS support), which should be usable on pure Linux directly:
https://chromium.googlesource.com/chromiumos/platform2/+/master/nnapi

@MatthewARM
Collaborator

While I agree that NNAPI could be made to work on (non-Android) Linux, it doesn't seem to be a supported or promoted option within the NNAPI project; for example, I've not seen any official guides on how to set it up.

So for the moment the path of least resistance for the Arm NN project is to have an NNAPI driver for Android system integrators, and a TensorFlow Lite delegate for other kinds of deployment.
