
[Performance]: VariadicSplit Op's CPU time is different between 2024.0.0 and 2023.0.0 #24288

Open
sitabulaixizawaluduo opened this issue Apr 29, 2024 · 6 comments
Assignees
Labels
category: CPU (OpenVINO CPU plugin) · performance (Performance related topics) · support_request

Comments

@sitabulaixizawaluduo

sitabulaixizawaluduo commented Apr 29, 2024

OpenVINO Version

2024.0.0

Operating System

Ubuntu 22.04 (LTS)

Device used for inference

CPU

OpenVINO installation

Build from source

Programming Language

Python

Hardware Architecture

x86 (64 bits)

Model used

recommend

Model quantization

No

Target Platform

No response

Performance issue description

After changing the OpenVINO version from 2023.0.0 to 2024.0.0, I used benchmark_app to test my model's performance with the hint set to "throughput", and FPS decreased from 952 to 878. Reviewing the per-layer performance data, I found that the "VariadicSplit" operation had a CPU time of 0 in version 2023.0.0, which is not the case in 2024.0.0. What could be the reason for this?

Step-by-step reproduction

No response

Issue submission checklist

  • I'm reporting a performance issue. It's not a question.
  • I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
  • There is reproducer code and related data files such as images, videos, models, etc.
@avitial avitial added the category: CPU OpenVINO CPU plugin label May 2, 2024
@YuChern-Intel YuChern-Intel self-assigned this May 3, 2024
@YuChern-Intel

Please ensure that you're using the same benchmark parameters when comparing the performance between two different versions of benchmark_app.
For example, -nireq, -nstreams, -nthreads

If the scenario is still the same, please share your relevant model files.
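A minimal sketch of what "same parameters" means here (the model path, report folder, and flag values are illustrative; use the benchmark_app shipped with each install):

```shell
# Run the identical configuration under each OpenVINO install
# (2023.0.0 and 2024.0.0); only the environment should differ.
benchmark_app -m model.xml -hint throughput \
    -nireq 24 -nstreams 24 -nthreads 24 \
    -report_type detailed_counters -report_folder results_2024
```

The detailed_counters report is what exposes per-operation CPU time, such as the VariadicSplit numbers being compared here.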

@sitabulaixizawaluduo
Author

Please ensure that you're using the same benchmark parameters when comparing the performance between two different versions of benchmark_app. For example, -nireq, -nstreams, -nthreads

If the scenario is still the same, please share your relevant model files.

I have set `-nireq 24 -nstreams 24 -nthreads 24` for both versions, but the result is the same as before.

@YuChern-Intel

Could you share your relevant model files?

@sitabulaixizawaluduo
Author

Could you share your relevant model files?

```python
import numpy as np
import onnx
from onnx import helper, TensorProto

# 96 split sizes along axis 1; they sum to 279, the size of that axis.
index = [1, 1, 1, 1, 1, 1, 1, 1, 10, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 30, 30, 30, 1, 1, 1, 1, 1, 1, 1, 1, 30, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 30, 1, 30, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
split = np.array(index).astype(np.int32)

input_1 = helper.make_tensor_value_info('input_1', TensorProto.FLOAT, [256, 279, 81])
initializers = [helper.make_tensor(
    name='split',
    data_type=TensorProto.INT32,
    dims=[96],
    vals=split.flatten().tolist())]

outputs_list = [
    helper.make_tensor_value_info('output_' + str(i + 1), TensorProto.FLOAT,
                                  [256, index[i], 81])
    for i in range(96)
]

# Split with an explicit 'split' input and 96 outputs;
# OpenVINO maps this to a VariadicSplit operation.
node_def = helper.make_node(
    "Split",
    inputs=["input_1", "split"],
    outputs=["output_" + str(i + 1) for i in range(96)],
    axis=1,
)
graph_def = helper.make_graph(
    [node_def],
    'test-model',
    [input_1],
    outputs_list,
    initializer=initializers,
)
model_def = helper.make_model(graph_def, producer_name='onnx-example',
                              opset_imports=[helper.make_opsetid("", 13)])
onnx.checker.check_model(model_def)
onnx.save(model_def, "signal_split_13_new.onnx")
```
Thanks for the reply! You can create an ONNX file with this code, then use `mo` to convert it to an OpenVINO IR file.
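As a quick sanity check of the reproducer above (the sizes are copied from the script, written in run-length form for brevity), the 96 split sizes must add up to 279, the extent of axis 1 of `input_1`:

```python
# Split sizes from the reproducer, in run-length form.
index = [1] * 8 + [10] + [1] * 15 + [30] * 3 + [1] * 8 + [30] \
        + [1] * 40 + [30, 1, 30] + [1] * 17

print(len(index))  # number of Split outputs -> prints 96
print(sum(index))  # must match the axis-1 size -> prints 279
```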

@sitabulaixizawaluduo
Author

This issue is related to #24412

@YuChern-Intel

Can you check with the latest 2024.1 release to see whether it has the same issue?
