
Enabling openai/whisper-large-v3 using olive-ai-0.6.0 [onnxruntime-gpu: 1.17.1] on Intel CPU/GPU is not supported #1134

Open
vijayaVTT opened this issue May 2, 2024 · 2 comments

@vijayaVTT

```
(whisper_dml) C:\Users\Local_Admin\Olive\examples\whisper>python test_transcription.py --config whisper_gpu_fp16.json
Traceback (most recent call last):
  File "test_transcription.py", line 128, in <module>
    output_text = main()
  File "test_transcription.py", line 94, in main
    olive_model = ONNXModelHandler(**output_model_json["config"])
KeyError: 'config'
```

(Screenshot attached: 2024-05-02 175337)

The steps were reproduced by following the examples/whisper instructions.

I was expecting simple inference output.

I did not change any config; only the implied defaults were used.
Kindly help me fix this issue.
Thanks in advance :)
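
For context, the KeyError comes from test_transcription.py indexing the workflow's output JSON without checking whether a model was actually produced. A minimal defensive sketch of that step (the output_model.json path here is an assumption, not necessarily the example's exact layout):

```python
import json

from olive.model import ONNXModelHandler

# Minimal sketch, not the example's exact code: load the workflow's output
# description and fail with a clearer message when no model was produced.
with open("output_model.json") as f:  # assumed output path
    output_model_json = json.load(f)

config = output_model_json.get("config")
if config is None:
    raise RuntimeError(
        "Workflow output has no 'config' entry; the Olive run likely failed "
        "before producing a model. Check the workflow logs."
    )

olive_model = ONNXModelHandler(**config)
```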

@jambayk
Contributor

jambayk commented May 2, 2024

Can you share the logs from when you ran the workflow? Looks like the workflow failed and no model was generated.
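
If it helps, the workflow can be re-run from Python so the full console output is easy to capture; a sketch assuming the same whisper_gpu_fp16.json config as the failing command, using the olive.workflows.run entry point the Olive examples invoke:

```python
# Sketch: re-run the Olive optimization workflow with the same config.
# Pass/step failures appear in this run's console output, which is the
# log being requested here.
from olive.workflows import run as olive_run

olive_run("whisper_gpu_fp16.json")
```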

@vijayaVTT
Author

> Can you share the logs from when you ran the workflow? Looks like the workflow failed and no model was generated.

Attachments: log.txt, Whisper DML Olive.md
