```
(whisper_dml) C:\Users\Local_Admin\Olive\examples\whisper>python test_transcription.py --config whisper_gpu_fp16.json
Traceback (most recent call last):
  File "test_transcription.py", line 128, in <module>
    output_text = main()
  File "test_transcription.py", line 94, in main
    olive_model = ONNXModelHandler(**output_model_json["config"])
KeyError: 'config'
```
I followed the steps in the examples/whisper guide.
I expected simple transcription output.
I did not change any config; only the defaults were used. Kindly help me fix this issue. Thanks in advance :)
Can you share the logs from when you ran the workflow? Looks like the workflow failed and no model was generated.
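The `KeyError` is a symptom rather than the root cause: `ONNXModelHandler(**output_model_json["config"])` assumes the workflow wrote a `"config"` section into the output model JSON, which only happens on a successful run. A defensive loader along these lines (the function name and error message are illustrative, not Olive's API) would surface the real failure instead of a bare `KeyError`:

```python
import json

def load_model_config(output_model_path):
    # Read the output model JSON produced by an Olive workflow run.
    with open(output_model_path) as f:
        output_model_json = json.load(f)
    # A successful run writes a "config" section describing the ONNX model;
    # if it is missing, the workflow most likely failed before exporting one.
    if "config" not in output_model_json:
        raise RuntimeError(
            "Output model JSON has no 'config' entry -- the Olive workflow "
            "probably failed; check the workflow logs, not the model."
        )
    return output_model_json["config"]
```

This is a sketch for diagnosis only; the fix for the underlying problem is in whatever the workflow logs report.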
Attachments: log.txt, Whisper DML Olive.md