I just wanted to track this here: there seems to be some WER degradation in a setup by @Marvin84 occurring with TensorFlow 2.14 but not with earlier versions (although this specific setup was only tested with TF 2.3). From feedback within the group, TF 2.8 has generally been reported to be a good/safe version.

From @Marvin84:

The comparison is done under exactly the same training conditions for a Conformer diphone factored hybrid HMM, Viterbi-trained for 15 epochs on LibriSpeech 960h, using an alignment from a from-scratch monophone posterior-HMM. Both decodings use a 4-gram LM. Both use a recent RETURNN version. The decoding has been tuned fairly for both results.

On dev-other, the models trained with TF 2.3 / Ubuntu 16 and TF 2.14 / Ubuntu 22 reach 6.6% and 7.7% WER, respectively.
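For context, that absolute gap corresponds to a sizable relative degradation. A quick back-of-the-envelope check (plain Python, numbers taken directly from the report above):

```python
# Relative degradation implied by the reported dev-other numbers.
wer_tf_2_3 = 6.6    # % WER, TF 2.3 / Ubuntu 16
wer_tf_2_14 = 7.7   # % WER, TF 2.14 / Ubuntu 22

rel = (wer_tf_2_14 - wer_tf_2_3) / wer_tf_2_3
print(f"relative WER degradation: {rel:.1%}")  # -> 16.7%
```

So this is roughly a 17% relative degradation, well beyond typical run-to-run noise for a tuned LibriSpeech setup.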
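Since the two runs differ in both TF version and OS, it may help to log the exact environment alongside each experiment. A minimal sketch, assuming a TF 2.x install where `tf.sysconfig.get_build_info()` is available (it may be missing on very old builds):

```python
import platform
import tensorflow as tf

# Record the environment details relevant to this regression, so that
# runs on different machines/containers can be compared directly.
build = tf.sysconfig.get_build_info()
print("TF version:", tf.__version__)
print("OS:", platform.platform())
print("CUDA:", build.get("cuda_version", "n/a"))    # "n/a" on CPU-only builds
print("cuDNN:", build.get("cudnn_version", "n/a"))
```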