[WIP] Training with codebook loss #138
base: master
@zhu-han Current results show that training with the codebook loss converges faster and to a better final result. Training data: the LibriSpeech clean-100h subset. Here is the WER% on LibriSpeech test-clean:
(WER results table not preserved in this extract)
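For context, the codebook-loss idea can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the PR's actual implementation: teacher (wav2vec) frame embeddings are quantized against a learned codebook (e.g. k-means centroids), and the student is trained to predict the resulting discrete indices with a frame-level cross-entropy auxiliary loss alongside its main objective. All shapes and names here are hypothetical toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

T, D, K = 50, 16, 8                     # frames, embedding dim, codebook size (toy values)
teacher_emb = rng.normal(size=(T, D))   # stand-in for wav2vec2 hidden states
codebook = rng.normal(size=(K, D))      # stand-in for learned centroids of teacher embeddings

# 1) Quantize: nearest codebook entry per frame gives discrete targets.
dists = ((teacher_emb[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (T, K)
targets = dists.argmin(axis=1)                                           # (T,)

# 2) The student predicts a distribution over codebook indices per frame.
student_logits = rng.normal(size=(T, K))  # stand-in for a student-model prediction head

# 3) Codebook loss = frame-level cross-entropy against the quantized targets.
log_probs = student_logits - np.log(np.exp(student_logits).sum(-1, keepdims=True))
codebook_loss = -log_probs[np.arange(T), targets].mean()

# In training, this would be combined with the main objective, e.g.
# total_loss = ctc_loss + scale * codebook_loss (scale is a hypothetical weight).
```

The auxiliary term gives the student a dense, frame-level distillation signal from the teacher, which is consistent with the faster convergence reported above.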
Three directions have been tried so far.
All of the following results are on test-clean with CTC decoding. Conclusions for each direction:
To reproduce the 1.85% result, run the following command:
Link to the wav2vec model used in this experiment: https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self
Detailed results of direction 1:
Detailed results of direction 2:
(results table not preserved in this extract)
TODO:
Near future:
Further experiments: