Query regarding Model architecture #81

Open
bkumardevan07 opened this issue Feb 11, 2021 · 1 comment

bkumardevan07 commented Feb 11, 2021

Hi, thanks for your contribution!
I just have a few questions:

1. In the usual Transformer implementation I don't see any mention of convolution layers in attention (the conv blocks you have used), and I don't see a clear explanation of the model architecture in the paper either (please let me know if I'm missing it); there is just a small subsection comparing it with Tacotron. Could you tell me where you got this implementation from? Any paper or discussion?

2. You are concatenating the query vector with the attention output in the MHA blocks. Is this discussed anywhere (a paper or a discussion thread)? What happens without the query concatenation?

3. You have used negative values for the stop/end vector, but the decoder prenet uses ReLU activations. Although it still learned, wouldn't it be better to change that?

Thanks

cfrancesco (Contributor) commented:

Hi,

1. You can find conv layers replacing the dense layers after attention in FastSpeech, for example (see the first sketch below).
2. We found that this concatenation helps build attention, although with more recent improvements it might not be necessary anymore (see the second sketch below).
3. Interesting observation! However, those values are just in the middle of the range (which is -4 to 4), so it should not be an issue.
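
For point 1, here is a minimal sketch of a FastSpeech-style position-wise feed-forward block that uses 1D convolutions in place of the dense layers after the attention sub-layer. It is written against plain tf.keras; the class and argument names are illustrative and not taken from this repo.

```python
import tensorflow as tf

class ConvFeedForward(tf.keras.layers.Layer):
    """Position-wise feed-forward block built from 1D convolutions instead of
    dense layers (FastSpeech style). With kernel_size > 1 each position also
    mixes information from its neighbours, which a dense layer cannot do."""

    def __init__(self, model_dim, hidden_dim, kernel_size=3, dropout_rate=0.1):
        super().__init__()
        self.conv1 = tf.keras.layers.Conv1D(hidden_dim, kernel_size,
                                            padding='same', activation='relu')
        self.conv2 = tf.keras.layers.Conv1D(model_dim, kernel_size, padding='same')
        self.dropout = tf.keras.layers.Dropout(dropout_rate)
        self.norm = tf.keras.layers.LayerNormalization(epsilon=1e-6)

    def call(self, x, training=False):
        # x: (batch, time, model_dim), the output of the attention sub-layer
        y = self.conv2(self.conv1(x))
        y = self.dropout(y, training=training)
        return self.norm(x + y)  # residual connection + layer norm
```

For point 2, a sketch of the query-concatenation idea: the query is concatenated with the attention output along the feature axis and projected back to model_dim before the residual connection. This uses the stock tf.keras MultiHeadAttention layer rather than the repo's own attention code, so treat it as an illustration of the idea, not the actual implementation.

```python
class MHABlockWithQueryConcat(tf.keras.layers.Layer):
    """Multi-head attention block that concatenates the query with the
    attention output before the final projection (the idea in point 2)."""

    def __init__(self, model_dim, num_heads):
        super().__init__()
        self.mha = tf.keras.layers.MultiHeadAttention(
            num_heads=num_heads, key_dim=model_dim // num_heads)
        self.proj = tf.keras.layers.Dense(model_dim)
        self.norm = tf.keras.layers.LayerNormalization(epsilon=1e-6)

    def call(self, query, value, mask=None, training=False):
        # standard multi-head attention; the key defaults to the value
        attn_out = self.mha(query=query, value=value,
                            attention_mask=mask, training=training)
        # concatenate query and attention output on the feature axis,
        # then project back down to model_dim
        concat = tf.concat([attn_out, query], axis=-1)
        return self.norm(query + self.proj(concat))

# self-attention usage: query and value are the same sequence
# x = tf.random.normal([2, 50, 256])
# out = MHABlockWithQueryConcat(model_dim=256, num_heads=4)(query=x, value=x)
```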
