Is there a minimal way to load a pre-trained BEiT model? I want to use BEiT as a backbone in my research on multi-label classification, so I only need the loading code, not the rest of this repo, but the current loading code is quite confusing to me. I think you should provide a torch.hub model so that everyone can access your models more easily; see the sketch below for what I have in mind.
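For concreteness, here is a rough sketch of what a `hubconf.py` at the repo root could look like. The entrypoint name, the timm model name, and the checkpoint URL are all placeholders I made up to illustrate the idea, not this repo's actual API:

```python
# hubconf.py -- hypothetical torch.hub entrypoint for BEiT.
# All names and the checkpoint URL below are illustrative placeholders.
dependencies = ["torch", "timm"]

import torch
from timm.models import create_model  # BEiT's fine-tuning code registers models with timm


def beit_base_patch16_224(pretrained=False, **kwargs):
    """BEiT-base backbone; returns the timm-registered vision transformer."""
    model = create_model("beit_base_patch16_224", **kwargs)
    if pretrained:
        checkpoint = torch.hub.load_state_dict_from_url(
            "https://example.com/beit_base_patch16_224.pth",  # placeholder URL
            map_location="cpu",
        )
        model.load_state_dict(checkpoint["model"])
    return model
```

With that in place, users could load a backbone in one line, e.g. `torch.hub.load("microsoft/unilm", "beit_base_patch16_224", pretrained=True)` (again, a hypothetical call until such an entrypoint exists).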
Hi @thunanguyen, thanks for the good suggestion! Hugging Face is working on merging the BEiT models into their model hub (huggingface/transformers#12994 (comment)). It will be much easier to load the checkpoints through their user-friendly APIs.
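Once that PR lands, loading should look roughly like this. This is a minimal sketch: the class names `BeitFeatureExtractor`/`BeitModel` and the checkpoint id `microsoft/beit-base-patch16-224-pt22k` follow the naming in the linked PR, so double-check them once it is merged:

```python
# Minimal sketch of loading BEiT via Hugging Face transformers,
# assuming the linked PR merges with these class and checkpoint names.
import torch
from PIL import Image
from transformers import BeitFeatureExtractor, BeitModel

extractor = BeitFeatureExtractor.from_pretrained("microsoft/beit-base-patch16-224-pt22k")
model = BeitModel.from_pretrained("microsoft/beit-base-patch16-224-pt22k")

image = Image.open("example.jpg").convert("RGB")  # any RGB image
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

features = outputs.last_hidden_state  # (1, num_patches + 1, hidden_size)

# For multi-label classification, attach a linear head over the [CLS] feature
# and train with BCEWithLogitsLoss; num_labels here is a hypothetical count.
num_labels = 20
head = torch.nn.Linear(model.config.hidden_size, num_labels)
logits = head(features[:, 0])
```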
Great! I will try it immediately! One more question, though: do you think a squeeze layer like the one in SqueezeBERT could make the model smaller and faster, @donglixp? Your large model is quite large.
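To illustrate what I mean: SqueezeBERT replaces the position-wise fully connected layers in transformer blocks with grouped 1x1 convolutions, which cuts parameters and FLOPs by roughly the group count. Whether this transfers well to BEiT is an open question; the shapes below are only an example:

```python
# Illustrative comparison of a standard position-wise Linear layer vs a
# SqueezeBERT-style grouped 1x1 convolution. Sizes are example values.
import torch
import torch.nn as nn

hidden = 768
groups = 4

dense = nn.Linear(hidden, hidden)  # standard position-wise projection
squeezed = nn.Conv1d(hidden, hidden, kernel_size=1, groups=groups)

x = torch.randn(2, 197, hidden)  # (batch, tokens, hidden)
y_dense = dense(x)
y_squeezed = squeezed(x.transpose(1, 2)).transpose(1, 2)  # Conv1d expects (B, C, T)

params = lambda m: sum(p.numel() for p in m.parameters())
print(params(dense), params(squeezed))  # grouped conv has ~1/groups the weights
```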