Implement _VariableFunctionsClass.empty of torch #331
Comments
Once this is implemented, I guess we can add it.
Hi team! I am trying to figure out how I can allocate memory without initializing the values in the tensor. I am assuming I could do something like this.
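For context, here is a minimal sketch of the distinction in eager PyTorch (using only the public `torch` API): `torch.empty` allocates storage without writing values, while `torch.zeros` allocates and initializes.

```python
import torch

# torch.empty allocates a tensor of the requested shape and dtype but does
# NOT initialize the storage, so its values are arbitrary.
t = torch.empty(2, 3, dtype=torch.float32)
print(t.shape)   # torch.Size([2, 3])
print(t.dtype)   # torch.float32

# torch.zeros allocates AND initializes, which costs an extra write pass.
z = torch.zeros(2, 3)
print(z.sum().item())  # 0.0
```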
@k223kim Excellent question! You can implement this by adding a new primitive in lightning-thunder/thunder/core/prims.py (Line 2661 in 06964ac), then implementing the corresponding core-language operation, and finally defining the torch-level symbol in lightning-thunder/thunder/torch/__init__.py (Line 379 in 06964ac).
Let me know if you have any additional questions!
Hey @mruberry! Thank you so much for the detailed guidance. I have submitted a draft PR with (hopefully) everything you have mentioned. However, I would like to understand how this implementation works in more detail. I would appreciate it if you could answer these questions, which will be extremely helpful for future work on Thunder.
Also, I have a question regarding the test case. Thanks again for taking the time to review! ⚡️
Will do!
Functions like these construct the program. After the program is constructed and compiled, it is executed. As part of the compilation process, the symbols are mapped to concrete executor implementations.
So: program construction --> compilation --> execution.
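The three-stage pipeline above can be illustrated with a toy tracer (this is not Thunder's code, just the pattern): symbols record themselves into a trace during construction, and compilation swaps each recorded symbol for a concrete implementation that runs at execution time.

```python
# Toy illustration of construct -> compile -> execute.
trace = []

def symbol_add(a, b):
    # Program construction: record the operation instead of running it.
    trace.append(("add", a, b))
    return ("result", len(trace) - 1)

def compile_trace(trace):
    # Compilation: map each recorded symbol to a concrete implementation.
    impls = {"add": lambda a, b: a + b}
    return [lambda a=a, b=b, f=impls[op]: f(a, b) for op, a, b in trace]

symbol_add(1, 2)
symbol_add(10, 20)
compiled = compile_trace(trace)

# Execution: only now do the recorded operations actually run.
print([fn() for fn in compiled])  # [3, 30]
```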
thunder is interested in understanding properties of operations, like how input metadata maps to output metadata, or how to create a grad formula. A lot of these properties can be implicitly defined by decomposing more complicated operators into simpler ones.
Additionally, some executors, like nvFuser, are interested in breaking down operations into a series of simpler operations that can be fused. Without the prims, each torch operation would effectively be a primitive, so executors like nvFuser would need special execution logic for each torch operation.
The core language (the "clang") is intended to be common operations that are more usable than primitive operations and that facilitate the creation of language definitions, like the torch language definition or the numpy language definition.
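As a toy example of the decomposition idea (hypothetical, not Thunder's actual decompositions), a composite operation like a mean can be expressed purely in terms of two simpler primitives, so an executor only ever needs to implement the primitives:

```python
# Hypothetical primitives an executor might implement and fuse.
def prim_sum(xs):
    total = 0.0
    for x in xs:
        total += x
    return total

def prim_div(a, b):
    return a / b

# A composite torch-like op defined entirely as a decomposition into prims;
# its grad formula and metadata rules follow from the prims' properties.
def mean(xs):
    return prim_div(prim_sum(xs), len(xs))

print(mean([1.0, 2.0, 3.0, 4.0]))  # 2.5
```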
That sounds great!
Thanks so much @mruberry! This helped me greatly with my understanding of how things work. Appreciate your help as always. 🚀
🚀 Feature
Implement torch.empty
Motivation
NeMo Vision Transformer
cc @apaz-cli @tfogal