
Please make it simple! #19793

Open
chengmengli06 opened this issue Apr 22, 2024 · 1 comment
Labels: needs triage (Waiting to be triaged by maintainers), refactor

Comments

chengmengli06 commented Apr 22, 2024

Outline & Motivation

One area where TensorFlow falls behind PyTorch is its overly complex design; PyTorch is much simpler. But when I started using pytorch-lightning, it felt like another TensorFlow. So I beg you to keep things simple. For a simple checkpoint-saving function, I had to trace the code from ModelCheckpoint to trainer.save_checkpoint, then to checkpoint_connector.save_checkpoint, and then to trainer.strategy.save_checkpoint. Where does it end? How can correctness be ensured under such a complex design? Please make it simple!

Pitch

The strategy design in TensorFlow is too complex. DDP is just a simple all-reduce of gradients, but inside a strategy or Keras things become very complex: the function call stacks are so deep that it is hard to understand where the actual all-reduce happens. Even after spending weeks, users may not figure out what is actually being done, because a call goes from module a to b, then to c, then back to a, then to b, and then I give up.
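For reference, this is roughly what "DDP is just an all-reduce of gradients" means in plain `torch.distributed`. It is a minimal sketch for illustration only, not Lightning's or PyTorch DDP's actual implementation; it assumes the process group is already initialized (e.g. via `torchrun`) and `model` is an ordinary `nn.Module`:

```python
import torch
import torch.distributed as dist


def all_reduce_gradients(model: torch.nn.Module) -> None:
    """Average gradients across all ranks after loss.backward()."""
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            # Sum gradients from every rank, then divide to get the mean.
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size


# Usage inside a training step (loss, model, optimizer are placeholders):
#   loss.backward()
#   all_reduce_gradients(model)   # <- the "simple all-reduce"
#   optimizer.step()
```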

Additional context

I suggest implementing things as they are: stop over-encapsulating, follow the design patterns of PyTorch and Caffe, and stop making simple functions complicated.

cc @justusschock @awaelchli

chengmengli06 added the needs triage (Waiting to be triaged by maintainers) and refactor labels on Apr 22, 2024

ryan597 (Contributor) commented May 3, 2024

You can checkpoint manually with `trainer.save_checkpoint("filepath")`, or use the `ModelCheckpoint` callback to create checkpoints automatically.
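For example, a minimal sketch of both approaches (`MyModel`, `train_loader`, and the checkpoint path are placeholders):

```python
import lightning.pytorch as pl
from lightning.pytorch.callbacks import ModelCheckpoint

# Automatic checkpointing: keep the best checkpoint by validation loss.
# "val_loss" must match a metric logged in your LightningModule.
checkpoint_cb = ModelCheckpoint(monitor="val_loss", save_top_k=1)

trainer = pl.Trainer(max_epochs=10, callbacks=[checkpoint_cb])
trainer.fit(MyModel(), train_dataloaders=train_loader)

# Manual checkpointing: write a checkpoint wherever you like.
trainer.save_checkpoint("checkpoints/example.ckpt")
```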

Lightning's implementation allows simplified use across different accelerators and multiple devices; many tedious details around strategies, callbacks, logging, and more are taken care of automatically, which greatly reduces the boilerplate code users have to write.
