Feature/amp #980
Conversation
…r in official Apex examples)
Replace `.shape[0]` with a `len()` call, which is more compatible with exotic input batches (e.g., fine-tuning Faster R-CNN, which requires passing a list of image tensors); see the illustration after this commit list.
# Conflicts: # CHANGELOG.md # catalyst/core/callbacks/optimizer.py
# Conflicts: # catalyst/dl/utils/quantization.py
# Conflicts: # catalyst/callbacks/optimizer.py
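To illustrate the `len()` change from the commit above (the tensors here are made-up stand-ins, not code from the PR): `len()` works for both a batched tensor and a plain list of variable-size image tensors, whereas `.shape[0]` exists only on tensors:

```python
import torch

batch_tensor = torch.zeros(4, 3, 224, 224)   # regular batched input
batch_list = [torch.zeros(3, 512, 512),      # list-of-tensors input,
              torch.zeros(3, 640, 480)]      # e.g. for Faster R-CNN fine-tuning

print(len(batch_tensor))  # 4, the size of the first dimension
print(len(batch_list))    # 2, the number of images in the list
# batch_list.shape[0] would raise AttributeError: a list has no .shape
```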
This pull request is now in conflicts. @BloodAxe, could you fix it? 🙏
@@ -115,9 +115,12 @@ def _scheduler_step(
):
    if isinstance(scheduler, torch.optim.lr_scheduler.ReduceLROnPlateau):
        scheduler.step(reduced_metric)
        lr = scheduler.optimizer.param_groups[0]["lr"]
Hi @BloodAxe. Unfortunately, `torch.optim.lr_scheduler.ReduceLROnPlateau` doesn't inherit from `_LRScheduler`, so we can use neither `get_last_lr()` nor `get_lr()` for it.
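To make the distinction concrete, here is a minimal sketch (the model, optimizer, and schedulers below are placeholders, not code from this PR) of how the current learning rate can be read for both scheduler families:

```python
from torch import nn, optim

model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=0.1)

# _LRScheduler subclasses expose get_last_lr()
step_scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10)
print(step_scheduler.get_last_lr())  # [0.1]

# ReduceLROnPlateau is not an _LRScheduler, so the learning rate
# has to be read from the optimizer's param_groups instead
plateau_scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer)
plateau_scheduler.step(0.5)  # pass the monitored metric
print(optimizer.param_groups[0]["lr"])  # 0.1
```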
if not self.use_fast_zero_grad:
    maybe_recursive_call(self._optimizer, "zero_grad")
else:
    maybe_recursive_call(self._optimizer, zero_grad)
what's the difference? 🤔
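For context: the string form dispatches to the optimizer's own `zero_grad()` method by name, while the bare `zero_grad` presumably refers to a helper implementing the common "fast zero grad" trick of dropping gradient tensors instead of zeroing them in place. A minimal sketch of such a helper, assuming that behavior (the actual catalyst helper may differ):

```python
import torch

def zero_grad(optimizer: torch.optim.Optimizer) -> None:
    """Fast gradient reset: drop .grad tensors instead of zeroing them.

    Skipping the memset that optimizer.zero_grad() performs saves a
    little time and memory traffic each iteration. Sketch only; the
    helper referenced in the diff above may be implemented differently.
    """
    for group in optimizer.param_groups:
        for param in group["params"]:
            param.grad = None
```

Recent PyTorch versions expose the same behavior directly as `optimizer.zero_grad(set_to_none=True)`.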
@BloodAxe could you please merge with master and check the codestyle? I think that could be enough ;)
already merged with #1007
Before submitting

- [ ] Was this discussed/approved via a GitHub issue? (no need for typos and docs improvements)
- [ ] Did you read the contribution guide?
- [ ] Did you check the code style? `catalyst-make-codestyle && catalyst-check-codestyle` (`pip install -U catalyst-codestyle`)
- [ ] Did you make sure to update the docs? We use Google format for all the methods and classes.
- [ ] Did you check the docs with `make check-docs`?
- [ ] Did you write any new necessary tests?
- [ ] Did you check that your code passes the unit tests with `pytest .`?
- [ ] Did you add your new functionality to the docs?
- [ ] Did you update the CHANGELOG?
- added `fast_zero_grad` support to `AMPOptimizerCallback`
- the scheduler callback now gets the learning rate via the scheduler's `get_last_lr()` method (as PyTorch suggests)
- added support for the `find_unused_parameters` option for use with DDP, to detect whether any of the model's parameters go unused (see the sketch below)
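A minimal sketch of the `find_unused_parameters` option mentioned in the last item (the model, backend, and rank below are placeholders, and the usual `MASTER_ADDR`/`RANK` environment variables are assumed to be set, e.g. via `torchrun`; this shows the plain torch DDP API, not the catalyst integration):

```python
import torch.distributed as dist
from torch import nn
from torch.nn.parallel import DistributedDataParallel as DDP

local_rank = 0  # placeholder for this process's GPU index

if not dist.is_initialized():  # one process group per job
    dist.init_process_group(backend="nccl")

model = nn.Linear(10, 2).to(local_rank)

# find_unused_parameters=True makes DDP traverse the autograd graph after
# each forward pass to flag parameters that received no gradient (e.g.
# branches skipped by control flow), at some extra cost per iteration.
model = DDP(model, device_ids=[local_rank], find_unused_parameters=True)
```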
Description

Related Issue
Type of Change
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
PS