
Feature/amp #980

Closed
wants to merge 33 commits

Conversation

BloodAxe
Contributor

Before submitting

  • Was this discussed/approved via a GitHub issue? (not needed for typo fixes and docs improvements)

  • Did you read the contribution guide?

  • Did you check the code style? catalyst-make-codestyle && catalyst-check-codestyle (pip install -U catalyst-codestyle).

  • Did you make sure to update the docs? We use Google format for all the methods and classes.

  • Did you check the docs with make check-docs?

  • Did you write any new necessary tests?

  • Did you check that your code passes the unit tests (pytest .)?

  • Did you add your new functionality to the docs?

  • Did you update the CHANGELOG?

  • added fast_zero_grad support to AMPOptimizerCallback (see the sketch after this list)

  • the scheduler callback now gets the learning rate via the scheduler's get_last_lr() method (as PyTorch suggests)

  • added support for the find_unused_parameters option for DDP, to detect whether the model has any unused parameters
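
A minimal sketch of the idea behind the fast_zero_grad flag, assuming a plain PyTorch optimizer; the flag name mirrors the callback option, and the loop shows what a fast zero-grad helper is generally expected to do (the actual catalyst helper may differ):

import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
use_fast_zero_grad = True  # illustrative flag mirroring the callback option

model(torch.randn(8, 4)).sum().backward()

if not use_fast_zero_grad:
    # standard path: fill the existing .grad tensors with zeros
    optimizer.zero_grad()
else:
    # fast path: drop the gradient tensors so the next backward()
    # allocates fresh ones, skipping the extra fill kernel
    for group in optimizer.param_groups:
        for param in group["params"]:
            param.grad = None

In recent PyTorch releases, optimizer.zero_grad(set_to_none=True) gives the same behaviour as the fast path.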

Description

Related Issue

Type of Change

  • Examples / docs / tutorials / contributors update
  • Bug fix (non-breaking change which fixes an issue)
  • Improvement (non-breaking change which improves an existing feature)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

PS

  • I know that I could join Slack for pull request discussion.

Replace .shape[0] with a len() call, which is more compatible with exotic input batches (such as fine-tuning Faster R-CNN, which requires passing a list of image tensors); see the sketch after the conflict notes below.
# Conflicts:
#	CHANGELOG.md
#	catalyst/core/callbacks/optimizer.py
# Conflicts:
#	catalyst/dl/utils/quantization.py
# Conflicts:
#	catalyst/callbacks/optimizer.py
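
To illustrate the .shape[0] → len() change from the commit above: len() works both for a dense image batch and for a detection-style list of image tensors, while .shape[0] only exists on tensors. A small sketch with illustrative variable names:

import torch

dense_batch = torch.randn(8, 3, 224, 224)                   # ordinary image batch
list_batch = [torch.randn(3, 480, 640) for _ in range(8)]   # Faster R-CNN style batch

print(len(dense_batch), len(list_batch))  # 8 8 -- len() handles both
print(dense_batch.shape[0])               # 8
# list_batch.shape[0] would raise AttributeError: 'list' object has no attribute 'shape'
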
@pep8speaks

pep8speaks commented Oct 29, 2020

Hello @BloodAxe! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2020-10-29 09:59:40 UTC

@mergify

mergify bot commented Nov 4, 2020

This pull request is now in conflicts. @BloodAxe, could you fix it? 🙏

@@ -115,9 +115,12 @@ def _scheduler_step(
):
    if isinstance(scheduler, torch.optim.lr_scheduler.ReduceLROnPlateau):
        scheduler.step(reduced_metric)
        lr = scheduler.optimizer.param_groups[0]["lr"]
Contributor
Hi @BloodAxe
Unfortunately, torch.optim.lr_scheduler.ReduceLROnPlateau doesn't inherit from _LRScheduler, so we can use neither get_last_lr() nor get_lr() for it.
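
A minimal sketch of the distinction (not the catalyst implementation): schedulers derived from _LRScheduler expose get_last_lr(), while ReduceLROnPlateau, at least in the PyTorch versions current when this PR was open, does not, so the learning rate has to be read back from the optimizer:

import torch

model = torch.nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

step_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
plateau_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)

optimizer.step()
step_scheduler.step()
print(step_scheduler.get_last_lr())  # works: StepLR inherits from _LRScheduler

plateau_scheduler.step(0.5)
# no get_last_lr() here, so fall back to reading the optimizer directly
print(plateau_scheduler.optimizer.param_groups[0]["lr"])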

@and-kul and-kul mentioned this pull request Nov 8, 2020
Comment on lines +375 to +378
if not self.use_fast_zero_grad:
    maybe_recursive_call(self._optimizer, "zero_grad")
else:
    maybe_recursive_call(self._optimizer, zero_grad)
Member
what's the difference? 🤔

@Scitator
Member

Scitator commented Nov 9, 2020

@BloodAxe could you please merge with master and check the codestyle? I think that could be enough ;)

@Scitator
Member

already merged with #1007

@Scitator Scitator closed this Nov 30, 2020
@BloodAxe BloodAxe deleted the feature/amp branch May 28, 2021 19:35