
Merge AMPOptimizerCallback and OptimizerCallback #1007

Merged
merged 8 commits into catalyst-team:master from and-kul:optimizer_callback on Nov 25, 2020

Conversation


@and-kul (Contributor) commented Nov 22, 2020

Before submitting

  • Was this discussed/approved via a GitHub issue? (no need for typos and docs improvements)
  • Did you read the contribution guide?
  • Did you check the code style? catalyst-make-codestyle && catalyst-check-codestyle (pip install -U catalyst-codestyle).
  • Did you make sure to update the docs? We use Google format for all the methods and classes.
  • Did you check the docs with make check-docs?
  • Did you write any new necessary tests?
  • Did you check that your code passes the unit tests (pytest .)?
  • Did you add your new functionality to the docs?
  • Did you update the CHANGELOG?

Description

  • Remove AMPOptimizerCallback and move its functionality into OptimizerCallback
  • Change the logic of fp16=True in runner.train:
If fp16=True, params by default will be:
    * ``{"amp": True}`` if torch>=1.6.0
    * ``{"apex": True, "opt_level": "O1", ...}`` if torch<1.6.0
  • OptimizerCallback now has two new arguments: use_amp and use_apex. They can be set manually or inferred from runner.experiment.distributed_params (see the usage sketch after this list)
  • decouple_weight_decay now defaults to False
    • This change is debatable, but it seems like the safer assumption (for example, PyTorch itself now ships the AdamW optimizer, which implements the decoupling logic)
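
For illustration, a minimal usage sketch of the merged callback and the new fp16 behavior. It assumes a standard SupervisedRunner setup with toy data; everything apart from fp16, use_amp, use_apex, and decouple_weight_decay (which this PR describes) follows the usual Catalyst API and is only an approximation, not the exact merged code.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

from catalyst import dl

# Toy data so the snippet is self-contained (shapes are arbitrary).
dataset = TensorDataset(torch.randn(64, 28 * 28), torch.randint(0, 10, (64,)))
loaders = {"train": DataLoader(dataset, batch_size=16)}

model = torch.nn.Linear(28 * 28, 10)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

runner = dl.SupervisedRunner()

# fp16=True is resolved into backend-specific params:
# {"amp": True} on torch>=1.6.0, or {"apex": True, "opt_level": "O1", ...} otherwise.
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    loaders=loaders,
    num_epochs=1,
    fp16=True,
)

# Alternatively, configure the merged OptimizerCallback by hand:
# use_amp / use_apex can be passed explicitly instead of being inferred from
# runner.experiment.distributed_params, and decouple_weight_decay now defaults to False.
callbacks = [
    dl.OptimizerCallback(use_amp=True, use_apex=False, decouple_weight_decay=False),
]
```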

Type of Change

  • Examples / docs / tutorials / contributors update
  • Bug fix (non-breaking change which fixes an issue)
  • Improvement (non-breaking change which improves an existing feature)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

PS

  • I know that I could join Slack for pull request discussion.

pep8speaks commented Nov 22, 2020

Hello @and-kul! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2020-11-25 15:20:47 UTC

@@ -30,6 +31,33 @@
from catalyst.utils.tracing import save_traced_model, trace_model


def resolve_bool_fp16(fp16: Union[Dict, bool]):
Member reviewer commented:
could we make it _resolve_bool_fp16? or move to catalyst/runner/functional.py? I just want to keep runner.py clean and clear :)

Contributor Author replied:
Done
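
For context, a rough sketch of what a boolean fp16 resolver along these lines could look like, following the defaults listed in the PR description. This is an assumption-based illustration, not the code merged in this PR; the version check via the packaging library is just one way to do it.

```python
from typing import Dict, Union

import torch
from packaging import version


def _resolve_bool_fp16(fp16: Union[Dict, bool]) -> Dict:
    """Turn ``fp16=True`` into explicit mixed-precision params (sketch).

    torch>=1.6.0 -> native AMP; older versions -> NVIDIA Apex with opt_level O1.
    A dict is passed through unchanged; False means no mixed precision.
    """
    if isinstance(fp16, bool):
        if not fp16:
            return {}
        if version.parse(torch.__version__) >= version.parse("1.6.0"):
            return {"amp": True}
        return {"apex": True, "opt_level": "O1"}
    return fp16
```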

catalyst/runners/runner.py (outdated review thread, resolved)
and-kul and others added 2 commits November 25, 2020 18:15
Co-authored-by: Sergey Kolesnikov <scitator@gmail.com>
@Scitator Scitator merged commit 5cedbc4 into catalyst-team:master Nov 25, 2020
@Scitator Scitator mentioned this pull request Nov 30, 2020
15 tasks
@and-kul and-kul deleted the optimizer_callback branch December 15, 2020 11:09