
Contrib/adamp #942

Merged
merged 7 commits into master from contrib/adamp on Sep 22, 2020

Conversation

Scitator (Member)

Before submitting

  • Was this discussed/approved via a GitHub issue? (not needed for typo fixes and docs improvements)
  • Did you read the contribution guide?
  • Did you check the code style? catalyst-make-codestyle && catalyst-check-codestyle (pip install -U catalyst-codestyle).
  • Did you make sure to update the docs? We use Google format for all the methods and classes.
  • Did you check the docs with make check-docs?
  • Did you write any new necessary tests?
  • Did you add your new functionality to the docs?
  • Did you update the CHANGELOG?
  • You can use 'Login as guest' to see TeamCity build logs.

Description

Related Issue

Type of Change

  • Examples / docs / tutorials / contributors update
  • Bug fix (non-breaking change which fixes an issue)
  • Improvement (non-breaking change which improves an existing feature)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

PR review

Anyone in the community is free to review the PR once the tests have passed.
If your PR was not discussed in a GitHub issue beforehand, there is a high chance it will not be merged.

@pep8speaks

pep8speaks commented Sep 21, 2020

Hello @Scitator! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2020-09-22 19:34:24 UTC

class SGDP(Optimizer):
    """Implements SGDP algorithm.

    The SGDP variant was proposed in `Slowing Down the Weight Norm Increase in Momentum-based Optimizers`_.


[pep8] reported by reviewdog 🐶
E501 line too long (107 > 79 characters)

class SGDP(Optimizer):
    """Implements SGDP algorithm.

    The SGDP variant was proposed in `Slowing Down the Weight Norm Increase in Momentum-based Optimizers`_.


[pep8] reported by reviewdog 🐶
W505 doc line too long (107 > 79 characters)
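
Both reports point at the same 107-character docstring line. A minimal fix is to wrap the reference across two lines (reST inline markup may span lines, so the trailing `_ link should still resolve):

    The SGDP variant was proposed in `Slowing Down the Weight Norm
    Increase in Momentum-based Optimizers`_.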

    https://arxiv.org/abs/2006.08217
    """

    def __init__(


[pep8] reported by reviewdog 🐶
D107 Missing docstring in init
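
One way to resolve D107 is a short constructor docstring. The real signature is elided in this excerpt, so the one below is hypothetical:

    def __init__(self, params, **kwargs):  # hypothetical signature
        """SGDP constructor; argument semantics are documented in the
        class docstring above."""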

        return perturb, wd

    def step(self, closure=None):
        """


[pep8] reported by reviewdog 🐶
DAR201 Missing "Returns" in Docstring: - return
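
DAR201 wants the return value documented. A sketch in Google docstring format (the format the checklist above requires), assuming step follows the usual torch.optim.Optimizer convention of returning the closure's loss:

    def step(self, closure=None):
        """Performs a single optimization step.

        Args:
            closure (callable, optional): a closure that reevaluates
                the model and returns the loss.

        Returns:
            The loss returned by the closure, or None if no closure
            was given.
        """
        loss = None
        if closure is not None:
            loss = closure()
        # ... SGDP parameter update elided ...
        return loss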


import torch
import torch.nn.functional as F
from torch.optim.optimizer import Optimizer, required


[pep8] reported by reviewdog 🐶
F401 'torch.optim.optimizer.required' imported but unused
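
If required really is unused in the final code, the import can simply be trimmed; if the constructor uses lr=required, the import should stay and this finding is a false positive on the touched lines:

import torch
import torch.nn.functional as F
from torch.optim.optimizer import Optimizer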

            running averages of gradient and its square (default: (0.9, 0.999))
        eps (float, optional): term added to the denominator to improve
            numerical stability (default: 1e-8)
        weight_decay (float, optional): weight decay coefficient (default: 1e-2)


[pep8] reported by reviewdog 🐶
W505 doc line too long (80 > 79 characters)

        wd_ratio (float): relative weight decay applied on scale-invariant
            parameters compared to that applied on scale-variant parameters
            (default: 0.1)
        nesterov (boolean, optional): enables Nesterov momentum (default: False)


[pep8] reported by reviewdog 🐶
E501 line too long (80 > 79 characters)

        wd_ratio (float): relative weight decay applied on scale-invariant
            parameters compared to that applied on scale-variant parameters
            (default: 0.1)
        nesterov (boolean, optional): enables Nesterov momentum (default: False)


[pep8] reported by reviewdog 🐶
W505 doc line too long (80 > 79 characters)
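
The E501 and W505 findings here both come from the 80-character nesterov line, and the analogous weight_decay line was flagged above; wrapping each default onto its own continuation line keeps everything under 79 columns:

        weight_decay (float, optional): weight decay coefficient
            (default: 1e-2)
        wd_ratio (float): relative weight decay applied on scale-invariant
            parameters compared to that applied on scale-variant parameters
            (default: 0.1)
        nesterov (boolean, optional): enables Nesterov momentum
            (default: False)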

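For reference, the merged optimizer drops into a standard PyTorch training loop. Below is a minimal usage sketch; the import path is an assumption (this page does not show where the class lives), and the hyperparameters are illustrative (wd_ratio and nesterov match the docstring excerpts above, the rest are standard SGD arguments):

    import torch
    import torch.nn.functional as F

    from catalyst.contrib.nn.optimizers import SGDP  # assumed import path

    model = torch.nn.Linear(10, 2)
    optimizer = SGDP(
        model.parameters(),
        lr=0.1,
        momentum=0.9,
        wd_ratio=0.1,  # relative decay on scale-invariant parameters
        nesterov=False,
    )

    x = torch.randn(8, 10)
    y = torch.randint(0, 2, (8,))
    loss = F.cross_entropy(model(x), y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()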

Scitator merged commit 8bde0d7 into master on Sep 22, 2020.
The mergify bot deleted the contrib/adamp branch on September 22, 2020 at 19:51.