Summarize model size in MegaBytes [WIP] #4810

Closed

Changes from all commits (137 commits)
eb9cb3c
Add Google Colab badges (#5111)
shacharmirkin Dec 14, 2020
69123af
Fix hanging metrics tests (#5134)
tadejsv Dec 14, 2020
84bb9db
simplify changelog (#5135)
Borda Dec 14, 2020
fde972f
add copyright to tests (#5143)
Borda Dec 15, 2020
fe75c73
Update changelog, increment version (#5148)
SeanNaren Dec 15, 2020
748a74e
Prune CHANGELOG.md (#5151)
SeanNaren Dec 15, 2020
79565be
Fix saved filename in ModelCheckpoint if it already exists (#4861)
rohitgr7 Dec 16, 2020
afe5da7
Update isort config (#5142)
akihironitta Dec 16, 2020
b4d926b
Fix reset TensorRunningAccum (#5106)
VinhLoiIT Dec 16, 2020
94838d3
Fix hang in DDP HPC accelerators (#5157)
ananthsub Dec 16, 2020
bc4008b
merge 1.1.x branch (SN)
williamFalcon Dec 16, 2020
140c37a
support number for logging with sync_dist=True (#5080)
tchaton Dec 16, 2020
61e3981
Un-balanced logging properly supported (#5119)
tchaton Dec 16, 2020
8d1ca4c
[bugfix] remove nan loss in manual optimization (#5121)
tchaton Dec 16, 2020
81fd33b
[bug-fix] Metric reduction with Logging (#5150)
tchaton Dec 16, 2020
3910b28
Disable pl optimizer temporarily to fix AMP issues (#5163)
SeanNaren Dec 17, 2020
e5569a9
drop install FairScale for TPU (#5113)
Borda Dec 17, 2020
405a840
temporarily suspend all mergify rules (#5112)
Borda Dec 17, 2020
a94f662
prune ecosystem example (#5085)
Borda Dec 17, 2020
1b599ff
add doctests for example 1/n (#5079)
Borda Dec 17, 2020
b16441f
Document speed comparison (#2072)
Borda Dec 17, 2020
e721b1f
Prelease 1.1.2rc (#5171)
SeanNaren Dec 17, 2020
81070be
Fixed docs for WandbLogger (#5128)
hassiahk Dec 18, 2020
16e819e
update DALIClassificationLoader to not use deprecated arguments (#4925)
gan3sh500 Dec 18, 2020
5d2fa98
Github Actions deprecation (#5183)
InCogNiTo124 Dec 18, 2020
4c34855
[bugfix] Correct call to torch.no_grad (#5124)
8greg8 Dec 19, 2020
e89764e
feat(wandb): offset logging step when resuming (#5050)
borisdayma Dec 19, 2020
88b55e4
reduce verbosity level in drone ci (#5190)
awaelchli Dec 20, 2020
618580b
Remove Sourcerer (#5172)
rohitgr7 Dec 20, 2020
be3e870
skip multi-gpu test when running on single-gpu machine (#5186)
awaelchli Dec 20, 2020
c8eda3f
Update warning if ckpt directory is not empty (#5209)
rohitgr7 Dec 21, 2020
8ad7214
add make cmd - clean (#5204)
Borda Dec 21, 2020
bb6dfb6
remove unused rpc import in modelcheckpoint causing import error (#5198)
awaelchli Dec 21, 2020
a401fb3
add doctests for example 2/n segmentation (#5083)
Borda Dec 21, 2020
3bd6206
Update README.md
williamFalcon Dec 22, 2020
43f73fd
Update README.md
williamFalcon Dec 22, 2020
9a3c035
Tighten up mypy config (#5237)
alanhdu Dec 23, 2020
5820887
update for v1.1.2 (#5240)
Borda Dec 23, 2020
ae04311
[Bugfix] Add LightningOptimizer parity test and resolve AMP bug (#5191)
tchaton Dec 23, 2020
27f3f97
update chlog for future 1.1.3rc (#5242)
Borda Dec 23, 2020
1767350
[bugfix] Group defaults to WORLD if None (#5125)
8greg8 Dec 23, 2020
6adc1b3
add memory parity for PL vs Vanilla (#5170)
Borda Dec 23, 2020
c479351
releasing feature as nightly (#5233)
Borda Dec 23, 2020
b22b1c2
update PR template (#5206)
Borda Dec 23, 2020
9b3c6a3
skip some description from pypi (#5234)
Borda Dec 23, 2020
5651c9c
fix typo in Optimization (#5228)
BobAnkh Dec 24, 2020
1d53307
Fix typo in Trainer.test() (#5226)
JamesTrick Dec 24, 2020
b930b5f
Add TPU example (#5109)
rohitgr7 Dec 24, 2020
90c1c0f
Update README.md (#5018)
nightlessbaron Dec 24, 2020
8d8098c
Minor doc fixes (#5139)
rohitgr7 Dec 24, 2020
d1e97a4
Fix typo in doc (#5270)
cccntu Dec 26, 2020
9ebbfec
Trainer.test should return only test metrics (#5214)
tchaton Dec 28, 2020
eb1d61c
remove docs (#5287)
SkafteNicki Dec 28, 2020
0c7c9e8
Apply isort to `pl_examples/` (#5291)
akihironitta Dec 29, 2020
dabfeca
[Metrics] [Docs] Add section about device placement (#5280)
SkafteNicki Dec 29, 2020
4913cbb
Fix metric state reset (#5273)
tadejsv Dec 29, 2020
dd98a60
Fixed typo in docs for optimizer_idx (#5310)
sugatoray Dec 31, 2020
64163c2
[Docs] Mention that datamodules can also be used with `.test()` metho…
SkafteNicki Dec 31, 2020
ab7512d
refactor python in GH actions (#5281)
Borda Dec 31, 2020
d20fd8e
supports --num-nodes on DDPSequentialPlugin() (#5327)
haven-jeon Jan 2, 2021
724f105
update isort config (#5335)
Borda Jan 3, 2021
51af395
uniques docs artefact name (#5336)
Borda Jan 4, 2021
17a0784
black formatting and migrated to self.log logging in finetuning examp…
jspaezp Jan 4, 2021
0e593fb
Reordered sections for intuitive browsing. (e.g. limit_train_batches …
skim2257 Jan 4, 2021
15a400b
docs: logits -> probs in Accuracy metric documentation (#5340)
Kulikovpavel Jan 4, 2021
dd442b6
[Docs] update docs for resume_from_checkpoint (#5164)
rohitgr7 Jan 4, 2021
b0051e8
Add non-existing resume_from_checkpoint acceptance for auto-resubmit …
tarepan Jan 5, 2021
f740245
Disable checkpointing, earlystopping and logging with fast_dev_run (#…
rohitgr7 Jan 5, 2021
c7d0f4c
Add a check for optimizer attatched to lr_scheduler (#5338)
rohitgr7 Jan 5, 2021
371daea
Allow log_momentum for adaptive optimizers (#5333)
rohitgr7 Jan 5, 2021
062800a
Fix invalid value for weights_summary (#5296)
rohitgr7 Jan 5, 2021
d5b3678
[bug-fix] Trainer.test points to latest best_model_path (#5161)
tchaton Jan 5, 2021
a40e3a3
Change the classifier input from 2048 to 1000. (#5232)
LaserBit Jan 5, 2021
d568533
Updated metrics/classification/precision_recall.py (#5348)
abhik-99 Jan 5, 2021
410d67f
Existence check for hparams now uses underlying filesystem (#5250)
kandluis Jan 5, 2021
ec0fb7a
refactor imports of logger dependencies (#4860)
Borda Jan 5, 2021
6536ea4
FIX-5311: Cast to string `_flatten_dict` (#5354)
marload Jan 5, 2021
4d9db86
Prepare 1.1.3 release (#5365)
carmocca Jan 5, 2021
019e4ff
Add 1.1.4 section to CHANGELOG (#5378)
carmocca Jan 6, 2021
ee83731
Update sharded install to latest fairscale release, add reasoning why…
SeanNaren Jan 6, 2021
cc62435
docker: run ci only docker related files are changed (#5203)
Jan 6, 2021
4c6f36e
Fix pre-commit trailing-whitespace and end-of-file-fixer hooks. (#5387)
arnaudgelas Jan 7, 2021
72525f0
tests for legacy checkpoints (#5223)
Borda Jan 8, 2021
d510707
[bug-fix] Call transfer_batch_to_device in DDPlugin (#5195)
tchaton Jan 8, 2021
f2e99d6
deprecate enable_pl_optimizer as it is not restored properly (#5244)
tchaton Jan 8, 2021
a053d75
[bugfix] Logging only on `not should_accumulate()` during training (#…
tchaton Jan 9, 2021
bb5031b
bugfix: Resolve interpolation bug with Hydra (#5406)
tchaton Jan 9, 2021
f1e28d1
GH action - label conflicts (#5450)
Borda Jan 10, 2021
499d503
fix typos in validation_step and test_step docs (#5438)
thepooons Jan 11, 2021
92bbf2f
GH action - auto-update PRs (#5451)
Borda Jan 11, 2021
8748293
Add automatic optimization property setter to lightning module (#5169)
ananthsub Jan 11, 2021
f065ea6
populate some more legacy checkpoints (#5457)
Borda Jan 12, 2021
635df27
[BUG] Check environ before selecting a seed to prevent warning messag…
SeanNaren Jan 12, 2021
d30e316
[docs] Add ananthsub to core (#5476)
ananthsub Jan 12, 2021
9611a7f
update nightly & upgrade Twine (#5458)
Borda Jan 12, 2021
c00d570
ci: update recurent events (#5480)
Borda Jan 12, 2021
652df18
Increment version, update CHANGELOG.md (#5482)
SeanNaren Jan 12, 2021
1f6236a
fix generate checkpoint (#5489)
Borda Jan 12, 2021
1ec1d3e
update tests with new auto_opt api (#5466)
rohitgr7 Jan 12, 2021
a9377e3
[Docs] fix on_after_backward example (#5278)
rohitgr7 Jan 12, 2021
36198ec
fix typo in multi-gpu docs (#5402)
awaelchli Jan 13, 2021
4c78804
fix auto-label conditions (#5496)
Borda Jan 13, 2021
83b1ff4
pipeline release CI (#5494)
Borda Jan 13, 2021
d916973
Refactor setup_training and remove test_mode (#5388)
rohitgr7 Jan 13, 2021
94b7d84
add section & add testing ckpt 1.1.4 (#5495)
Borda Jan 14, 2021
71d5cc1
Fix visual progress bar bug / properly reset progress bar (#4579)
awaelchli Jan 14, 2021
24fb75a
reconfigure mergify (#5499)
Borda Jan 14, 2021
d15f7a0
Fix Wrong exception message (#5492)
lacrosse91 Jan 14, 2021
d62ca82
Tensorboard Docu about Hyperparams saving (#5158)
Skyy93 Jan 15, 2021
7f352cb
fix reinit_schedulers with correct optimizer (#5519)
rohitgr7 Jan 15, 2021
6926b84
[bugfix] Fix signature mismatch in DDPCPUHPCAccelerator's model_to_de…
ananthsub Jan 16, 2021
c80e45d
Fix val_check_interval with fast_dev_run (#5540)
rohitgr7 Jan 18, 2021
a56f745
Remove unused `beta` argument in precision/recall (#5532)
Jan 18, 2021
18d2ae8
Fix logging on_train_batch_end in a callback with multiple optimizers…
carmocca Jan 18, 2021
18bba25
fix command line run for refinforce_learn_qnet in pl_examples (#5414)
sidhantls Jan 19, 2021
389186c
Drop greetings comment (#5563)
carmocca Jan 19, 2021
486f682
Fix root node resolution in slurm environment
tobiasmaier Jan 19, 2021
3825ce4
fix argparse conflicting options error (#5569)
sidhantls Jan 19, 2021
088b352
Prepare 1.1.5 release (#5576)
carmocca Jan 19, 2021
f477c2f
Add new CHANGELOG section (#5580)
carmocca Jan 19, 2021
a376b65
:zap: Added initial setup to calculate model size
kartik4949 Nov 22, 2020
ef4d36b
:hammer: minor refactor
kartik4949 Nov 23, 2020
1863c4e
:zap: Model size for different input sizes
kartik4949 Nov 30, 2020
2fb4bdc
:zap: added tests
kartik4949 Nov 30, 2020
9fd363f
:bug: make model_size method
kartik4949 Nov 30, 2020
9a493f9
:bug: call model_size
kartik4949 Nov 30, 2020
419ecdf
:hammer: model size summary refactor
kartik4949 Nov 30, 2020
80cb699
:hammer: Simplified tests
kartik4949 Nov 30, 2020
9b812ec
:hammer: dict input support for model size
kartik4949 Nov 30, 2020
869098f
:hammer: use param_nums property for total_params calc.
kartik4949 Nov 30, 2020
b897ef9
:hammer: fix minor issues
kartik4949 Dec 1, 2020
ca730ff
:hammer: better Exception
kartik4949 Dec 1, 2020
aafd89d
:hammer: refactore and minor bug fixes
kartik4949 Dec 1, 2020
6fc8eb4
:hammer: doc test summary fix
kartik4949 Dec 1, 2020
937a94c
:zap: Only full mode support.
kartik4949 Dec 1, 2020
d42352b
:hammer: core memory refactor
kartik4949 Dec 1, 2020
63010a8
Simplified Model size
kartik4949 Jan 20, 2021
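
The commit trail above ends with the feature this PR proposes: reporting a model's size in megabytes alongside the usual parameter counts in the summary. As a rough, illustrative sketch only (not the PR's actual implementation; the helper name model_size_mb is hypothetical), a model's in-memory footprint can be estimated from the storage of its parameters and buffers:

from torch import nn


def model_size_mb(model: nn.Module) -> float:
    """Estimate a model's in-memory size in megabytes (parameters + buffers only)."""
    size_bytes = sum(p.nelement() * p.element_size() for p in model.parameters())
    size_bytes += sum(b.nelement() * b.element_size() for b in model.buffers())
    return size_bytes / 1e6  # decimal megabytes; ignores activations and optimizer state


if __name__ == "__main__":
    net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
    print(f"model size: {model_size_mb(net):.3f} MB")

With float32 parameters (4 bytes each), the tiny network above has 9,610 parameters, so the estimate comes out to roughly 0.038 MB.
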
16 changes: 12 additions & 4 deletions .drone.yml
@@ -30,15 +30,23 @@ steps:
MKL_THREADING_LAYER: GNU

commands:
- set -e
- python --version
- pip --version
- nvidia-smi
- pip install -r ./requirements/devel.txt --upgrade-strategy only-if-needed -v --no-cache-dir
- pip install git+https://${AUTH_TOKEN}@github.com/PyTorchLightning/lightning-dtrun.git@v0.0.2 -v --no-cache-dir
- pip install -r ./requirements/devel.txt --upgrade-strategy only-if-needed --no-cache-dir
- pip install git+https://${AUTH_TOKEN}@github.com/PyTorchLightning/lightning-dtrun.git@v0.0.2 --no-cache-dir
# when the image has a defined CUDA version we can switch to this package spec "nvidia-dali-cuda${CUDA_VERSION%%.*}0"
# todo: temporary fix until https://github.com/PyTorchLightning/pytorch-lightning/pull/4922 is resolved
- pip install --extra-index-url https://developer.download.nvidia.com/compute/redist "nvidia-dali-cuda100<0.27" --upgrade-strategy only-if-needed
- pip install --extra-index-url https://developer.download.nvidia.com/compute/redist nvidia-dali-cuda100 --upgrade-strategy only-if-needed
- pip list
# todo: remove unzip install after a new nightly docker is created
- apt-get update -qq
- apt-get install -y --no-install-recommends unzip
# get legacy checkpoints
- wget https://pl-public-data.s3.amazonaws.com/legacy/checkpoints.zip -P legacy/
- unzip -o legacy/checkpoints.zip -d legacy/
- ls -l legacy/checkpoints/
# testing...
- python -m coverage run --source pytorch_lightning -m pytest pytorch_lightning tests -v --durations=25 # --flake8
# Running special tests
- sh tests/special_tests.sh
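
The added CI steps fetch archived checkpoints so that models saved with older releases can be loaded against the current code. Purely for orientation, a minimal sketch of inspecting one of those checkpoints (the filename below is hypothetical):

import torch

# hypothetical file; the CI step above unpacks checkpoints.zip into legacy/checkpoints/
ckpt = torch.load("legacy/checkpoints/epoch=0.ckpt", map_location="cpu")
print(sorted(ckpt))  # typically includes keys such as 'epoch', 'global_step', 'state_dict'
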
18 changes: 9 additions & 9 deletions .github/BECOMING_A_CORE_CONTRIBUTOR.md
@@ -1,14 +1,14 @@
# How to become a core contributor

Thanks for your interest in joining the Lightning team! We’re a rapidly growing project which is poised to become the go-to framework for DL researchers!
We're currently recruiting for a team of 5 core maintainers.

As a core maintainer you will have a strong say in the direction of the project. Big changes will require a majority of maintainers to agree.

### Code of conduct
First and foremost, you'll be evaluated against [these core values](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/.github/CONTRIBUTING.md). Any code we commit or feature we add needs to align with those core values.

### The bar for joining the team
Lightning is being used to solve really hard problems at the top AI labs in the world. As such, the bar for adding team members is extremely high. Candidates must have solid engineering skills, have a good eye for user experience, and must be a power user of Lightning and PyTorch.

With that said, the Lightning team will be diverse and a reflection of an inclusive AI community. You don't have to be an engineer to contribute! Scientists with great usability intuition and PyTorch ninja skills are welcomed!
@@ -36,26 +36,26 @@ Pleasant/helpful tone.
- Code is NOT overly engineered or hard to read
- Ask yourself, could a non-engineer understand what’s happening here?
- Make sure new tests are written
- Is this NECESSARY for Lightning? There are some PRs which are just purely about adding engineering complexity which have no place in Lightning.
Guidance
- Some other PRs are for people who are wanting to get involved and add something unnecessary. We do want their help though! So don’t approve the PR, but direct them to a Github issue that they might be interested in helping with instead!
- To be considered for core contributor, please review 10 PRs and help the authors land it on master. Once you've finished the review, ping me
for a sanity check. At the end of 10 PRs if your PR reviews are inline with expectations described above, then you can merge PRs on your own going forward,
otherwise we'll do a few more until we're both comfortable :)

#### Project directions
There are some big decisions which the project must make. For these I expect core contributors to have something meaningful to add if it’s their area of expertise.

#### Diversity
Lightning should reflect the broader community it serves. As such we should have scientists/researchers from
different fields contributing!

The first 5 core contributors will fit this profile. Thus if you overlap strongly with experiences and expertise as someone else on the team, you might have to wait until the next set of contributors are added.

#### Summary: Requirements to apply
The goal is to be inline with expectations for solving issues by the last one so you can do them on your own. If not, I might ask you to solve a few more specific ones.

- Solve 10+ Github issues.
- Create 5+ meaningful PRs which solves some reported issue - bug,
- Perform 10+ PR reviews from other contributors.

6 changes: 5 additions & 1 deletion .github/ISSUE_TEMPLATE/bug_report.md
@@ -10,11 +10,15 @@ assignees: ''

<!-- A clear and concise description of what the bug is. -->

## Please reproduce using [the BoringModel and post here](https://colab.research.google.com/drive/1HvWVVTK8j2Nj52qU4Q4YCyzOm0_aLQF3?usp=sharing)
## Please reproduce using the BoringModel


<!-- Please paste your BoringModel colab link here. -->

### To Reproduce

Use following [**BoringModel**](https://colab.research.google.com/drive/1HvWVVTK8j2Nj52qU4Q4YCyzOm0_aLQF3?usp=sharing) and post here

<!-- If you could not reproduce using the BoringModel and still think there's a bug, please post here -->

### Expected behavior
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/documentation.md
@@ -12,7 +12,7 @@ assignees: ''
For typos and doc fixes, please go ahead and:

1. Create an issue.
2. Fix the typo.
3. Submit a PR.

Thanks!
8 changes: 4 additions & 4 deletions .github/ISSUE_TEMPLATE/how-to-question.md
@@ -9,18 +9,18 @@ assignees: ''

## ❓ Questions and Help

### Before asking:
1. Try to find answers to your questions in [the Lightning Forum!](https://forums.pytorchlightning.ai/)
2. Search for similar [issues](https://github.com/PyTorchLightning/pytorch-lightning/issues).
3. Search the [docs](https://pytorch-lightning.readthedocs.io/en/latest/).

<!-- If you still can't find what you need: -->

#### What is your question?

#### Code

<!-- Please paste a code snippet if your question requires it! -->

#### What have you tried?

26 changes: 16 additions & 10 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -1,35 +1,41 @@
## What does this PR do?

<!--
IMPORTANT:
We separated bug-fix PRs and feature PRs and they shall land in master and release/1.X-dev accordingly.
By default all PR are targeted to master which is correct for bug-fixes, but need to be change for features.
If you miss it we can still fix it for you, just ping us... :]

Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context.
List any dependencies that are required for this change.

If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
-->

Fixes # (issue)
Fixes # (issue) <- this [links related issue to this PR](https://docs.github.com/en/free-pro-team@latest/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword)

## Before submitting
- [ ] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
- [ ] Did you read the [contributor guideline](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/.github/CONTRIBUTING.md), Pull Request section?
- [ ] Did you make sure your PR does only one thing, instead of bundling different changes together? Otherwise, we ask you to create a separate PR for every change.
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
- [ ] Was this discussed/approved via a GitHub issue? (not for typos and docs)
- [ ] Did you read the [contributor guideline](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/.github/CONTRIBUTING.md), **Pull Request** section?
- [ ] Did you make sure your PR does only one thing, instead of bundling different changes together?
- [ ] Did you make sure to update the documentation with your changes? (if necessary)
- [ ] Did you write any new necessary tests? (not for typos and docs)
- [ ] Did you verify new and existing tests pass locally with your changes?
- [ ] If you made a notable change (that affects users), did you update the [CHANGELOG](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/CHANGELOG.md)?
- [ ] Did you update the [CHANGELOG](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/CHANGELOG.md)? (not for typos, docs, test updates, or internal minor changes/refactorings)

<!-- For CHANGELOG separate each item in the unreleased section by a blank line to reduce collisions -->

## PR review
Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing make sure you have read [Review guidelines](https://github.com/PyTorchLightning/pytorch-lightning/wiki/Review-guidelines). In short, see the following bullet-list:

- [ ] Is this pull request ready for review? (if not, please submit in draft mode)
- [ ] Check that all items from **Before submitting** are resolved
- [ ] Make sure the title is self-explanatory and the description concisely explains the PR
- [ ] Add labels and milestones (and optionally projects) to the PR so it can be classified; _Bugfixes should be including in bug-fix release milestones (m.f.X) and features should be included in (m.X.b) releases._

- [ ] Add labels and milestones (and optionally projects) to the PR so it can be classified
- [ ] **Check that target branch and milestone match!**


## Did you have fun?
Make sure you had fun coding 🙃
12 changes: 0 additions & 12 deletions .github/prepare-nightly_pkg-name.py

This file was deleted.

12 changes: 6 additions & 6 deletions .github/prepare-nightly_version.py
@@ -2,15 +2,15 @@
import os
import re

PATH_ROOT = os.path.abspath(os.path.dirname(os.path.dirname(__file__)))
_PATH_ROOT = os.path.abspath(os.path.dirname(os.path.dirname(__file__)))
_PATH_INIT = os.path.join(_PATH_ROOT, 'pytorch_lightning', '__init__.py')

# get today date
now = datetime.datetime.now()
now_date = now.strftime("%Y%m%d")
PATH_INIT = os.path.join(PATH_ROOT, 'pytorch_lightning', '__init__.py')
print(f"prepare init '{PATH_INIT}' - replace version by {now_date}")
with open(PATH_INIT, 'r') as fp:
print(f"prepare init '{_PATH_INIT}' - replace version by {now_date}")
with open(_PATH_INIT, 'r') as fp:
init = fp.read()
init = re.sub(r'__version__ = [\d\.rc\'"]+', f'__version__ = "{now_date}"', init)
with open(PATH_INIT, 'w') as fp:
init = re.sub(r'__version__ = [\d\.\w\'"]+', f'__version__ = "{now_date}"', init)
with open(_PATH_INIT, 'w') as fp:
fp.write(init)
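
As a side note, the regex above was widened from [\d\.rc\'"]+ to [\d\.\w\'"]+ so that any alphanumeric version suffix is matched, not just digits, dots and "rc". A small illustration of the substitution (the version strings and date below are made up):

import re

# made-up examples; the real script rewrites pytorch_lightning/__init__.py in place
for line in ['__version__ = "1.1.3rc0"', "__version__ = '1.2.0dev1'"]:
    print(re.sub(r'__version__ = [\d\.\w\'"]+', '__version__ = "20210120"', line))
# both lines print as: __version__ = "20210120"
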
17 changes: 12 additions & 5 deletions .github/workflows/ci_dockers.yml
@@ -2,11 +2,21 @@ name: CI build Docker
# https://www.docker.com/blog/first-docker-github-action-is-here
# https://github.com/docker/build-push-action
# see: https://help.github.com/en/actions/reference/events-that-trigger-workflows
on: # Trigger the workflow on push or pull request, but only for the master branch
push:
branches: [master, "release/*"] # include release branches like release/1.0.x
pull_request:
branches: [master, "release/*"]
paths:
- "dockers/**"
- "!dockers/README.md"
- "requirements/*.txt"
- "environment.yml"
- "requirements.txt"
- ".github/workflows/ci_dockers.yml"
- ".github/workflows/events-nightly.yml"
- ".github/workflows/release-docker.yml"
- "setup.py"

jobs:
build-PL:
@@ -55,7 +65,6 @@ jobs:
build-args: |
PYTHON_VERSION=${{ matrix.python_version }}
XLA_VERSION=${{ matrix.xla_version }}
cache-from: pytorchlightning/pytorch_lightning:base-xla-py${{ matrix.python_version }}-torch${{ matrix.xla_version }}
file: dockers/base-xla/Dockerfile
push: false
timeout-minutes: 50
@@ -96,7 +105,6 @@ jobs:
PYTHON_VERSION=${{ matrix.python_version }}
PYTORCH_VERSION=${{ matrix.pytorch_version }}
CUDA_VERSION=${{ steps.extend.outputs.CUDA }}
cache-from: pytorchlightning/pytorch_lightning:base-cuda-py${{ matrix.python_version }}-torch${{ matrix.pytorch_version }}
file: dockers/base-cuda/Dockerfile
push: false
timeout-minutes: 50
@@ -139,7 +147,6 @@ jobs:
PYTORCH_VERSION=${{ matrix.pytorch_version }}
PYTORCH_CHANNEL=${{ steps.extend.outputs.CHANNEL }}
CUDA_VERSION=${{ steps.extend.outputs.CUDA }}
cache-from: pytorchlightning/pytorch_lightning:base-conda-py${{ matrix.python_version }}-torch${{ matrix.pytorch_version }}
file: dockers/base-conda/Dockerfile
push: false
timeout-minutes: 50
28 changes: 17 additions & 11 deletions .github/workflows/ci_pkg-install.yml
@@ -3,7 +3,7 @@ name: Install pkg
# see: https://help.github.com/en/actions/reference/events-that-trigger-workflows
on: # Trigger the workflow on push or pull request, but only for the master branch
push:
branches: [master, "release/*"] # include release branches like release/1.0.x
branches: [master, "release/*"]
pull_request:
branches: [master, "release/*"]

@@ -27,13 +27,13 @@ jobs:

- name: Prepare env
run: |
pip install check-manifest "twine==1.13.0"
pip install check-manifest "twine==3.2" setuptools wheel

- name: Create package
run: |
check-manifest
# python setup.py check --metadata --strict
python setup.py sdist
python setup.py sdist bdist_wheel

- name: Check package
run: |
@@ -46,12 +46,18 @@ jobs:
# this is just a hotfix because of Win cannot install it directly
pip install -r requirements.txt --find-links https://download.pytorch.org/whl/cpu/torch_stable.html

- name: Install package
- name: Install | Uninstall package - archive
run: |
# install as archive
pip install dist/*.tar.gz
cd ..
python -c "import pytorch_lightning as pl ; print(pl.__version__)"
pip uninstall -y pytorch-lightning

- name: Install | Uninstall package - wheel
run: |
# pip install virtualenv
# virtualenv vEnv --system-site-packages
# source vEnv/bin/activate
pip install dist/*
cd .. & python -c "import pytorch_lightning as pl ; print(pl.__version__)"
# deactivate
# rm -rf vEnv
# install as wheel
pip install dist/*.whl
cd ..
python -c "import pytorch_lightning as pl ; print(pl.__version__)"
pip uninstall -y pytorch-lightning
4 changes: 2 additions & 2 deletions .github/workflows/ci_test-base.yml
@@ -1,9 +1,9 @@
name: CI base testing
name: CI basic testing

# see: https://help.github.com/en/actions/reference/events-that-trigger-workflows
on: # Trigger the workflow on push or pull request, but only for the master branch
push:
branches: [master, "release/*"] # include release branches like release/1.0.x
branches: [master, "release/*"]
pull_request:
branches: [master, "release/*"]

17 changes: 14 additions & 3 deletions .github/workflows/ci_test-conda.yml
@@ -3,7 +3,7 @@ name: PyTorch & Conda
# see: https://help.github.com/en/actions/reference/events-that-trigger-workflows
on: # Trigger the workflow on push or pull request, but only for the master branch
push:
branches: [master, "release/*"] # include release branches like release/1.0.x
branches: [master, "release/*"]
pull_request:
branches: [master, "release/*"]

@@ -34,10 +34,21 @@ jobs:
# todo this probably does not work with docker images, rather cache dockers
uses: actions/cache@v2
with:
path: Datasets # This path is specific to Ubuntu
# Look to see if there is a cache hit for the corresponding requirements file
path: Datasets
key: pl-dataset

- name: Pull checkpoints from S3
# todo: consider adding coma caching, but ATM all models have less than 100KB
run: |
# todo: remove unzip install after a new nightly docker is created
apt-get update -qq
apt-get install -y --no-install-recommends unzip
# enter legacy and update checkpoints from S3
cd legacy
curl https://pl-public-data.s3.amazonaws.com/legacy/checkpoints.zip --output checkpoints.zip
unzip -o checkpoints.zip
ls -l checkpoints/

- name: Tests
run: |
# NOTE: run coverage on tests does not propagate failure status for Win, https://github.com/nedbat/coveragepy/issues/1003