
Deprecate num_processes, gpus, tpu_cores, and ipus from the Trainer constructor #11040

Merged
merged 37 commits into from
Apr 10, 2022
Changes from 1 commit
212a799
Deprecate num_processes, gpus, tpu_cores, and ipus from the Trainer c…
daniellepintz Dec 11, 2021
5368b14
fix accel_con tests
daniellepintz Dec 15, 2021
5124e3e
Merge branch 'master' of https://github.com/PyTorchLightning/pytorch-…
daniellepintz Dec 22, 2021
841aed7
change removal to 2.0
daniellepintz Dec 22, 2021
ad02054
fix
daniellepintz Dec 22, 2021
ca0b6a2
Merge branch 'master' of https://github.com/PyTorchLightning/pytorch-…
daniellepintz Jan 20, 2022
151da46
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jan 20, 2022
3186c2c
Merge branch 'master' of https://github.com/PyTorchLightning/pytorch-…
daniellepintz Feb 4, 2022
e87b79d
Merge branch 'accel_con' of github.com:daniellepintz/pytorch-lightnin…
daniellepintz Feb 4, 2022
315b5e9
fix tpu test
daniellepintz Feb 10, 2022
f6819c3
Merge branch 'master' of https://github.com/PyTorchLightning/pytorch-…
daniellepintz Feb 10, 2022
71e8011
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 10, 2022
6cb44d6
Merge branch 'master' of https://github.com/PyTorchLightning/pytorch-…
daniellepintz Mar 2, 2022
87accbd
Merge branch 'accel_con' of github.com:daniellepintz/pytorch-lightnin…
daniellepintz Mar 2, 2022
5970989
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Mar 2, 2022
33b959a
fix gpu test
daniellepintz Mar 2, 2022
4ecc5b0
Merge branch 'master' of https://github.com/PyTorchLightning/pytorch-…
daniellepintz Mar 26, 2022
a6ab1a3
Merge branch 'master' of https://github.com/PyTorchLightning/pytorch-…
daniellepintz Mar 27, 2022
38fc4d6
update final tests
daniellepintz Mar 27, 2022
cf1e981
Merge branch 'master' of https://github.com/PyTorchLightning/pytorch-…
daniellepintz Mar 28, 2022
45fbee6
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Mar 28, 2022
d792bf2
fix more tests
daniellepintz Mar 28, 2022
369152f
Merge branch 'accel_con' of github.com:daniellepintz/pytorch-lightnin…
daniellepintz Mar 28, 2022
1a07d7b
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Mar 28, 2022
f5db9f0
update dep strings
daniellepintz Mar 28, 2022
97fcdc3
update test_combined_data_loader_validation_test
daniellepintz Mar 28, 2022
54d3f15
mock ipus and tpus
daniellepintz Mar 28, 2022
87761f0
Merge branch 'master' of https://github.com/PyTorchLightning/pytorch-…
daniellepintz Mar 29, 2022
69c1756
update to 1.7 and fix test
daniellepintz Mar 29, 2022
99af06c
Merge branch 'master' of https://github.com/PyTorchLightning/pytorch-…
daniellepintz Mar 29, 2022
97ecfa8
fix tests
daniellepintz Mar 29, 2022
5f93ffc
fix test_v2_0_0_deprecated_tpu_cores
daniellepintz Mar 29, 2022
1d477f0
fix test
daniellepintz Mar 29, 2022
b8e00c5
Merge branch 'master' into accel_con
awaelchli Apr 3, 2022
6763a46
add missing deprecation messages in docs
awaelchli Apr 3, 2022
7dc4b50
update usage in nemo examples
awaelchli Apr 3, 2022
45b85b9
merge master
awaelchli Apr 10, 2022
3 changes: 3 additions & 0 deletions CHANGELOG.md
@@ -137,6 +137,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Deprecated `ModelIO.on_hpc_{save/load}` in favor of `CheckpointHooks.on_{save/load}_checkpoint` ([#10911](https://github.com/PyTorchLightning/pytorch-lightning/pull/10911))


- Deprecated `num_processes`, `gpus`, `tpu_cores`, and `ipus` from the `Trainer` constructor in favor of using the `accelerator` and `devices` arguments ([#11040](https://github.com/PyTorchLightning/pytorch-lightning/pull/11040))


### Removed

- Removed deprecated parameter `method` in `pytorch_lightning.utilities.model_helpers.is_overridden` ([#10507](https://github.com/PyTorchLightning/pytorch-lightning/pull/10507))
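The CHANGELOG entry above pairs each deprecated flag with an `accelerator` value. As an illustrative sketch only (the helper `migrate_trainer_kwargs` is hypothetical and not part of this PR or of Lightning), the migration for user code amounts to this rewrite:

```python
# Hypothetical helper (not part of this PR): rewrites the deprecated
# device-count flags into the equivalent `accelerator`/`devices` arguments.
_FLAG_TO_ACCELERATOR = {
    "num_processes": "cpu",
    "gpus": "gpu",
    "tpu_cores": "tpu",
    "ipus": "ipu",
}


def migrate_trainer_kwargs(kwargs: dict) -> dict:
    """Return a copy of `kwargs` with deprecated flags rewritten."""
    out = dict(kwargs)
    for flag, accelerator in _FLAG_TO_ACCELERATOR.items():
        value = out.pop(flag, None)
        if value:  # mirrors the truthiness check used in the PR
            out["accelerator"] = accelerator
            out["devices"] = value
    return out


# e.g. Trainer(gpus=2) becomes Trainer(accelerator="gpu", devices=2)
print(migrate_trainer_kwargs({"gpus": 2, "max_epochs": 3}))
```

Note that, like the deprecation checks in the PR, this treats falsy values (`0`, `None`, empty list) as "flag not set".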
20 changes: 20 additions & 0 deletions pytorch_lightning/trainer/connectors/accelerator_connector.py
@@ -116,12 +116,32 @@ def __init__(

        self._init_deterministic(deterministic)

        if num_processes:
            rank_zero_deprecation(
                f"Setting `Trainer(num_processes={num_processes})` is deprecated in v1.6 and will be removed"
                f" in v1.8. Please use `Trainer(accelerator='cpu', devices={num_processes})` instead."
            )
        self.num_processes = num_processes
        self.devices = devices
        # `gpus` is the input passed to the Trainer, whereas `gpu_ids` is a list of parsed gpu ids.
        if gpus:
            rank_zero_deprecation(
                f"Setting `Trainer(gpus={gpus})` is deprecated in v1.6 and will be removed"
                f" in v1.8. Please use `Trainer(accelerator='gpu', devices={gpus})` instead."
            )
        self.gpus = gpus
        self.parallel_device_ids = gpu_ids
        if tpu_cores:
            rank_zero_deprecation(
                f"Setting `Trainer(tpu_cores={tpu_cores})` is deprecated in v1.6 and will be removed"
                f" in v1.8. Please use `Trainer(accelerator='tpu', devices={tpu_cores})` instead."
            )
        self.tpu_cores = tpu_cores
        if ipus:
            rank_zero_deprecation(
                f"Setting `Trainer(ipus={ipus})` is deprecated in v1.6 and will be removed"
                f" in v1.8. Please use `Trainer(accelerator='ipu', devices={ipus})` instead."
            )
        self.ipus = ipus
        self.num_nodes = num_nodes
        self.sync_batchnorm = sync_batchnorm
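The `rank_zero_deprecation` calls above ultimately surface as Python `DeprecationWarning`s. A minimal, self-contained stand-in for the pattern (assumption: only the standard library; the real Lightning helper additionally restricts emission to global rank zero in distributed runs, which is omitted here) behaves like this:

```python
import warnings


def rank_zero_deprecation_stub(message: str) -> None:
    # Simplified stand-in for Lightning's helper: just emits a DeprecationWarning.
    # The real implementation also only warns on global rank zero.
    warnings.warn(message, category=DeprecationWarning, stacklevel=2)


def check_deprecated_flags(num_processes=None, gpus=None, tpu_cores=None, ipus=None):
    # Mirrors the structure of the hunk above: one warning per truthy flag,
    # each pointing at the accelerator/devices replacement.
    for flag, accelerator, value in [
        ("num_processes", "cpu", num_processes),
        ("gpus", "gpu", gpus),
        ("tpu_cores", "tpu", tpu_cores),
        ("ipus", "ipu", ipus),
    ]:
        if value:
            rank_zero_deprecation_stub(
                f"Setting `Trainer({flag}={value})` is deprecated in v1.6 and will be removed"
                f" in v1.8. Please use `Trainer(accelerator='{accelerator}', devices={value})` instead."
            )


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    check_deprecated_flags(gpus=2)

print(str(caught[0].message))
```

Note that both fragments of the implicitly concatenated message need the `f` prefix; a bare second fragment would render `{value}` literally instead of interpolating it.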
24 changes: 20 additions & 4 deletions pytorch_lightning/trainer/trainer.py
@@ -136,12 +136,12 @@ def __init__(
        gradient_clip_algorithm: Optional[str] = None,
        process_position: int = 0,
        num_nodes: int = 1,
-       num_processes: int = 1,
+       num_processes: int = 1,  # TODO: Remove in 1.8
        devices: Optional[Union[List[int], str, int]] = None,
-       gpus: Optional[Union[List[int], str, int]] = None,
+       gpus: Optional[Union[List[int], str, int]] = None,  # TODO: Remove in 1.8
        auto_select_gpus: bool = False,
-       tpu_cores: Optional[Union[List[int], str, int]] = None,
-       ipus: Optional[int] = None,
+       tpu_cores: Optional[Union[List[int], str, int]] = None,  # TODO: Remove in 1.8
+       ipus: Optional[int] = None,  # TODO: Remove in 1.8
        log_gpu_memory: Optional[str] = None,  # TODO: Remove in 1.7
        progress_bar_refresh_rate: Optional[int] = None,  # TODO: remove in v1.7
        enable_progress_bar: bool = True,
@@ -262,6 +262,10 @@ def __init__(

            gpus: Number of GPUs to train on (int) or which GPUs to train on (list or str) applied per node

                .. deprecated:: v1.6
                    ``gpus`` has been deprecated in v1.6 and will be removed in v1.8.
                    Please use ``accelerator='gpu'`` and ``devices=x`` instead.

            gradient_clip_val: The value at which to clip gradients. Passing ``gradient_clip_val=None`` disables
                gradient clipping. If using Automatic Mixed Precision (AMP), the gradients will be unscaled before.

@@ -349,6 +353,10 @@ def __init__(

            num_processes: Number of processes for distributed training with ``accelerator="cpu"``.

                .. deprecated:: v1.6
                    ``num_processes`` has been deprecated in v1.6 and will be removed in v1.8.
                    Please use ``accelerator='cpu'`` and ``devices=x`` instead.

            num_sanity_val_steps: Sanity check runs n validation batches before starting the training routine.
                Set it to `-1` to run all batches in all validation dataloaders.

@@ -383,8 +391,16 @@ def __init__(

            tpu_cores: How many TPU cores to train on (1 or 8) / Single TPU to train on [1]

                .. deprecated:: v1.6
                    ``tpu_cores`` has been deprecated in v1.6 and will be removed in v1.8.
                    Please use ``accelerator='tpu'`` and ``devices=x`` instead.

            ipus: How many IPUs to train on.

                .. deprecated:: v1.6
                    ``ipus`` has been deprecated in v1.6 and will be removed in v1.8.
                    Please use ``accelerator='ipu'`` and ``devices=x`` instead.

            track_grad_norm: -1 no tracking. Otherwise tracks that p-norm. May be set to 'inf' infinity-norm. If using
                Automatic Mixed Precision (AMP), the gradients will be unscaled before logging them.

24 changes: 24 additions & 0 deletions tests/deprecated_api/test_remove_1-8.py
@@ -21,6 +21,7 @@
from pytorch_lightning.utilities.enums import DeviceType, DistributedType
from pytorch_lightning.utilities.imports import _TORCHTEXT_LEGACY
from tests.helpers.boring_model import BoringModel
from tests.helpers.runif import RunIf
from tests.helpers.torchtext_utils import get_dummy_torchtext_data_iterator


@@ -106,3 +107,26 @@ def on_hpc_load(self):
        match=r"Method `LightningModule.on_hpc_load` is deprecated in v1.6 and will be removed in v1.8."
    ):
        trainer.fit(load_model)


def test_v1_8_0_deprecated_num_processes(tmpdir):
    with pytest.deprecated_call(match=r"is deprecated in v1.6 and will be removed in v1.8."):
        _ = Trainer(default_root_dir=tmpdir, num_processes=2)


@RunIf(gpu=True)
def test_v1_8_0_deprecated_gpus(tmpdir):
    with pytest.deprecated_call(match=r"is deprecated in v1.6 and will be removed in v1.8."):
        _ = Trainer(default_root_dir=tmpdir, gpus=2)


@RunIf(tpu=True)
def test_v1_8_0_deprecated_tpu_cores(tmpdir):
    with pytest.deprecated_call(match=r"is deprecated in v1.6 and will be removed in v1.8."):
        _ = Trainer(default_root_dir=tmpdir, tpu_cores=1)


@RunIf(ipu=True)
def test_v1_8_0_deprecated_ipus(tmpdir):
    with pytest.deprecated_call(match=r"is deprecated in v1.6 and will be removed in v1.8."):
        _ = Trainer(default_root_dir=tmpdir, ipus=2)
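The tests above rely on `pytest.deprecated_call`, a context manager that fails unless a `DeprecationWarning` or `PendingDeprecationWarning` matching the pattern is emitted inside the block. The same check can be sketched with only the standard library (illustrative; `emit_deprecation` is a hypothetical stand-in for constructing a deprecated `Trainer`, and `assert_deprecated_call` is a simplified re-implementation, not pytest's actual code):

```python
import re
import warnings


def emit_deprecation() -> None:
    # Hypothetical stand-in for `Trainer(gpus=2)`: just emits the warning.
    warnings.warn(
        "Setting `Trainer(gpus=2)` is deprecated in v1.6 and will be removed in v1.8.",
        DeprecationWarning,
    )


def assert_deprecated_call(func, match: str) -> None:
    # Minimal sketch of the `pytest.deprecated_call(match=...)` idea:
    # record all warnings raised by `func`, then require at least one
    # (Pending)DeprecationWarning whose message matches the regex.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        func()
    messages = [
        str(w.message)
        for w in caught
        if issubclass(w.category, (DeprecationWarning, PendingDeprecationWarning))
    ]
    assert any(re.search(match, m) for m in messages), f"no deprecation warning matching {match!r}"


assert_deprecated_call(emit_deprecation, match=r"is deprecated in v1.6 and will be removed in v1.8.")
```

The `@RunIf(...)` decorators in the real tests simply skip each case when the required accelerator hardware is unavailable, so the warning assertion only runs where the deprecated flag is actually usable.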