
auto_select_gpus cannot handle being passed accelerator="gpu", devices=devices #12590

Closed
daniellepintz opened this issue Apr 3, 2022 · 1 comment
Comments


daniellepintz commented Apr 3, 2022

🐛 Bug

I am trying to update test_trainer_with_gpus_options_combination_at_available_gpus_env in #12589 in preparation for #11040, but it is failing with the following stack trace:

tests/trainer/properties/test_auto_gpu_select.py:41:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
pytorch_lightning/utilities/argparse.py:339: in insert_env_defaults
    return fn(self, **kwargs)
pytorch_lightning/trainer/trainer.py:486: in __init__
    self._accelerator_connector = AcceleratorConnector(
pytorch_lightning/trainer/connectors/accelerator_connector.py:194: in __init__
    self._set_parallel_devices_and_init_accelerator()
pytorch_lightning/trainer/connectors/accelerator_connector.py:512: in _set_parallel_devices_and_init_accelerator
    self._parallel_devices = self.accelerator.get_parallel_devices(self._devices_flag)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

devices = None

    @staticmethod
    def get_parallel_devices(devices: List[int]) -> List[torch.device]:
        """Gets parallel devices for the Accelerator."""
>       return [torch.device("cuda", i) for i in devices]
E       TypeError: 'NoneType' object is not iterable

pytorch_lightning/accelerators/gpu.py:82: TypeError
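
For reference, a minimal sketch of the kind of Trainer call the updated test makes (hypothetical values; the exact parameters are in #12589). With `auto_select_gpus=True`, the accelerator connector appears to end up calling `GPUAccelerator.get_parallel_devices(None)` (note `devices = None` in the traceback above), which raises the `TypeError`:

```python
from pytorch_lightning import Trainer

# Hypothetical reproduction of the failing combination: accelerator="gpu"
# together with auto_select_gpus=True. The device indices are supposed to be
# auto-selected, but get_parallel_devices() receives None and crashes.
trainer = Trainer(accelerator="gpu", devices=1, auto_select_gpus=True)
```
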

cc @justusschock @kaushikb11 @awaelchli @akihironitta @rohitgr7

@daniellepintz daniellepintz added the needs triage Waiting to be triaged by maintainers label Apr 3, 2022
@kaushikb11 kaushikb11 self-assigned this Apr 3, 2022
@kaushikb11 kaushikb11 added the accelerator: cuda Compute Unified Device Architecture GPU label Apr 3, 2022
@akihironitta akihironitta removed the needs triage Waiting to be triaged by maintainers label Apr 4, 2022
@awaelchli awaelchli added this to the 1.6.x milestone Apr 11, 2022
@awaelchli awaelchli added the bug Something isn't working label Apr 11, 2022
@awaelchli
Member

Was fixed in #12608, just not properly linked.
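
For context, a hedged sketch of the kind of `None` guard that would resolve the `TypeError` shown in the traceback (not necessarily the actual patch in #12608; the class below is an illustrative stand-in for `pytorch_lightning.accelerators.gpu.GPUAccelerator`):

```python
from typing import List, Optional, Union

import torch


class GPUAcceleratorSketch:
    """Illustrative stand-in; not the real GPUAccelerator implementation."""

    @staticmethod
    def get_parallel_devices(
        devices: Optional[Union[int, List[int]]]
    ) -> List[torch.device]:
        """Return CUDA devices, tolerating devices=None instead of iterating over it."""
        if devices is None:
            # With auto_select_gpus the devices flag may still be unset at this
            # point; returning an empty list lets the caller fall back to
            # auto-selection instead of raising TypeError.
            return []
        if isinstance(devices, int):
            # Interpret an int as "the first N devices".
            devices = list(range(devices))
        return [torch.device("cuda", i) for i in devices]
```
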
