[WIP][DO NOT MERGE] Update version of torch, lightning, transformers in multimodal #2479

Closed

2 changes: 1 addition & 1 deletion .github/workflow_scripts/env_setup.sh

@@ -21,7 +21,7 @@ function setup_mxnet_gpu {
 }
 
 function setup_torch_gpu {
-    python3 -m pip install torch==1.12.0+cu113 torchvision==0.13.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
+    python3 -m pip install torch==1.13.0+cu117 torchvision==0.14.0+cu117 --extra-index-url https://download.pytorch.org/whl/cu117
Contributor:
Please update to 1.13.1+ (see my other comment)

Collaborator (Author):
Thanks! Will do it later.
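In suggested-change form, the requested bump might look like this (assuming the cu117 index publishes torch 1.13.1 alongside its matching torchvision 0.14.1, which is the pairing PyTorch documents for that release):

-    python3 -m pip install torch==1.13.0+cu117 torchvision==0.14.0+cu117 --extra-index-url https://download.pytorch.org/whl/cu117
+    python3 -m pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117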

 }
 
 function install_common {
15 changes: 8 additions & 7 deletions multimodal/setup.py

@@ -32,24 +32,25 @@
     "seqeval<=1.2.2",
     "evaluate<=0.3.0",
     "accelerate>=0.9,<0.14",
+    "tensorboard<2.12.0",
     "timm<0.7.0",
-    "torch>=1.9,<1.13",
-    "torchvision<0.14.0",
-    "torchtext<0.14.0",
+    "torch>=1.9,<1.14",
Contributor:
You'll need to update tabular's fastai extra_dependencies key since it also requires torch. Maybe we move torch to the _setup_utils.py file so we ensure consistent versions across the package?
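A minimal sketch of the consolidation floated here, assuming the torch pins move into the shared _setup_utils.py helper; the constant names and exact ranges below are illustrative, not the repo's actual API:

# _setup_utils.py -- hypothetical single source of truth for torch pins
TORCH_VERSION_RANGE = ">=1.13.1,<1.14"
TORCHVISION_VERSION_RANGE = ">=0.14.1,<0.15"

# multimodal/setup.py and tabular's fastai extra_dependencies would then
# both build their requirement strings from the shared constants:
install_requires = [
    f"torch{TORCH_VERSION_RANGE}",
    f"torchvision{TORCHVISION_VERSION_RANGE}",
]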

Contributor:
Security concern: 1.12.x has critical RCE CVEs open against it, and PyTorch is not planning to fix them in 1.12.x. I suggest raising the floor to >=1.13.1.
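In suggested-change form, the proposed floor would be:

-    "torch>=1.9,<1.14",
+    "torch>=1.13.1,<1.14",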

"torchvision<0.15.0",
"torchtext<0.15.0",
"fairscale>=0.4.5,<=0.4.6",
"scikit-image>=0.19.1,<0.20.0",
"smart_open>=5.2.1,<5.3.0",
"pytorch_lightning>=1.7.4,<1.8.0",
"pytorch_lightning>=1.7.4,<1.9.0",
"text-unidecode<=1.3",
"torchmetrics>=0.8.0,<0.9.0",
"transformers>=4.23.0,<4.24.0",
"torchmetrics>=0.8.0,<0.11.0",
"transformers>=4.23.0,<4.26.0",
"nptyping>=1.4.4,<1.5.0",
"omegaconf>=2.1.1,<2.2.0",
"sentencepiece>=0.1.95,<0.2.0",
f"autogluon.core[raytune]=={version}",
f"autogluon.features=={version}",
f"autogluon.common=={version}",
"pytorch-metric-learning>=1.3.0,<1.4.0",
"pytorch-metric-learning>=1.3.0,<1.7.0",
"nlpaug>=1.1.10,<=1.1.10",
"nltk>=3.4.5,<4.0.0",
"openmim>0.1.5,<=0.2.1",
Expand Down
3 changes: 2 additions & 1 deletion multimodal/src/autogluon/multimodal/data/templates.py

@@ -640,7 +640,8 @@ def read_from_file(self) -> Dict:
         "Please ignore this warning if you are creating new prompts for this dataset."
     )
     return {}
-    yaml_dict = yaml.safe_load(open(self.yaml_path, "r"))
+    with open(self.yaml_path, "r") as f:
+        yaml_dict = yaml.safe_load(f)
     return yaml_dict[self.TEMPLATES_KEY]
 
 def write_to_file(self) -> None:
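Worth noting on this change: the with-block closes the file deterministically even if yaml.safe_load raises, whereas the replaced one-liner left the handle open until garbage collection.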
5 changes: 3 additions & 2 deletions multimodal/src/autogluon/multimodal/predictor.py

@@ -892,8 +892,9 @@ def _hyperparameter_tune(self, hyperparameter_tune_kwargs, resources, **_fit_args
 )
 
 ray_tune_adapter = AutommRayTuneAdapter()
-if try_import_ray_lightning():
-    ray_tune_adapter = AutommRayTuneLightningAdapter()
+# Do not use ray lightning.
Contributor:
Add context
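One way the comment could carry that context (the stated reason below is an assumption inferred from this PR's pytorch_lightning bump, not confirmed in the thread):

-# Do not use ray lightning.
+# Do not use ray lightning: ray_lightning has not kept pace with the
+# pytorch_lightning 1.8.x line this PR allows (assumption -- verify against
+# ray_lightning's supported-versions matrix before re-enabling).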

+# if try_import_ray_lightning():
+#     ray_tune_adapter = AutommRayTuneLightningAdapter()
 search_space = _fit_args.get("hyperparameters", dict())
 metric = "val_" + _fit_args.get("validation_metric_name")
 mode = _fit_args.get("minmax_mode")
9 changes: 8 additions & 1 deletion multimodal/src/autogluon/multimodal/utils/checkpoint.py

@@ -2,6 +2,7 @@
 import os
 import re
 import shutil
+from pathlib import Path
 from typing import Any, Dict, List, Optional, Tuple, Union
 
 import pytorch_lightning as pl
@@ -10,13 +11,19 @@
 from pytorch_lightning.utilities.cloud_io import atomic_save, get_filesystem
 from pytorch_lightning.utilities.cloud_io import load as pl_load
 from pytorch_lightning.utilities.rank_zero import rank_zero_warn
-from pytorch_lightning.utilities.types import _METRIC, _PATH
+from torch import Tensor
+from torchmetrics import Metric
 
 from ..constants import AUTOMM, DEEPSPEED_STRATEGY
 
 logger = logging.getLogger(AUTOMM)
 
 
+_PATH = Union[str, Path]
+_NUMBER = Union[int, float]
+_METRIC = Union[Metric, Tensor, _NUMBER]
+
+
 def average_checkpoints(
     checkpoint_paths: List[str],
 ):
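For context on the alias block above (inferred from the diff itself, not stated in the thread): the private _METRIC and _PATH aliases are evidently no longer importable from pytorch_lightning.utilities.types under the lightning versions this PR allows, so the module reconstructs equivalent aliases locally from pathlib.Path, torch.Tensor, and torchmetrics.Metric.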