Fix data: imports #1211

Merged: 21 commits, May 16, 2021

Changes from all commits
6 changes: 4 additions & 2 deletions CHANGELOG.md

@@ -8,11 +8,12 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

### Added

--
+- Reinforcement learning tutorials ([#1205](https://github.com/catalyst-team/catalyst/pull/1205))
+- customization demo ([#1207](https://github.com/catalyst-team/catalyst/pull/1207))

### Changed

--
+- tests moved to `tests` folder ([#1208](https://github.com/catalyst-team/catalyst/pull/1208))

### Removed

@@ -21,6 +22,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
### Fixed

- customizing what happens in `train()` notebook ([#1203](https://github.com/catalyst-team/catalyst/pull/1203))
+- transforms imports under catalyst.data ([#1211](https://github.com/catalyst-team/catalyst/pull/1211))


## [21.04.2] - 2021-04-30
28 changes: 14 additions & 14 deletions README.md

@@ -64,7 +64,7 @@ import os
from torch import nn, optim
from torch.utils.data import DataLoader
from catalyst import dl, utils
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
@@ -206,7 +206,7 @@ from torch import nn, optim
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
@@ -241,7 +241,7 @@ class CustomRunner(dl.Runner):
logits = self.model(x)
# compute the loss
loss = F.cross_entropy(logits, y)
-# compute other metrics of interest
+# compute the metrics
accuracy01, accuracy03 = metrics.accuracy(logits, y, topk=(1, 3))
# log metrics
self.batch_metrics.update(
@@ -611,7 +611,7 @@ import os
from torch import nn, optim
from torch.utils.data import DataLoader
from catalyst import dl
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
@@ -668,7 +668,7 @@ import torch
from torch import nn
from torch.utils.data import DataLoader
from catalyst import dl
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST
from catalyst.contrib.nn import IoULoss

@@ -733,7 +733,7 @@ from torch import nn, optim
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST

# [!] teacher model should be already pretrained
@@ -895,7 +895,7 @@ from torch.utils.data import DataLoader
from catalyst import dl
from catalyst.contrib.datasets import MNIST
from catalyst.contrib.nn.modules import Flatten, GlobalMaxPool2d, Lambda
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor

latent_dim = 128
generator = nn.Sequential(
@@ -1038,7 +1038,7 @@ from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics
from catalyst.contrib.datasets import MNIST
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor

LOG_SCALE_MAX = 2
LOG_SCALE_MIN = -10
@@ -1167,7 +1167,7 @@ from torch import nn, optim
from torch.utils.data import DataLoader
from catalyst import dl, utils
from catalyst.contrib.datasets import MNIST
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor


class CustomRunner(dl.IRunner):
@@ -1277,7 +1277,7 @@ from torch import nn, optim
from torch.utils.data import DataLoader
from catalyst import dl, utils
from catalyst.contrib.datasets import MNIST
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor


class CustomRunner(dl.IRunner):
@@ -1393,7 +1393,7 @@ import torch
from torch import nn
from torch.utils.data import DataLoader
from catalyst import dl
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST


@@ -1491,9 +1491,9 @@ best practices for your deep learning research and development.
- [20.04](https://catalyst-team.github.io/catalyst/v20.04/index.html), [20.04.1](https://catalyst-team.github.io/catalyst/v20.04.1/index.html), [20.04.2](https://catalyst-team.github.io/catalyst/v20.04.2/index.html)

### Notebooks
-- Introduction tutorial "[Customizing what happens in `train`](./examples/notebooks/customizing_what_happens_in_train.ipynb)" [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/customizing_what_happens_in_train.ipynb)
-- Demo with [customization examples](./examples/notebooks/customization_tutorial.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/customization_tutorial.ipynb)
-- [Reinforcement Learning with Catalyst](./examples/notebooks/reinforcement_learning.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/reinforcement_learning.ipynb)
+- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/customizing_what_happens_in_train.ipynb) Introduction tutorial "[Customizing what happens in `train`](./examples/notebooks/customizing_what_happens_in_train.ipynb)"
+- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/customization_tutorial.ipynb) Demo with [customization examples](./examples/notebooks/customization_tutorial.ipynb)
+- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/reinforcement_learning.ipynb) [Reinforcement Learning with Catalyst](./examples/notebooks/reinforcement_learning.ipynb)

### Notable Blog Posts
- [Catalyst 2021–Accelerated PyTorch 2.0](https://medium.com/catalyst-team/catalyst-2021-accelerated-pytorch-2-0-850e9b575cb6?source=friends_link&sk=865d3c472cfb10379864656fedcfe762)
6 changes: 3 additions & 3 deletions catalyst/callbacks/metrics/segmentation.py

@@ -28,7 +28,7 @@ class IOUCallback(BatchMetricCallback):
from torch import nn
from torch.utils.data import DataLoader
from catalyst import dl
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST
from catalyst.contrib.nn import IoULoss

@@ -138,7 +138,7 @@ class DiceCallback(BatchMetricCallback):
from torch import nn
from torch.utils.data import DataLoader
from catalyst import dl
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST
from catalyst.contrib.nn import IoULoss

@@ -252,7 +252,7 @@ class TrevskyCallback(BatchMetricCallback):
from torch import nn
from torch.utils.data import DataLoader
from catalyst import dl
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST
from catalyst.contrib.nn import IoULoss

2 changes: 1 addition & 1 deletion catalyst/callbacks/onnx.py

@@ -37,7 +37,7 @@ class OnnxCallback(Callback):
from torch.utils.data import DataLoader

from catalyst import dl
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST
from catalyst.contrib.nn.modules import Flatten

2 changes: 1 addition & 1 deletion catalyst/callbacks/quantization.py

@@ -33,7 +33,7 @@ class QuantizationCallback(Callback):
from torch.utils.data import DataLoader

from catalyst import dl
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST
from catalyst.contrib.nn.modules import Flatten

4 changes: 2 additions & 2 deletions catalyst/callbacks/tracing.py

@@ -32,7 +32,7 @@ class TracingCallback(Callback):
from torch.utils.data import DataLoader

from catalyst import dl
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST
from catalyst.contrib.nn.modules import Flatten

@@ -94,7 +94,7 @@ def __init__(
from torch.utils.data import DataLoader

from catalyst import dl
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST
from catalyst.contrib.nn.modules import Flatten

2 changes: 1 addition & 1 deletion catalyst/core/runner.py

@@ -86,7 +86,7 @@ class IRunner(ICallback, ILogger, ABC):
from torch.utils.data import DataLoader
from catalyst import dl, utils
from catalyst.contrib.datasets import MNIST
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor


class CustomRunner(dl.IRunner):
2 changes: 2 additions & 0 deletions catalyst/data/__init__.py

@@ -30,4 +30,6 @@
HardClusterSampler,
)

+from catalyst.data.transforms import Compose, Normalize, ToTensor, to_tensor, normalize
+
from catalyst.contrib.data import *
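The re-export above is what lets every example in this PR shorten `from catalyst.data.transforms import ToTensor` to `from catalyst.data import ToTensor`. For readers unfamiliar with these names: they follow torchvision-style semantics, where `Compose` chains transforms end to end. The sketch below is a hypothetical stand-in for that behavior, not Catalyst's actual implementation:

```python
# Hypothetical sketch of Compose-style transform chaining, mirroring the
# torchvision-like semantics of the `Compose` re-exported from catalyst.data.
# Not the actual catalyst source.
class Compose:
    """Applies each transform in order; each one receives the previous output."""

    def __init__(self, transforms):
        self.transforms = list(transforms)

    def __call__(self, data):
        for transform in self.transforms:
            data = transform(data)
        return data


# With the re-export in catalyst/data/__init__.py, user code can write
# `from catalyst.data import Compose, ToTensor` instead of reaching into
# the `catalyst.data.transforms` submodule.
pipeline = Compose([lambda x: x + 1, lambda x: x * 2])
print(pipeline(3))  # prints 8, i.e. (3 + 1) * 2
```

The design point of the PR is simply that `catalyst.data` becomes the stable public entry point, while `catalyst.data.transforms` stays an implementation detail.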
2 changes: 1 addition & 1 deletion catalyst/metrics/_additive.py

@@ -35,7 +35,7 @@ class AdditiveValueMetric(IMetric):
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
6 changes: 3 additions & 3 deletions catalyst/metrics/_segmentation.py

@@ -236,7 +236,7 @@ class IOUMetric(RegionBasedMetric):
from torch import nn
from torch.utils.data import DataLoader
from catalyst import dl
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST
from catalyst.contrib.nn import IoULoss

@@ -365,7 +365,7 @@ class DiceMetric(RegionBasedMetric):
from torch import nn
from torch.utils.data import DataLoader
from catalyst import dl
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST
from catalyst.contrib.nn import IoULoss

@@ -498,7 +498,7 @@ class TrevskyMetric(RegionBasedMetric):
from torch import nn
from torch.utils.data import DataLoader
from catalyst import dl
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST
from catalyst.contrib.nn import IoULoss

10 changes: 5 additions & 5 deletions catalyst/runners/runner.py

@@ -84,7 +84,7 @@ class Runner(IRunner):
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
@@ -407,7 +407,7 @@ def train(
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
@@ -577,7 +577,7 @@ def predict_loader(
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
@@ -673,7 +673,7 @@ def on_loader_end(self, runner):
def evaluate_loader(
self,
loader: DataLoader,
-callbacks: "Union[List[Callback], OrderedDict[str, Callback]]",
+callbacks: "Union[List[Callback], OrderedDict[str, Callback]]" = None,
model: Optional[Model] = None,
seed: int = 42,
verbose: bool = False,
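The hunk above also makes `callbacks` optional in `evaluate_loader`. A function accepting `None`, a list, or an ordered mapping typically normalizes the argument up front; the sketch below shows one plausible way to do that (the function name and key scheme are illustrative, not Catalyst's actual code):

```python
from collections import OrderedDict
from typing import List, Optional, Union


# Hypothetical sketch: normalizing an optional `callbacks` argument of type
# Union[None, List, OrderedDict], as the new `= None` default above permits.
# `normalize_callbacks` and the "callback_<i>" keys are illustrative names.
def normalize_callbacks(
    callbacks: Optional[Union[List[object], "OrderedDict[str, object]"]] = None,
) -> "OrderedDict[str, object]":
    if callbacks is None:
        # no callbacks passed: run with an empty callback set
        return OrderedDict()
    if isinstance(callbacks, list):
        # index list entries with position-based keys
        return OrderedDict((f"callback_{i}", cb) for i, cb in enumerate(callbacks))
    # already a mapping: copy into an OrderedDict
    return OrderedDict(callbacks)
```

Defaulting to `None` (rather than a mutable default like `[]` or requiring the argument) keeps the call `runner.evaluate_loader(loader)` valid for the common no-callbacks case.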
@@ -746,7 +746,7 @@ class SupervisedRunner(ISupervisedRunner, Runner):
from torch import nn, optim
from torch.utils.data import DataLoader
from catalyst import dl, utils
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
2 changes: 1 addition & 1 deletion catalyst/runners/supervised.py

@@ -49,7 +49,7 @@ class ISupervisedRunner(IRunner):
from torch import nn, optim
from torch.utils.data import DataLoader
from catalyst import dl, utils
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
2 changes: 1 addition & 1 deletion docs/faq/ddp.rst

@@ -13,7 +13,7 @@ Suppose you have the following pipeline with MNIST Classification:
from torch import nn, optim
from torch.utils.data import DataLoader
from catalyst import dl
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
4 changes: 2 additions & 2 deletions docs/faq/finetuning.rst

@@ -19,7 +19,7 @@ it's quite easy to create such complex pipeline with a few line of code:
from torch.utils.data import DataLoader
from catalyst import dl, utils
from catalyst.contrib.datasets import MNIST
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor


class CustomRunner(dl.IRunner):
@@ -144,7 +144,7 @@ Due to multiprocessing setup during distrubuted training, the multistage runs lo
from torch.utils.data import DataLoader, DistributedSampler
from catalyst import dl, utils
from catalyst.contrib.datasets import MNIST
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor


class CustomRunner(dl.IRunner):
2 changes: 1 addition & 1 deletion docs/faq/inference.rst

@@ -13,7 +13,7 @@ Suppose you have the following classification pipeline:
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics, utils
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
2 changes: 1 addition & 1 deletion docs/faq/logging.rst

@@ -33,7 +33,7 @@ You could log any new metric in a straightforward way:
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
8 changes: 4 additions & 4 deletions docs/faq/multi_components.rst

@@ -14,7 +14,7 @@ Suppose you have the following classification pipeline (in pure PyTorch):
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics, utils
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
@@ -96,7 +96,7 @@ Multi-model example:
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics, utils
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST

# <--- multi-model setup --->
@@ -191,7 +191,7 @@ Multi-optimizer example:
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics, utils
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST

# <--- multi-model/optimizer setup --->
@@ -290,7 +290,7 @@ Multi-criterion example:
from torch.nn import functional as F
from torch.utils.data import DataLoader
from catalyst import dl, metrics, utils
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
2 changes: 1 addition & 1 deletion docs/faq/optuna.rst

@@ -14,7 +14,7 @@ You can easily use Optuna for hyperparameters optimization:
from torch import nn
from torch.utils.data import DataLoader
from catalyst import dl
-from catalyst.data.transforms import ToTensor
+from catalyst.data import ToTensor
from catalyst.contrib.datasets import MNIST

