
Can't execute the generate function from AdversarialPatchPytorch #2344

Open
DJE98 opened this issue Dec 5, 2023 · 1 comment

Comments


DJE98 commented Dec 5, 2023

Describe the bug
I can't execute the generate function from AdversarialPatchPytorch without errors.

To Reproduce
The code used:

    def train(self, data_loader: DataLoader):
        images, labels = next(iter(data_loader))

        # images and labels are torch tensors; convert them to NumPy arrays
        self.patch, self.mask = self.adversarial_patch.generate(
            x=images.cpu().numpy(), y=labels.cpu().numpy()
        )

The Stack Trace:

adversarial_patch_trainer.py 50 train
self.patch, self.mask = self.adversarial_patch.generate(x=images, y=labels)

adversarial_patch_pytorch.py 615 generate
_ = self._train_step(images=images, target=target, mask=None)

adversarial_patch_pytorch.py 190 _train_step
loss = self._loss(images, target, mask)

adversarial_patch_pytorch.py 234 _loss
predictions, target = self._predictions(images, mask, target)

adversarial_patch_pytorch.py 218 _predictions
patched_input = self._random_overlay(images, self._patch, mask=mask)

adversarial_patch_pytorch.py 306 _random_overlay
image_mask = torchvision.transforms.functional.resize(

functional.py 492 resize
return F_t.resize(img, size=output_size, interpolation=interpolation.value, antialias=antialias)

_functional_tensor.py 467 resize
img = interpolate(img, size=size, mode=interpolation, align_corners=align_corners, antialias=antialias)

functional.py 3924 interpolate
raise TypeError(

TypeError:
expected size to be one of int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int], but got size with types [<class 'numpy.int64'>, <class 'numpy.int64'>]

Relevant code in the library:

    def _random_overlay(
        self,
        images: "torch.Tensor",
        patch: "torch.Tensor",
        scale: Optional[float] = None,
        mask: Optional["torch.Tensor"] = None,
    ) -> "torch.Tensor":
        import torch
        import torchvision

        # Ensure channels-first
        if not self.estimator.channels_first:
            images = torch.permute(images, (0, 3, 1, 2))

        nb_samples = images.shape[0]

        image_mask = self._get_circular_patch_mask(nb_samples=nb_samples)
        image_mask = image_mask.float()

        self.image_shape = images.shape[1:]

        smallest_image_edge = np.minimum(self.image_shape[self.i_h], self.image_shape[self.i_w])

        image_mask = torchvision.transforms.functional.resize(
            img=image_mask,
            size=(smallest_image_edge, smallest_image_edge),
            interpolation=2,
        )
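The TypeError arises because np.minimum returns a NumPy scalar (numpy.int64) even when both inputs are plain Python ints, and newer torchvision versions type-check the elements of size strictly. A minimal sketch of a possible workaround (casting to a built-in int before building the size tuple; this is an assumption on my part, not an official ART fix):

```python
import numpy as np

# np.minimum returns a NumPy scalar (numpy.int64), not a built-in int,
# which newer torchvision resize/interpolate calls reject in `size`.
smallest_image_edge = np.minimum(224, 112)
print(type(smallest_image_edge))  # <class 'numpy.int64'>

# Possible workaround: cast to a plain Python int before passing the
# tuple as `size` to torchvision.transforms.functional.resize.
size = (int(smallest_image_edge), int(smallest_image_edge))
print(size)  # (112, 112)
```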

Expected behavior
Normal execution

System information:

  • Ubuntu 23.10
  • Python 3.11
  • ART 1.16.0
  • PyTorch Library

YXU300 commented Apr 8, 2024

Moving back to torch==2.0.1 solved the issue.
