Apply existing patches without initialising attack object #2349

Open · mxrothwell opened this issue Dec 11, 2023 · 1 comment
Labels: enhancement (New feature or request)

@mxrothwell
Hi,

I am looking for some advice on how to neatly integrate applying patches as an image transformation in my ML workflow. I have created a series of adversarial patches using the following code:

from art.attacks.evasion import AdversarialPatchTensorFlowV2
from art.estimators.classification import TensorFlowV2Classifier

# create classifier using art package
classifier = TensorFlowV2Classifier(
    model=model,
    loss_object=loss_object,
    train_step=train_step,
    nb_classes=N_CLASSES,
    input_shape=IMAGE_SHAPE,
    clip_values=(0, 1),
)

# create attack object
attack = AdversarialPatchTensorFlowV2(
    classifier=classifier, max_iter=100, scale_min=0.2, scale_max=0.8
)

# generate patch
patch, mask = attack.generate(x=images, y=labels)

I then want to save the created patch for later use in other scripts. In the above script, I would use the apply_patch method like so:

attack.apply_patch(images, scale=0.3)

This method is only available as part of the AdversarialPatchTensorFlowV2 class, which would mean initialising the classifier and attack objects every time I want to apply that patch to an image. This seems sub-optimal as apply_patch is a pretty independent method within that class. Am I missing something obvious here?
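
For context, reusing a saved patch in a second script currently looks roughly like this (the file name is a placeholder, and model, loss_object, train_step, etc. would all have to be redefined in that script as well):

import numpy as np

from art.attacks.evasion import AdversarialPatchTensorFlowV2
from art.estimators.classification import TensorFlowV2Classifier

# script 1: persist the generated patch for later use
np.save("adversarial_patch.npy", patch)

# script 2: load the patch, then rebuild the classifier and attack objects
# purely so that apply_patch can be called
patch = np.load("adversarial_patch.npy")

classifier = TensorFlowV2Classifier(
    model=model,  # model, loss_object, train_step, etc. redefined in this script
    loss_object=loss_object,
    train_step=train_step,
    nb_classes=N_CLASSES,
    input_shape=IMAGE_SHAPE,
    clip_values=(0, 1),
)
attack = AdversarialPatchTensorFlowV2(
    classifier=classifier, max_iter=100, scale_min=0.2, scale_max=0.8
)

# the only call I actually need
patched_images = attack.apply_patch(images, scale=0.3, patch_external=patch)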

I am looking to neatly integrate apply_patch into a Keras-style augmentation workflow. My current workaround is to create a class that inherits the methods from AdversarialPatchTensorFlowV2 and sets only the class attributes it needs:

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Layer


class ApplyAdversarialPatch(AdversarialPatchTensorFlowV2, Layer):
    """
    Implement AdversarialPatchTensorFlowV2.apply_patch as a Layer transformation.

    This avoids having to create classifier (TensorFlowV2Classifier) and attack
    (AdversarialPatchTensorFlowV2) objects for each patch application.

    Note
    ====
    apply_patch can only be run in eager mode, which precludes out-of-the-box GPU
    parallelisation using TensorFlow. This will likely result in worse performance
    when using the mapping method of execution. See for more details:
    https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map

    Examples
    --------
    ```
    # for np.ndarrays or tf.Tensors of shape (N, H, W, C)
    transform = ApplyAdversarialPatch(image_shape=(H, W, C), patch=patch)
    transformed_image = transform(images)

    # add augmentation to sequential model
    preprocessing_step = keras.Sequential(
        [
            keras.layers.RandomRotation(factor=0.02),
            ApplyAdversarialPatch(image_shape, patch),
        ]
    )
    transformed_image = preprocessing_step(images)

    # map via tf.data.Dataset (this method requires tf.py_function)
    dataset = (
        tf.data.Dataset.from_tensor_slices(images)
        .batch(5)
        .map(
            lambda x: tf.py_function(func=transform, inp=[x], Tout=tf.float32),
            num_parallel_calls=tf.data.AUTOTUNE,  # allows multi-threading (for speed)
            deterministic=False,  # allows returned order to be different (for speed)
        )
    )
    ```
    """

    def __init__(self, image_shape: tuple[int], patch: np.ndarray | tf.Tensor):
        """Init specific attributes and methods from inherited classes."""
        # init Layer class so that transformation can be added to keras.Sequential
        Layer.__init__(self, dynamic=True)

        # set attributes needed for each call execution
        self.patch = patch
        self.image_shape = image_shape
        self.patch_shape = image_shape

        # set maximum rotation to AdversarialPatchTensorFlowV2 default
        self.rotation_max = 22.5

        # set index for height and width for patch
        self.i_h_patch = 0
        self.i_w_patch = 1

        # set index for height and width for image
        self.nb_dims = len(image_shape)
        if self.nb_dims == 3:
            self.i_h = 0
            self.i_w = 1
        elif self.nb_dims == 4:
            self.i_h = 1
            self.i_w = 2

    def call(self, image: np.ndarray | tf.Tensor) -> tf.Tensor:
        """Layer class `call` method applies transformation to inputs."""
        return self.apply_patch(image, scale=0.3, patch_external=self.patch)

This works fine (see the docstring for example usages), but my class will be very susceptible to future updates to the ART package. Alternatively, I could strip apply_patch out into my own standalone function (a rough sketch of that idea is below). What is the most robust solution to this problem?
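
To make that alternative concrete, here is a minimal sketch of what such a stripped-out function might look like (a simplification, not ART's actual implementation: it pastes a resized patch at a single random location and skips ART's random rotation logic; the function name is just illustrative):

import numpy as np
import tensorflow as tf


def apply_patch_simple(images, patch, scale=0.3):
    """Paste a scaled patch at a random location on a batch of images (N, H, W, C).

    Simplified stand-in for AdversarialPatchTensorFlowV2.apply_patch:
    single fixed scale, random placement, no random rotation.
    """
    images = tf.convert_to_tensor(images, dtype=tf.float32)
    _, height, width, channels = images.shape

    # resize the patch to the requested fraction of the image size
    patch_h, patch_w = int(height * scale), int(width * scale)
    patch_resized = tf.image.resize(tf.convert_to_tensor(patch, dtype=tf.float32), (patch_h, patch_w))

    patched = []
    for image in images:
        # pick a random top-left corner that keeps the patch inside the image
        top = np.random.randint(0, height - patch_h + 1)
        left = np.random.randint(0, width - patch_w + 1)
        paddings = [[top, height - patch_h - top], [left, width - patch_w - left], [0, 0]]

        # mask is 1 where the patch goes and 0 elsewhere
        mask = tf.pad(tf.ones((patch_h, patch_w, channels)), paddings)
        patch_padded = tf.pad(patch_resized, paddings)
        patched.append(image * (1.0 - mask) + patch_padded * mask)

    return tf.stack(patched)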

Thanks in advance for any help on this - much appreciated!

@beat-buesser
Collaborator

Hi @mxrothwell Thank you very much for your interest in ART! I think this is a great question and we should keep this issue open as a feature request. At the moment ART provides a function, art.utils.insert_transformed_patch, to insert patches into a plane defined by 4 coordinates.
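
A rough sketch of how that utility can be used (the corner coordinates are placeholder values; please check the docstring of art.utils.insert_transformed_patch for the exact expected format):

import numpy as np
from art.utils import insert_transformed_patch

# corner coordinates of the target plane in the image, in pixel units
# (placeholder values, clockwise starting from the upper-left corner)
image_coords = np.array([[50, 50], [150, 50], [150, 150], [50, 150]])

# x: image of shape (H, W, C); patch: a previously generated patch
patched_image = insert_transformed_patch(x, patch, image_coords)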

@beat-buesser added the enhancement (New feature or request) label on Jan 9, 2024