Fix multiple Sphinx warnings & docstrings #985

Closed
wants to merge 95 commits into from

Changes from 90 commits
Commits (95)
09af170
Fix "WARNING: Title underline too short." message in rst files
ProGamerGov Jun 30, 2022
aa953fd
Fix Sphinx bullet list spacing warning
ProGamerGov Jun 30, 2022
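
The bullet-list warning this commit targets comes from reStructuredText lists that are not separated from the surrounding text by blank lines, or whose continuation lines are mis-indented. As a rough illustration only (a made-up function, not code from this PR), a docstring list that Sphinx accepts without warnings looks like this:

```python
def example_fn(inputs):
    r"""Hypothetical docstring showing rst-friendly bullet-list spacing.

    Args:

        inputs (Tensor or tuple of Tensor): Input for which attributions
            are computed. Two forms are accepted:

            - a single tensor, treated as one batch of examples
            - a tuple of tensors, one entry per model input

            Note the blank lines before and after the list and the matching
            indentation of the continuation lines; without them Sphinx emits
            bullet-list / unexpected-indentation warnings.
    """
    return inputs
```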
294a01a
Fix automodule path
ProGamerGov Jul 5, 2022
9c24eb9
Fix more Sphinx warnings
ProGamerGov Jul 5, 2022
a5c5eb7
Docstring fix: string -> str
ProGamerGov Jul 17, 2022
3c13769
function -> callable
ProGamerGov Jul 17, 2022
61c2f88
or a tuple of -> or tuple of
ProGamerGov Jul 17, 2022
0ace2f2
Fix doc style
ProGamerGov Jul 18, 2022
152aae8
Fix device_ids docstring types
ProGamerGov Jul 18, 2022
ce913dc
Any -> any
ProGamerGov Jul 18, 2022
f4ae3d4
tuple of tuples -> tuple of tuple
ProGamerGov Jul 19, 2022
4732c6a
Fix docstring Sphinx warnings
ProGamerGov Jul 19, 2022
2ffbbd4
dictionary -> dict
ProGamerGov Jul 19, 2022
2da07cd
Merge branch 'master' into master-rst-fixes
ProGamerGov Jul 20, 2022
d4bb345
list(torch.nn.Module) -> list of torch.nn.Module
ProGamerGov Jul 20, 2022
f2ad85b
Minor docstring improvements
ProGamerGov Jul 20, 2022
eb756ab
Resolve some more Sphinx warnings
ProGamerGov Jul 21, 2022
46e4dd1
Improve doc formatting
ProGamerGov Jul 21, 2022
fe15929
Fix more Sphinx errors
ProGamerGov Jul 21, 2022
0546b6c
Tensor -> tensor
ProGamerGov Jul 21, 2022
4c9d6e7
Fix docstring type formatting
ProGamerGov Jul 21, 2022
a4f16b3
remove 's' from int & float types
ProGamerGov Jul 21, 2022
81b157e
slices -> slice
ProGamerGov Jul 21, 2022
abb3ee7
numpy.array -> numpy.ndarray
ProGamerGov Jul 21, 2022
ba3a0b4
Fix more doc types
ProGamerGov Jul 21, 2022
d3f0431
Don't link directly to arXiv PDF files
ProGamerGov Jul 21, 2022
7530b25
http -> https
ProGamerGov Jul 21, 2022
dcf363a
Fix minor issues
ProGamerGov Jul 21, 2022
31e453b
http -> https
ProGamerGov Jul 21, 2022
0a091d8
Capitalize Any & Callable in docstrings
ProGamerGov Jul 21, 2022
12e0250
Fix more Sphinx warnings
ProGamerGov Jul 21, 2022
7838aaf
Fix: E501 line too long
ProGamerGov Jul 21, 2022
31f5d5a
Replace accidental tabs with spaces
ProGamerGov Jul 21, 2022
cedcffc
Fix spacing issue
ProGamerGov Jul 21, 2022
ccdd660
Fix formatting
ProGamerGov Jul 21, 2022
9675b91
Fix warning
ProGamerGov Jul 21, 2022
6cb5369
Fix warning
ProGamerGov Jul 21, 2022
221a7d9
Fix warning
ProGamerGov Jul 21, 2022
2c0331f
Improve docstring spacing & types
ProGamerGov Jul 22, 2022
f14c460
Fix ReadMe
ProGamerGov Jul 22, 2022
24bfcdf
Fix Robustness docs
ProGamerGov Jul 22, 2022
7f0457b
Add type improvements to `conf.py`
ProGamerGov Jul 23, 2022
7b9156e
Set autodoc_preserve_defaults to True
ProGamerGov Jul 25, 2022
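
`autodoc_preserve_defaults` is a Sphinx autodoc option (available since Sphinx 4.0) that keeps default argument values in rendered signatures exactly as they are written in the source, instead of substituting the evaluated object's repr. A minimal `conf.py` sketch is below; note that a later commit on this branch (93f24ee) removes the setting again.

```python
# Minimal conf.py sketch, not Captum's full configuration.
extensions = [
    "sphinx.ext.autodoc",
]

# Render defaults as written in the source (e.g. ``baselines=None``) rather
# than as the evaluated object's repr.
autodoc_preserve_defaults = True
```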
512e2d4
Escape '.' in regex str replacement
ProGamerGov Jul 25, 2022
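
In a regular expression an unescaped `.` matches any character, so a docstring rewrite aimed at a literal dotted name such as `torch.Tensor` can also hit lookalike strings. A small sketch of the pitfall (the strings are made up for illustration):

```python
import re

pattern = "torch.Tensor"  # '.' is a wildcard here, not a literal dot

print(re.sub(pattern, "Tensor", "x (torch.Tensor): ..."))  # intended replacement
print(re.sub(pattern, "Tensor", "x (torchXTensor): ..."))  # unintended replacement too

# Escaping the dot (or using re.escape) limits the match to the literal name.
escaped = re.escape("torch.Tensor")  # -> 'torch\\.Tensor'
print(re.sub(escaped, "Tensor", "x (torchXTensor): ..."))  # left unchanged
```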
46acbb1
Fix docstring type
ProGamerGov Jul 25, 2022
646302f
Add Sphinx refs to docstrings
ProGamerGov Jul 25, 2022
55ec6b5
Fix NoiseTunnel docstring research paper list
ProGamerGov Jul 26, 2022
f6591ca
Merge branch 'master' into master-rst-fixes
ProGamerGov Jul 26, 2022
262e9ea
Spelling fixes
ProGamerGov Jul 26, 2022
b275762
Add missing function to Sphinx API docs
ProGamerGov Jul 27, 2022
e0b3281
Update conf.py
ProGamerGov Jul 28, 2022
ca093a3
Merge branch 'master' into master-rst-fixes
ProGamerGov Jul 28, 2022
93f24ee
Remove `autodoc_preserve_defaults` from `conf.py`
ProGamerGov Jul 30, 2022
9fe2827
Fix `conf.py` issues
ProGamerGov Jul 30, 2022
c0599c9
Remove the `autodoc_process_docstring` function
ProGamerGov Jul 31, 2022
1b4d23a
Improve docs
ProGamerGov Jul 31, 2022
abd69cc
Merge branch 'master' into master-rst-fixes
ProGamerGov Aug 1, 2022
1a02e03
Fix mistakes
ProGamerGov Aug 1, 2022
e5a2b5d
Improve docs
ProGamerGov Aug 1, 2022
379d4e4
Merge branch 'master' into master-rst-fixes
ProGamerGov Aug 1, 2022
f7ac156
Fix docstring types
ProGamerGov Aug 1, 2022
7fca541
Improve docstrings
ProGamerGov Aug 2, 2022
049bca2
Rename `algorithms.md` to `attribution_algorithms.md` as per feedback
ProGamerGov Aug 2, 2022
0be4dff
Improve docstrings & type hints
ProGamerGov Aug 3, 2022
fdfa858
Don't link directly to arxiv PDFs
ProGamerGov Aug 4, 2022
6f84b64
Fix class variable position for Sphinx
ProGamerGov Aug 4, 2022
cc1bcb8
Readd `autodoc_process_docstring` for `Callable` & `Any`
ProGamerGov Aug 7, 2022
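
`autodoc-process-docstring` is a Sphinx event that lets `conf.py` rewrite docstring lines before they are rendered, which is how lowercase type names like `callable` and `any` can be normalized in one place instead of editing every docstring by hand. A hedged sketch of such a handler (the replacements Captum actually ships may differ):

```python
# Sketch of an autodoc-process-docstring handler for conf.py; illustrative only.
_REPLACEMENTS = [
    ("(callable", "(Callable"),
    ("(any, optional)", "(Any, optional)"),
]


def autodoc_process_docstring(app, what, name, obj, options, lines):
    # ``lines`` must be edited in place for Sphinx to pick up the changes.
    for i, line in enumerate(lines):
        for old, new in _REPLACEMENTS:
            line = line.replace(old, new)
        lines[i] = line


def setup(app):
    app.connect("autodoc-process-docstring", autodoc_process_docstring)
```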
0f209ef
Handle unused attribution base methods in Sphinx docs
ProGamerGov Aug 8, 2022
87d5300
Improve Sphinx warnings
ProGamerGov Aug 8, 2022
7018ff8
Fix docstring
ProGamerGov Aug 11, 2022
430ae74
Fix lint error
ProGamerGov Aug 11, 2022
e639bea
Fix spelling
ProGamerGov Aug 11, 2022
da7423c
Merge branch 'master' into master-rst-fixes
ProGamerGov Aug 12, 2022
b12c6c1
Fix grammar & spelling
ProGamerGov Aug 12, 2022
b300c69
Fix docstring & add type hints
ProGamerGov Aug 14, 2022
ef48de6
Fix docstrings
ProGamerGov Aug 14, 2022
6410f16
Fix more docstrings
ProGamerGov Aug 14, 2022
16ae5f5
Merge branch 'master-rst-fixes' of https://github.com/ProGamerGov/cap…
ProGamerGov Aug 14, 2022
de24100
Iterable types & docstring fixes
ProGamerGov Aug 14, 2022
cd709b0
Fix docstring type
ProGamerGov Aug 14, 2022
5e29522
Remove unnecessary function
ProGamerGov Aug 15, 2022
82373fd
Improve typing replacement string precision
ProGamerGov Aug 16, 2022
2e47c54
Merge branch 'master' into master-rst-fixes
ProGamerGov Aug 19, 2022
ed9cb19
Merge branch 'master' into master-rst-fixes
ProGamerGov Aug 24, 2022
7a5f194
Docstring type formatting changes
ProGamerGov Sep 1, 2022
f15a808
tensor & tensors -> Tensor
ProGamerGov Sep 1, 2022
8752c85
torch.Tensor -> Tensor
ProGamerGov Sep 1, 2022
cf133d9
Merge branch 'master' into master-rst-fixes
ProGamerGov Sep 1, 2022
a99ff47
Add temp code for testing
ProGamerGov Sep 1, 2022
a2d1678
Revert change
ProGamerGov Sep 1, 2022
14d1af5
Remove approx methods from index.rst
ProGamerGov Sep 2, 2022
ee68259
Add Tensor to autodoc_process_docstring
ProGamerGov Sep 2, 2022
6abb35c
Return Types: tensor & tensors -> Tensor
ProGamerGov Sep 2, 2022
ef3592c
Merge branch 'master' into master-rst-fixes
ProGamerGov Sep 16, 2022
da0fab9
Fix newly introduced Mypy error
ProGamerGov Sep 16, 2022
21 changes: 10 additions & 11 deletions README.md
@@ -159,8 +159,7 @@ model.eval()
Next, we need to define simple input and baseline tensors.
Baselines belong to the input space and often carry no predictive signal.
Zero tensor can serve as a baseline for many tasks.
Some interpretability algorithms such as `Integrated
Gradients`, `Deeplift` and `GradientShap` are designed to attribute the change
Some interpretability algorithms such as `IntegratedGradients`, `Deeplift` and `GradientShap` are designed to attribute the change
between the input and baseline to a predictive class or a value that the neural
network outputs.
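
As a rough sketch of what this paragraph describes (a toy model, not the README's exact snippet), attributing with a zero baseline looks like:

```python
import torch
from captum.attr import IntegratedGradients

# Toy model and shapes chosen only to illustrate the text above.
model = torch.nn.Sequential(torch.nn.Linear(3, 2), torch.nn.Softmax(dim=1))
model.eval()

inputs = torch.rand(1, 3)
baseline = torch.zeros(1, 3)  # zero tensor from the same input space

ig = IntegratedGradients(model)
attributions, delta = ig.attribute(
    inputs, baselines=baseline, target=0, return_convergence_delta=True
)
print(attributions, delta)
```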

@@ -472,23 +471,23 @@ You can watch the recorded talk [here](https://www.youtube.com/watch?v=ayhBHZYje
* `SmoothGrad`: [SmoothGrad: removing noise by adding noise, Daniel Smilkov et al. 2017](https://arxiv.org/abs/1706.03825)
* `NoiseTunnel`: [Sanity Checks for Saliency Maps, Julius Adebayo et al. 2018](https://arxiv.org/abs/1810.03292)
* `NeuronConductance`: [How Important is a neuron?, Kedar Dhamdhere et al. 2018](https://arxiv.org/abs/1805.12233)
* `LayerConductance`: [Computationally Efficient Measures of Internal Neuron Importance, Avanti Shrikumar et al. 2018](https://arxiv.org/pdf/1807.09946.pdf)
* `DeepLift`, `NeuronDeepLift`, `LayerDeepLift`: [Learning Important Features Through Propagating Activation Differences, Avanti Shrikumar et al. 2017](https://arxiv.org/pdf/1704.02685.pdf) and [Towards better understanding of gradient-based attribution methods for deep neural networks, Marco Ancona et al. 2018](https://openreview.net/pdf?id=Sy21R9JAW)
* `NeuronIntegratedGradients`: [Computationally Efficient Measures of Internal Neuron Importance, Avanti Shrikumar et al. 2018](https://arxiv.org/pdf/1807.09946.pdf)
* `LayerConductance`: [Computationally Efficient Measures of Internal Neuron Importance, Avanti Shrikumar et al. 2018](https://arxiv.org/abs/1807.09946)
* `DeepLift`, `NeuronDeepLift`, `LayerDeepLift`: [Learning Important Features Through Propagating Activation Differences, Avanti Shrikumar et al. 2017](https://arxiv.org/abs/1704.02685) and [Towards better understanding of gradient-based attribution methods for deep neural networks, Marco Ancona et al. 2018](https://openreview.net/pdf?id=Sy21R9JAW)
* `NeuronIntegratedGradients`: [Computationally Efficient Measures of Internal Neuron Importance, Avanti Shrikumar et al. 2018](https://arxiv.org/abs/1807.09946)
* `GradientShap`, `NeuronGradientShap`, `LayerGradientShap`, `DeepLiftShap`, `NeuronDeepLiftShap`, `LayerDeepLiftShap`: [A Unified Approach to Interpreting Model Predictions, Scott M. Lundberg et al. 2017](http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions)
* `InternalInfluence`: [Influence-Directed Explanations for Deep Convolutional Networks, Klas Leino et al. 2018](https://arxiv.org/pdf/1802.03788.pdf)
* `InternalInfluence`: [Influence-Directed Explanations for Deep Convolutional Networks, Klas Leino et al. 2018](https://arxiv.org/abs/1802.03788)
* `Saliency`, `NeuronGradient`: [Deep Inside Convolutional Networks: Visualising
Image Classification Models and Saliency Maps, K. Simonyan, et. al. 2014](https://arxiv.org/pdf/1312.6034.pdf)
* `GradCAM`, `Guided GradCAM`: [Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization, Ramprasaath R. Selvaraju et al. 2017](https://arxiv.org/abs/1610.02391.pdf)
* `Deconvolution`, `Neuron Deconvolution`: [Visualizing and Understanding Convolutional Networks, Matthew D Zeiler et al. 2014](https://arxiv.org/pdf/1311.2901.pdf)
* `Guided Backpropagation`, `Neuron Guided Backpropagation`: [Striving for Simplicity: The All Convolutional Net, Jost Tobias Springenberg et al. 2015](https://arxiv.org/pdf/1412.6806.pdf)
Image Classification Models and Saliency Maps, K. Simonyan, et. al. 2014](https://arxiv.org/abs/1312.6034)
* `GradCAM`, `Guided GradCAM`: [Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization, Ramprasaath R. Selvaraju et al. 2017](https://arxiv.org/abs/1610.02391)
* `Deconvolution`, `Neuron Deconvolution`: [Visualizing and Understanding Convolutional Networks, Matthew D Zeiler et al. 2014](https://arxiv.org/abs/1311.2901)
* `Guided Backpropagation`, `Neuron Guided Backpropagation`: [Striving for Simplicity: The All Convolutional Net, Jost Tobias Springenberg et al. 2015](https://arxiv.org/abs/1412.6806)
* `Feature Permutation`: [Permutation Feature Importance](https://christophm.github.io/interpretable-ml-book/feature-importance.html)
* `Occlusion`: [Visualizing and Understanding Convolutional Networks](https://arxiv.org/abs/1311.2901)
* `Shapley Value`: [A value for n-person games. Contributions to the Theory of Games 2.28 (1953): 307-317](https://apps.dtic.mil/dtic/tr/fulltext/u2/604084.pdf)
* `Shapley Value Sampling`: [Polynomial calculation of the Shapley value based on sampling](https://www.sciencedirect.com/science/article/pii/S0305054808000804)
* `Infidelity and Sensitivity`: [On the (In)fidelity and Sensitivity for Explanations](https://arxiv.org/abs/1901.09392)

More details about the above mentioned [algorithms](https://captum.ai/docs/algorithms) and their pros and cons can be found on our [web-site](https://captum.ai/docs/algorithms_comparison_matrix).
More details about the above mentioned [attribution algorithms](https://captum.ai/docs/attribution_algorithms) and their pros and cons can be found on our [web-site](https://captum.ai/docs/algorithms_comparison_matrix).

## License
Captum is BSD licensed, as found in the [LICENSE](LICENSE) file.
22 changes: 12 additions & 10 deletions captum/_utils/av.py
@@ -80,7 +80,7 @@ def __getitem__(self, idx: int) -> Union[Tensor, Tuple[Tensor, ...]]:
av = torch.load(fl)
return av

def __len__(self):
def __len__(self) -> int:
return len(self.files)

AV_DIR_NAME: str = "av"
@@ -211,9 +211,9 @@ def save(
AV.generate_dataset_activations from batch index.
It assumes identifier is same for all layers if a list of
`layers` is provided.
layers (str or List of str): The layer(s) for which the activation vectors
layers (str or list[str]): The layer(s) for which the activation vectors
are computed.
act_tensors (Tensor or List of Tensor): A batch of activation vectors.
act_tensors (tensor or list of tensor): A batch of activation vectors.
This must match the dimension of `layers`.
num_id (str): string representing the batch number for which the activation
vectors are computed
@@ -299,13 +299,15 @@ def _manage_loading_layers(
for the `layer` are stored.
model_id (str): The name/version of the model for which layer activations
are being computed and stored.
layers (str or List of str): The layer(s) for which the activation vectors
layers (str or list[str]): The layer(s) for which the activation vectors
are computed.
load_from_disk (bool, optional): Whether or not to load from disk.
Default: True
identifier (str or None): An optional identifier for the layer
activations. Can be used to distinguish between activations for
different training batches.
num_id (str): An optional string representing the batch number for which the
activation vectors are computed
num_id (str, optional): An optional string representing the batch number
for which the activation vectors are computed.

Returns:
List of layer names for which activations should be generated
@@ -357,9 +359,9 @@ def _compute_and_save_activations(
define all of its layers as attributes of the model.
model_id (str): The name/version of the model for which layer activations
are being computed and stored.
layers (str or List of str): The layer(s) for which the activation vectors
layers (str or list[str]): The layer(s) for which the activation vectors
are computed.
inputs (tensor or tuple of tensors): Batch of examples for
inputs (Tensor or tuple of Tensor): Batch of examples for
which influential instances are computed. They are passed to the
input `model`. The first dimension in `inputs` tensor or tuple of
tensors corresponds to the batch size.
@@ -368,7 +370,7 @@
different training batches.
num_id (str): An required string representing the batch number for which the
activation vectors are computed
additional_forward_args (optional): Additional arguments that will be
additional_forward_args (Any, optional): Additional arguments that will be
passed to `model` after inputs.
Default: None
load_from_disk (bool): Forces function to regenerate activations if False.
@@ -433,7 +435,7 @@ def generate_dataset_activations(
define all of its layers as attributes of the model.
model_id (str): The name/version of the model for which layer activations
are being computed and stored.
layers (str or List of str): The layer(s) for which the activation vectors
layers (str or list[str]): The layer(s) for which the activation vectors
are computed.
dataloader (torch.utils.data.DataLoader): DataLoader that yields Dataset
for which influential instances are computed. They are passed to
4 changes: 2 additions & 2 deletions captum/_utils/gradient.py
@@ -730,7 +730,7 @@ def _compute_jacobian_wrt_params(
but must behave as a library loss function would if `reduction='none'`.

Returns:
grads (Tuple of Tensor): Returns the Jacobian for the minibatch as a
grads (tuple of Tensor): Returns the Jacobian for the minibatch as a
tuple of gradients corresponding to the tuple of trainable parameters
returned by `model.parameters()`. Each object grads[i] references to the
gradients for the parameters in the i-th trainable layer of the model.
@@ -804,7 +804,7 @@ def _compute_jacobian_wrt_params_with_sample_wise_trick(
Defaults to 'sum'.

Returns:
grads (Tuple of Tensor): Returns the Jacobian for the minibatch as a
grads (tuple of Tensor): Returns the Jacobian for the minibatch as a
tuple of gradients corresponding to the tuple of trainable parameters
returned by `model.parameters()`. Each object grads[i] references to the
gradients for the parameters in the i-th trainable layer of the model.
8 changes: 4 additions & 4 deletions captum/_utils/models/linear_model/model.py
@@ -20,7 +20,7 @@ def __init__(self, train_fn: Callable, **kwargs) -> None:
Please note that this is an experimental feature.

Args:
train_fn (callable)
train_fn (Callable)
The function to train with. See
`captum._utils.models.linear_model.train.sgd_train_linear_model`
and
@@ -65,14 +65,14 @@ def _construct_model_params(
normalization parameters used.
bias (bool):
Whether to add a bias term. Not needed if normalized input.
weight_values (tensor, optional):
weight_values (Tensor, optional):
The values to initialize the linear model with. This must be a
1D or 2D tensor, and of the form `(num_outputs, num_features)` or
`(num_features,)`. Additionally, if this is provided you need not
to provide `in_features` or `out_features`.
bias_value (tensor, optional):
bias_value (Tensor, optional):
The bias value to initialize the model with.
classes (tensor, optional):
classes (Tensor, optional):
The list of prediction classes supported by the model in case it
performs classificaton. In case of regression it is set to None.
Default: None
30 changes: 16 additions & 14 deletions captum/attr/_core/deep_lift.py
@@ -112,7 +112,7 @@
r"""
Args:

model (nn.Module): The reference to PyTorch model instance. Model cannot
model (nn.Module): The reference to PyTorch model instance. Model cannot
contain any in-place nonlinear submodules; these are not
supported by the register_full_backward_hook PyTorch API
starting from PyTorch v1.9.
@@ -185,7 +185,7 @@ def attribute( # type: ignore
r"""
Args:

inputs (tensor or tuple of tensors): Input for which
inputs (Tensor or tuple of Tensor): Input for which
attributions are computed. If forward_func takes a single
tensor as input, a single input tensor should be provided.
If forward_func takes multiple tensors as input, a tuple
@@ -194,7 +194,7 @@
to the number of examples (aka batch size), and if
multiple input tensors are provided, the examples must
be aligned appropriately.
baselines (scalar, tensor, tuple of scalars or tensors, optional):
baselines (scalar, Tensor, tuple of scalar, or Tensor, optional):
Baselines define reference samples that are compared with
the inputs. In order to assign attribution scores DeepLift
computes the differences between the inputs/outputs and
@@ -226,7 +226,7 @@ def attribute( # type: ignore
use zero scalar corresponding to each input tensor.

Default: None
target (int, tuple, tensor or list, optional): Output indices for
target (int, tuple, Tensor, or list, optional): Output indices for
which gradients are computed (for classification cases,
this is usually the target class).
If the network returns a scalar value per example,
@@ -251,7 +251,7 @@
target for the corresponding example.

Default: None
additional_forward_args (any, optional): If the forward function
additional_forward_args (Any, optional): If the forward function
requires additional arguments other than the inputs for
which attributions should not be computed, this argument
can be provided. It must be either a single additional
@@ -267,7 +267,7 @@
is set to True convergence delta will be returned in
a tuple following attributions.
Default: False
custom_attribution_func (callable, optional): A custom function for
custom_attribution_func (Callable, optional): A custom function for
computing final attribution scores. This function can take
at least one and at most three arguments with the
following signature:
@@ -303,7 +303,7 @@ def attribute( # type: ignore
based on DeepLift's rescale rule.
Delta is calculated per example, meaning that the number of
elements in returned delta tensor is equal to the number of
of examples in input.
examples in input.
Note that the logic described for deltas is guaranteed when the
default logic for attribution computations is used, meaning that the
`custom_attribution_func=None`, otherwise it is not guaranteed and
@@ -611,12 +611,14 @@ class DeepLiftShap(DeepLift):
each baseline and averages resulting attributions.
More details about the algorithm can be found here:

http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf
https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf

Note that the explanation model:

1. Assumes that input features are independent of one another
2. Is linear, meaning that the explanations are modeled through
the additive composition of feature effects.

Although, it assumes a linear model for each explanation, the overall
model across multiple explanations can be complex and non-linear.
"""
@@ -625,7 +627,7 @@ def __init__(self, model: Module, multiply_by_inputs: bool = True) -> None:
r"""
Args:

model (nn.Module): The reference to PyTorch model instance. Model cannot
model (nn.Module): The reference to PyTorch model instance. Model cannot
contain any in-place nonlinear submodules; these are not
supported by the register_full_backward_hook PyTorch API.
multiply_by_inputs (bool, optional): Indicates whether to factor
@@ -694,7 +696,7 @@ def attribute( # type: ignore
r"""
Args:

inputs (tensor or tuple of tensors): Input for which
inputs (Tensor or tuple of Tensor): Input for which
attributions are computed. If forward_func takes a single
tensor as input, a single input tensor should be provided.
If forward_func takes multiple tensors as input, a tuple
@@ -703,7 +705,7 @@ def attribute( # type: ignore
to the number of examples (aka batch size), and if
multiple input tensors are provided, the examples must
be aligned appropriately.
baselines (tensor, tuple of tensors, callable):
baselines (Tensor, tuple of Tensor, or Callable):
Baselines define reference samples that are compared with
the inputs. In order to assign attribution scores DeepLift
computes the differences between the inputs/outputs and
@@ -728,7 +730,7 @@

It is recommended that the number of samples in the baselines'
tensors is larger than one.
target (int, tuple, tensor or list, optional): Output indices for
target (int, tuple, Tensor, or list, optional): Output indices for
which gradients are computed (for classification cases,
this is usually the target class).
If the network returns a scalar value per example,
@@ -753,7 +755,7 @@
target for the corresponding example.

Default: None
additional_forward_args (any, optional): If the forward function
additional_forward_args (Any, optional): If the forward function
requires additional arguments other than the inputs for
which attributions should not be computed, this argument
can be provided. It must be either a single additional
@@ -769,7 +771,7 @@
is set to True convergence delta will be returned in
a tuple following attributions.
Default: False
custom_attribution_func (callable, optional): A custom function for
custom_attribution_func (Callable, optional): A custom function for
computing final attribution scores. This function can take
at least one and at most three arguments with the
following signature: