
Added the Binary Expected Calibration Error (ECE) Metric #3132

Open

wants to merge 10 commits into base: master
Conversation

Zekrom-7780

Fixes #1009

Description:

Added the binary Expected Calibration Error (ECE) metric.
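For context, binary ECE is typically computed by binning predicted confidences and comparing per-bin mean confidence to per-bin accuracy. A minimal sketch of the idea (the function name, equal-width binning, and default bin count here are illustrative assumptions, not this PR's actual implementation):

```python
import torch

def binary_ece(confidences: torch.Tensor, correct: torch.Tensor, n_bins: int = 10) -> float:
    """Illustrative binary ECE: weighted mean gap between per-bin
    confidence and per-bin accuracy over equal-width confidence bins."""
    edges = torch.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        prop = in_bin.float().mean().item()  # fraction of samples in this bin
        if prop > 0.0:
            acc = correct[in_bin].float().mean().item()   # per-bin accuracy
            conf = confidences[in_bin].mean().item()      # per-bin confidence
            ece += abs(conf - acc) * prop
    return ece
```

A perfectly calibrated bin (mean confidence equals accuracy) contributes zero; overconfident bins push the score up.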

Reviewers

cc. @vfdev-5

P.S.

  • Do I have to add new tests and documentation for the new metric? I wasn't sure whether they were required.
  • I will implement the multi-class ECE in a separate pull request, since that seems like the better approach.

@github-actions github-actions bot added the module: contrib Contrib module label Nov 11, 2023
Contributor

sweep-ai bot commented Nov 11, 2023

Apply Sweep Rules to your PR?

  • Apply: All new business logic should have corresponding unit tests.
  • Apply: Refactor large functions to be more modular.

Collaborator

@vfdev-5 vfdev-5 left a comment


Thanks for the PR @Zekrom-7780 !

I left a few comments, but I haven't fully checked the code and correctness yet.

Do I have to add new tests, and documentation for the new metric? I don't know whether I had to add them or not.

Yes, we need to add new tests, provide a docstring, and include the metric in the docs (I'll guide you through that once we get to that point; nothing complicated).

The main question about the tests will be what to use as the reference for our implementation. For example, in most cases we use sklearn to compute the expected value.

Will be doing the MultiClassECE in a new pull request, since it feels that it is the better thing to do.

Wouldn't it be possible to have a single class for both binary and multi-class ECE?
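One way such a single class could work (a hedged sketch only; the class name, list-based buffering, and softmax top-class reduction are all hypothetical, not ignite's API): binary input is just the multi-class path with two columns, so one code path covers both.

```python
import torch

class ExpectedCalibrationError:
    """Hypothetical unified ECE accumulator for binary and multi-class inputs."""

    def __init__(self, n_bins: int = 10):
        self.n_bins = n_bins
        self._confidences = []
        self._corrects = []

    def update(self, y_pred: torch.Tensor, y: torch.Tensor) -> None:
        # Binary inputs have 2 columns, multi-class C columns; both reduce
        # to (top-class confidence, correctness) pairs, so one path suffices.
        conf, pred = torch.softmax(y_pred.detach(), dim=1).max(dim=1)
        self._confidences.append(conf)
        self._corrects.append(pred.eq(y))

    def compute(self) -> float:
        conf = torch.cat(self._confidences)
        correct = torch.cat(self._corrects).float()
        edges = torch.linspace(0.0, 1.0, self.n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (conf > lo) & (conf <= hi)
            prop = in_bin.float().mean().item()
            if prop > 0.0:
                gap = abs(conf[in_bin].mean().item() - correct[in_bin].mean().item())
                ece += gap * prop
        return ece
```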

@@ -0,0 +1,57 @@
import torch
Collaborator


First, let's put it into ignite/metrics/ExpectedCalibrationError.py instead of ignite/contrib/metrics/ExpectedCalibrationError.py

def update(self, output):
    y_pred, y = output

    assert y_pred.dim() == 2 and y_pred.shape[1] == 2, "This metric is for binary classification."
Collaborator


Let's use the following way to raise errors instead of assert:

if not (y_pred.dim() == 2 and y_pred.shape[1] == 2):
    raise ValueError("This metric is for binary classification")

To check that the input is binary, we previously did something like this:

def _check_binary_multilabel_cases(self, output: Sequence[torch.Tensor]) -> None:
    y_pred, y = output
    if not torch.equal(y, y**2):
        raise ValueError("For binary cases, y must be comprised of 0's and 1's.")
    if not torch.equal(y_pred, y_pred**2):
        raise ValueError("For binary cases, y_pred must be comprised of 0's and 1's.")

def _check_type(self, output: Sequence[torch.Tensor]) -> None:
    y_pred, y = output
    if y.ndimension() + 1 == y_pred.ndimension():
        num_classes = y_pred.shape[1]
        if num_classes == 1:
            update_type = "binary"
            self._check_binary_multilabel_cases((y_pred, y))

self.corrects = torch.tensor([], device=self.device)

def update(self, output):
    y_pred, y = output
Collaborator


We usually call .detach on both to stop gradient computation, like here:

y_pred, y = output[0].detach(), output[1].detach()
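To illustrate why detaching matters here (a generic example, not the PR's code): tensors produced inside a forward pass carry an autograd graph, and storing them undetached in metric state keeps that whole graph alive across iterations.

```python
import torch

logits = torch.tensor([0.2, 0.8], requires_grad=True)
y_pred = logits.sigmoid()        # part of an autograd graph
stored = y_pred.detach()         # detached copy: no graph reference

assert y_pred.requires_grad      # still connected to the graph
assert not stored.requires_grad  # safe to accumulate in metric state
```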

@Zekrom-7780
Author

@vfdev-5 I made the changes.

Ready for comments.

Won't be possible to have a single class for binary and multi-class ECE ?

I'll do that in the next commits if you want. I first wanted to understand how the binary-class ECE works; once that's done, the multi-class version should be straightforward, so I'm focusing on this one for now.

@vfdev-5
Collaborator

vfdev-5 commented Nov 22, 2023

@Zekrom-7780 any updates on this PR ?

@Zekrom-7780
Author

@vfdev-5 I'll create the multi-class ECE in the next 3-4 hours and push the changes.

Then I'll move on to writing the unit tests, where I'll need your help, since I haven't written any before.

@Zekrom-7780
Author

@vfdev-5, I made the changes.

Labels
module: contrib Contrib module
Development

Successfully merging this pull request may close these issues.

Add Expected Calibration Error metrics and derived