
TST Relax test_gradient_boosting_early_stopping #24541

Conversation

@jjerphan (Member) commented Sep 29, 2022

Reference Issues/PRs

Fixes a failure observed in #24446 (comment)

What does this implement/fix? Explain your changes.

Depending on the platform, the number of fitted estimators can vary slightly.
This makes test_gradient_boosting_early_stopping fail, since it checks the count against hard-coded expected values.

This PR proposes relaxing test_gradient_boosting_early_stopping by checking that the number of fitted estimators falls within an interval centered on the expected value, rather than requiring strict equality.

Any other comments?

Other (better) proposals are welcome!

        (gbc, 1e-1, 28),
        (gbr, 1e-1, 13),
        (gbc, 1e-3, 70),
        (gbr, 1e-3, 28),
    ):
        est.set_params(tol=tol)
        est.fit(X_train, y_train)
-       assert est.n_estimators_ == early_stop_n_estimators
+       assert (
+           expected_early_stop_n_estimators - delta_early_stop_n_estimators
+           <= est.n_estimators_
+           <= expected_early_stop_n_estimators + delta_early_stop_n_estimators
+       )
Member

I think it would be enough to have an upper bound with a less-than operator.
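As a minimal sketch of one reading of this suggestion (illustrative only, reusing the hypothetical names from the diff above; the comment itself gives no code):

# Keep only the upper side of the proposed interval, with a strict less-than.
assert est.n_estimators_ < (
    expected_early_stop_n_estimators + delta_early_stop_n_estimators
)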

Member

What I don't get here is why Python 3.11 would trigger this issue while other Windows builds are working.

Member

Can we just follow the description of the test and simply check the following?
gbc(tol=1e-1).fit(X, y).n_estimators_ < gbc(tol=1e-3).fit(X, y).n_estimators_
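For context, here is a self-contained sketch of that idea (illustrative only; the dataset, estimator parameters, and helper name are assumptions, not the actual test code):

# Sketch of the suggested relaxed check: a looser tol should make early
# stopping trigger sooner, so fewer estimators get fitted.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, random_state=0)

def fit_gbc(tol):
    # n_iter_no_change activates early stopping on an internal validation split.
    return GradientBoostingClassifier(
        n_estimators=1000, n_iter_no_change=10, tol=tol, random_state=42
    ).fit(X, y)

# The loose-tolerance run should stop before the tight-tolerance run.
assert fit_gbc(tol=1e-1).n_estimators_ < fit_gbc(tol=1e-3).n_estimators_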

Member Author

Done in c47bbbd.

Co-authored-by: Jérémie du Boisberranger <jeremiedbb@users.noreply.github.com>
@jjerphan jjerphan marked this pull request as ready for review September 30, 2022 09:16
@EwoutH commented Oct 5, 2022

Thanks for this effort! Since this is blocking #24446, what's needed to get it merged?

@jjerphan (Member Author) commented Oct 5, 2022

Just another maintainer's approval.

cc @jeremiedbb: does this LGTY?

@jeremiedbb (Member) left a comment

LGTM

sklearn/ensemble/tests/test_gradient_boosting.py (outdated review thread, resolved)
@jeremiedbb jeremiedbb merged commit 06c36bb into scikit-learn:main Oct 7, 2022
@jjerphan jjerphan deleted the tst/relax-test_gradient_boosting_early_stopping branch October 7, 2022 10:15
jjerphan added a commit to cmarmo/scikit-learn that referenced this pull request Oct 7, 2022
thomasjpfan pushed a commit to thomasjpfan/scikit-learn that referenced this pull request Oct 9, 2022
@glemaitre glemaitre added the To backport label (PR merged in master that needs a backport to a release branch, defined based on the milestone) Oct 26, 2022
@glemaitre glemaitre added this to the 1.1.3 milestone Oct 26, 2022
glemaitre pushed a commit to glemaitre/scikit-learn that referenced this pull request Oct 26, 2022
glemaitre pushed a commit to glemaitre/scikit-learn that referenced this pull request Oct 31, 2022
Labels
module:ensemble, No Changelog Needed, To backport (PR merged in master that needs a backport to a release branch, defined based on the milestone)
5 participants