
check_regressors_train failure on master with the latest release on conda #14106

Closed

glemaitre opened this issue Jun 17, 2019 · 10 comments

@glemaitre (Member)

We spotted a failure of check_regressors_train (with X being an array or a memmap) on master.
The failure happened for:

  • OMP
  • Ransac

We should investigate this issue.
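
For reference, here is a minimal sketch of how one can exercise the same checks locally (this is not the CI setup; check_estimator is the public entry point and runs check_regressors_train among many other common checks):

```python
# Sketch: run the full common-check suite, which includes
# check_regressors_train, against the two failing estimators.
from sklearn.linear_model import OrthogonalMatchingPursuit, RANSACRegressor
from sklearn.utils.estimator_checks import check_estimator

for estimator in (OrthogonalMatchingPursuit(), RANSACRegressor()):
    # Raises an AssertionError as soon as one of the common checks fails.
    check_estimator(estimator)
```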

@thomasjpfan (Member)

OrthogonalMatchingPursuit is also failing sometimes.

@adrinjalali Should we try to resolve this before the sprint?

@adrinjalali (Member)

It'd be nice, since it may confuse people at the sprint.

@adrinjalali (Member)

@glemaitre I can't really reproduce this; could you please give us some code to reproduce it?

@glemaitre (Member, Author)

It is weird. It does not happen anymore; maybe a Heisenbug.

For instance, it failed in this build:
https://github.com/scikit-learn/scikit-learn/runs/149613078

@jnothman (Member)

> It does not happen anymore; maybe a Heisenbug.

We should instead check the precise versions of the packages loaded from conda during and after the failure, to identify whether some combination of versions causes the issue.
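
A quick way to capture those versions in each build (a sketch; sklearn.show_versions() prints the system, Python, and dependency versions so the failing and passing runs can be diffed):

```python
# Sketch: dump the exact dependency versions of the environment under test.
import sklearn

sklearn.show_versions()
```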

@glemaitre (Member, Author)

There is no difference between the versions used in the failing build and the current build.

@glemaitre (Member, Author)

Hmm, I checked the common test, and I think RANSAC was supposed to be tagged as a poor_score estimator, which would avoid this test, but it seems we did not add the tag. We should probably do that. On second thought, I find it weird that this does not fail all the time, because I would expect the estimator to be deterministic when the random state is fixed. Testing locally, this test always returns the same score, which suggests it is. Why this is not a consistent failure is a bit mysterious to me.
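
For illustration, declaring the tag would look roughly like the sketch below, using a toy regressor rather than the real RANSACRegressor code; it assumes the private _more_tags mechanism introduced with estimator tags in 0.21:

```python
# Sketch of the estimator-tags mechanism: poor_score=True tells the common
# checks (including check_regressors_train) to relax the R^2 threshold.
import numpy as np
from sklearn.base import BaseEstimator, RegressorMixin


class ToyRegressor(RegressorMixin, BaseEstimator):
    def fit(self, X, y):
        self.mean_ = np.mean(y)
        return self

    def predict(self, X):
        return np.full(len(X), self.mean_)

    def _more_tags(self):
        # RANSACRegressor would declare the same tag to relax the
        # score assertion in this test.
        return {"poor_score": True}
```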

glemaitre reopened this Jul 12, 2019

@glemaitre (Member, Author)

Reopening since this is still happening from time to time.

@thomasjpfan (Member)

Closing since this was fixed in #21781 and the related PRs. The memmap issue was also fixed on the joblib side: joblib/joblib#1254

@ogrisel (Member) commented Mar 1, 2022

Note that the joblib fix has not been released yet at the time of writing, but a release should happen soon-ish.
