[WIP] Add ReducedRankRegression estimator (Resolves #10796) #28779
base: main
Conversation
❌ Linting issues — this PR is introducing linting issues; a summary is given below. Note that you can avoid linting issues by enabling pre-commit hooks locally. The details of the linting issues are available under the CI lint job.
I've added a ReducedRankRegression estimator (resolves #10796). It seems to behave as expected, as shown below.
I did this by extending `sklearn.linear_model.Ridge`, because I thought it would be nice to optionally apply a ridge penalty too.

Where I'm stuck is on how to handle some tests: I'm currently failing a few of them because reduced rank regression (specifically the underlying SVD operation) doesn't work, and is pointless, when there is only one target to predict. Any guidance on what to do here is appreciated.
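For reviewers' context, here is a minimal standalone sketch of the reduced rank regression computation I'm describing. This is illustrative only, not the code in this PR; the function name `reduced_rank_regression` and its signature are made up for the example, and the single-target guard shows the degenerate case I'm asking about:

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Classic RRR: OLS fit, then project the coefficients onto the
    top-`rank` right singular vectors of the fitted values."""
    Y = np.asarray(Y)
    if Y.ndim == 1 or Y.shape[1] == 1:
        # With a single target the coefficient matrix has rank at most 1,
        # so a rank constraint is vacuous -- this is the degenerate case
        # where the SVD step is pointless.
        raise ValueError("reduced rank regression needs multiple targets")
    # Ordinary least squares solution
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    # SVD of the fitted values
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    V_r = Vt[:rank].T  # top-`rank` right singular vectors
    # Project coefficients onto the rank-`rank` subspace
    return B_ols @ V_r @ V_r.T

# Tiny demo on synthetic low-rank data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
B_true = rng.normal(size=(5, 2)) @ rng.normal(size=(2, 4))  # rank-2 truth
Y = X @ B_true + 0.01 * rng.normal(size=(100, 4))
B = reduced_rank_regression(X, Y, rank=2)
print(B.shape, np.linalg.matrix_rank(B))  # (5, 4) 2
```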
Also, if inheriting from an existing estimator is inappropriate, I can implement more of the functionality myself. This is my first contribution attempt, so I'm still getting familiar with the codebase...