ENH: stats.gaussian_kde: replace use of inv_cov in logpdf #16987
Conversation
Failures appear to be unrelated.
Oh right, that makes perfect sense. If you start with the Cholesky decomposition of the covariance matrix, then you end up with something that isn't quite a Cholesky decomposition of the precision matrix, because the left factor is upper triangular instead of lower. This factorization is still workable, though, because you can just call
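A minimal sketch of what this comment seems to describe (my reading of the comment, not necessarily the exact code in the PR): if $C C^T = \Sigma$ with $C$ lower triangular, then $\Sigma^{-1} = C^{-T} C^{-1} = U U^T$ where $U = C^{-T}$ is *upper* triangular, so it is a valid factorization of the precision matrix, just not one with a lower-triangular left factor. The matrix names here are illustrative, not from the PR.

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
cov = A @ A.T + 3 * np.eye(3)  # a random symmetric positive definite matrix

C = linalg.cholesky(cov, lower=True)  # C @ C.T == cov, C lower triangular
U = linalg.inv(C).T                   # U = C^{-T}, upper triangular

# U @ U.T reproduces the precision matrix even though U is upper, not lower.
print(np.allclose(U @ U.T, linalg.inv(cov)))  # → True
print(np.allclose(np.triu(U), U))             # → True (U is upper triangular)
```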
Looks good to me.
This LGTM from a Cython perspective -- I haven't looked at the maths yet. Doing another PR for improving this implementation w.r.t. Cython technicalities is sensible to me. 👍 Let me know if you need another review for this PR generally.
Reference issue
gh-16692
What does this implement/fix?
gh-16692 used Cholesky decomposition to avoid the inversion of the covariance matrix in `gaussian_kde.pdf`. We wanted to wait for gh-15493 to finish before implementing that change in `gaussian_kde.logpdf`. Now that gh-15493 is done, this makes the change to `gaussian_kde.logpdf`.

It also simplifies what we did in gh-16692. In that PR, we found a way to use Cholesky decomposition to compute $L^T x$, where $L L^T = \Sigma^{-1}$ (that is, $L$ is a Cholesky factor of the precision matrix / inverse covariance matrix). Ultimately, we don't need $L^T x$; it was just one way to get to $x^T \Sigma^{-1} x$. There is a simpler way to get that while still avoiding the matrix inversion: $x^T \Sigma^{-1} x = y^T y$, where $C y = x$ and $C C^T = \Sigma$ (that is, $C$ is a Cholesky factor of the original covariance matrix rather than the precision matrix). The short calculation is shown here.
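The identity above can be sketched numerically (a self-contained illustration, not the actual `gaussian_kde` code): solve the triangular system $C y = x$ by forward substitution, and $y^T y$ equals the quadratic form $x^T \Sigma^{-1} x$ without ever forming $\Sigma^{-1}$.

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(0)

# A random symmetric positive definite "covariance" matrix and a test vector.
A = rng.standard_normal((4, 4))
cov = A @ A.T + 4 * np.eye(4)
x = rng.standard_normal(4)

# C is a lower-triangular Cholesky factor: C @ C.T == cov.
C = linalg.cholesky(cov, lower=True)

# Solve C y = x by forward substitution; then y @ y == x @ inv(cov) @ x.
y = linalg.solve_triangular(C, x, lower=True)
quad_chol = y @ y

# Reference value computed with an explicit inverse (what the PR avoids).
quad_inv = x @ linalg.inv(cov) @ x

print(np.isclose(quad_chol, quad_inv))  # → True
```

Avoiding the explicit inverse is both cheaper (triangular solves are $O(n^2)$ per right-hand side once the factorization is done) and numerically better conditioned.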
@steppi I think someone else can review this since it's simpler than gh-16692, but I thought you might find it interesting that those permutations weren't necessary to get the end result.