QZ-decomposition #1067
Changes from 7 commits
```diff
@@ -13,7 +13,7 @@ use na::{DefaultAllocator, Matrix, OMatrix, OVector, Scalar};
 use lapack;
 
-/// Generalized eigenvalues and generalized eigenvectors(left and right) of a pair of N*N square matrices.
+/// Generalized eigenvalues and generalized eigenvectors (left and right) of a pair of N*N square matrices.
 ///
 /// Each generalized eigenvalue (lambda) satisfies determinant(A - lambda*B) = 0
 ///
```
```diff
@@ -40,12 +40,12 @@ use lapack;
     feature = "serde-serialize",
     serde(
         bound(deserialize = "DefaultAllocator: Allocator<T, D, D> + Allocator<T, D>,
            OVector<T, D>: Serialize,
            OVector<T, D>: Deserialize<'de>,
            OMatrix<T, D, D>: Deserialize<'de>")
     )
 )]
 #[derive(Clone, Debug)]
-pub struct GE<T: Scalar, D: Dim>
+pub struct GeneralizedEigen<T: Scalar, D: Dim>
 where
     DefaultAllocator: Allocator<T, D> + Allocator<T, D, D>,
 {
```
```diff
@@ -56,15 +56,15 @@ where
     vsr: OMatrix<T, D, D>,
 }
 
-impl<T: Scalar + Copy, D: Dim> Copy for GE<T, D>
+impl<T: Scalar + Copy, D: Dim> Copy for GeneralizedEigen<T, D>
 where
     DefaultAllocator: Allocator<T, D, D> + Allocator<T, D>,
     OMatrix<T, D, D>: Copy,
     OVector<T, D>: Copy,
 {
 }
 
-impl<T: GEScalar + RealField + Copy, D: Dim> GE<T, D>
+impl<T: GeneralizedEigenScalar + RealField + Copy, D: Dim> GeneralizedEigen<T, D>
 where
     DefaultAllocator: Allocator<T, D, D> + Allocator<T, D>,
 {
```
```diff
@@ -170,7 +170,7 @@ where
     );
     lapack_check!(info);
 
-    Some(GE {
+    Some(GeneralizedEigen {
         alphar,
         alphai,
         beta,
```
```diff
@@ -220,12 +220,13 @@ where
     let mut c = 0;
 
+    let epsilon = T::RealField::default_epsilon();
+
     while c < n {
-        if eigenvalues[c].im.abs() > T::RealField::default_epsilon() && c + 1 < n && {
+        if eigenvalues[c].im.abs() > epsilon && c + 1 < n && {
             let e_conj = eigenvalues[c].conj();
             let e = eigenvalues[c + 1];
-            ((e_conj.re - e.re).abs() < T::RealField::default_epsilon())
-                && ((e_conj.im - e.im).abs() < T::RealField::default_epsilon())
+            (&e_conj.re).ulps_eq(&e.re, epsilon, 6) && (&e_conj.im).ulps_eq(&e.im, epsilon, 6)
         } {
             // taking care of the left eigenvector matrix
             l.column_mut(c).zip_apply(&self.vsl.column(c + 1), |r, i| {
```
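The hunk above replaces raw epsilon subtraction with `ulps_eq` when testing whether two eigenvalues are conjugates. As a rough illustration of what a ULP-based comparison does (a sketch, not the approx crate's actual implementation; all names here are mine), the bit patterns of two f64 values can be mapped onto a monotonic integer line and compared there:

```rust
// Sketch of a ULP (units-in-the-last-place) comparison for f64.
// Illustration only; the approx crate's ulps_eq is the real thing, and
// it additionally applies an absolute-epsilon catch that this omits.

// Map a float's bit pattern to an integer line on which adjacent
// representable floats differ by exactly 1.
fn to_ordered(x: f64) -> i64 {
    let b = x.to_bits() as i64;
    if b < 0 { i64::MIN - b } else { b }
}

fn ulps_eq(a: f64, b: f64, max_ulps: u64) -> bool {
    if a == b {
        return true; // also handles +0.0 == -0.0
    }
    if a.is_sign_positive() != b.is_sign_positive() {
        return false; // opposite signs are never "close" in ULPs
    }
    let dist = (to_ordered(a) as i128 - to_ordered(b) as i128).unsigned_abs();
    dist <= max_ulps as u128
}

fn main() {
    // 1.0 + EPSILON is exactly one representable step above 1.0.
    assert!(ulps_eq(1.0, 1.0 + f64::EPSILON, 6));
    // A 1e-4 relative difference is billions of ULPs at this scale.
    assert!(!ulps_eq(1.0, 1.0001, 6));
    println!("ok");
}
```

Unlike an absolute epsilon, a ULP bound is scale-aware: 6 ULPs means "within 6 representable steps" whether the values sit near 1.0 or near 1.0e10.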
```diff
@@ -255,7 +256,7 @@ where
     /// computes the generalized eigenvalues i.e values of lambda that satisfy the following equation
     /// determinant(A - lambda* B) = 0
     #[must_use]
-    fn eigenvalues(&self) -> OVector<Complex<T>, D>
+    pub fn eigenvalues(&self) -> OVector<Complex<T>, D>
     where
         DefaultAllocator: Allocator<Complex<T>, D>,
     {
```
```diff
@@ -265,23 +266,8 @@ where
     out[i] = if self.beta[i].clone().abs() < T::RealField::default_epsilon() {
         Complex::zero()
     } else {
-        let mut cr = self.alphar[i].clone();
-        let mut ci = self.alphai[i].clone();
-        let b = self.beta[i].clone();
-
-        if cr.clone().abs() < T::RealField::default_epsilon() {
-            cr = T::RealField::zero()
-        } else {
-            cr = cr / b.clone()
-        };
-
-        if ci.clone().abs() < T::RealField::default_epsilon() {
-            ci = T::RealField::zero()
-        } else {
-            ci = ci / b
-        };
-
-        Complex::new(cr, ci)
+        Complex::new(self.alphar[i].clone(), self.alphai[i].clone())
+            * (Complex::new(self.beta[i].clone(), T::RealField::zero()).inv())
    }
 }
```

Reviewer (on the `Complex::zero()` branch): Same comment on the check applies here.
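The rewritten branch computes the eigenvalue as the complex product `alpha * beta⁻¹` in one step instead of zeroing small components piecewise. Since `beta` is real, that division simply divides both parts of `alpha`. A minimal sketch of the arithmetic with plain `(re, im)` tuples standing in for `num_complex::Complex` (the function name is mine):

```rust
// Sketch: lambda = alpha / beta, with alpha = alphar + i*alphai coming
// from LAPACK's xGGEV and beta real. (re, im) tuples stand in for Complex.
fn eigenvalue(alphar: f64, alphai: f64, beta: f64) -> (f64, f64) {
    // Dividing a complex number by a real scalar divides both parts.
    // When beta is (near) zero the ratio blows up, which is why the
    // surrounding code guards that case with an epsilon check on beta
    // before ever dividing.
    (alphar / beta, alphai / beta)
}

fn main() {
    // A = diag(2, 3), B = I yields (alphar, alphai, beta) pairs like (2, 0, 1).
    assert_eq!(eigenvalue(2.0, 0.0, 1.0), (2.0, 0.0));
    assert_eq!(eigenvalue(1.0, 1.0, 2.0), (0.5, 0.5));
    println!("ok");
}
```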
```diff
@@ -314,8 +300,8 @@ where
  * Lapack functions dispatch.
  *
  */
-/// Trait implemented by scalars for which Lapack implements the RealField GE decomposition.
-pub trait GEScalar: Scalar {
+/// Trait implemented by scalars for which Lapack implements the RealField GeneralizedEigen decomposition.
+pub trait GeneralizedEigenScalar: Scalar {
     #[allow(missing_docs)]
     fn xggev(
         jobvsl: u8,
@@ -357,9 +343,9 @@ pub trait GEScalar: Scalar {
     ) -> i32;
 }
 
-macro_rules! real_eigensystem_scalar_impl (
+macro_rules! generalized_eigen_scalar_impl (
     ($N: ty, $xggev: path) => (
-        impl GEScalar for $N {
+        impl GeneralizedEigenScalar for $N {
             #[inline]
             fn xggev(jobvsl: u8,
                      jobvsr: u8,
@@ -409,5 +395,5 @@ macro_rules! real_eigensystem_scalar_impl (
     )
 );
 
-real_eigensystem_scalar_impl!(f32, lapack::sggev);
-real_eigensystem_scalar_impl!(f64, lapack::dggev);
+generalized_eigen_scalar_impl!(f32, lapack::sggev);
+generalized_eigen_scalar_impl!(f64, lapack::dggev);
```
Tests for GeneralizedEigen:

```diff
@@ -1,74 +1,72 @@
-use na::dimension::{Const, Dynamic};
-use na::{DMatrix, EuclideanNorm, Norm, OMatrix};
-use nl::GE;
+use na::dimension::Const;
+use na::{DMatrix, OMatrix};
+use nl::GeneralizedEigen;
 use num_complex::Complex;
 use simba::scalar::ComplexField;
-use std::cmp;
 
 use crate::proptest::*;
-use proptest::{prop_assert, proptest};
+use proptest::{prop_assert, prop_compose, proptest};
 
+prop_compose! {
+    fn f64_dynamic_dim_squares()
+        (n in PROPTEST_MATRIX_DIM)
+        (a in matrix(PROPTEST_F64,n,n), b in matrix(PROPTEST_F64,n,n)) -> (DMatrix<f64>, DMatrix<f64>) {
+        (a,b)
+    }
+}
```
```diff
 proptest! {
     #[test]
-    fn ge(n in PROPTEST_MATRIX_DIM) {
-        let n = cmp::max(1, cmp::min(n, 10));
-        let a = DMatrix::<f64>::new_random(n, n);
-        let b = DMatrix::<f64>::new_random(n, n);
-        let a_condition_no = a.clone().try_inverse().and_then(|x| Some(EuclideanNorm.norm(&x)* EuclideanNorm.norm(&a)));
-        let b_condition_no = b.clone().try_inverse().and_then(|x| Some(EuclideanNorm.norm(&x)* EuclideanNorm.norm(&b)));
+    fn ge((a,b) in f64_dynamic_dim_squares()) {
 
-        if a_condition_no.unwrap_or(200000.0) < 5.0 && b_condition_no.unwrap_or(200000.0) < 5.0 {
-            let a_c = a.clone().map(|x| Complex::new(x, 0.0));
-            let b_c = b.clone().map(|x| Complex::new(x, 0.0));
+        let a_c = a.clone().map(|x| Complex::new(x, 0.0));
+        let b_c = b.clone().map(|x| Complex::new(x, 0.0));
+        let n = a.shape_generic().0;
 
-            let ge = GE::new(a.clone(), b.clone());
-            let (vsl,vsr) = ge.clone().eigenvectors();
+        let ge = GeneralizedEigen::new(a.clone(), b.clone());
+        let (vsl,vsr) = ge.clone().eigenvectors();
 
-            for (i,(alpha,beta)) in ge.raw_eigenvalues().iter().enumerate() {
-                let l_a = a_c.clone() * Complex::new(*beta, 0.0);
-                let l_b = b_c.clone() * *alpha;
+        for (i,(alpha,beta)) in ge.raw_eigenvalues().iter().enumerate() {
+            let l_a = a_c.clone() * Complex::new(*beta, 0.0);
+            let l_b = b_c.clone() * *alpha;
 
-                prop_assert!(
-                    relative_eq!(
-                        ((&l_a - &l_b)*vsr.column(i)).map(|x| x.modulus()),
-                        OMatrix::zeros_generic(Dynamic::new(n), Const::<1>),
-                        epsilon = 1.0e-7));
+            prop_assert!(
+                relative_eq!(
+                    ((&l_a - &l_b)*vsr.column(i)).map(|x| x.modulus()),
+                    OMatrix::zeros_generic(n, Const::<1>),
+                    epsilon = 1.0e-5));
```
Reviewer: Relative comparisons tend to break down near zero - can't this make this test relatively brittle?

Author: I see what you mean. I believe there is the absdiff catch, as shown here https://docs.rs/approx/0.4.0/src/approx/relative_eq.rs.html#68, to tackle that concern. Hope I'm not preaching to the choir; if I am, apologies.

Reviewer: I didn't know that. It is a little unfortunate then that [...]. With all that said, it might be fine in this case, if you know that the "scale" (for example, the largest entry) of your numbers is expected to be somewhat close to [...].

Author: The eigenvectors seem to be guaranteed by LAPACK in their documentation, so I think scale-wise we should be good.
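The absdiff catch referenced above is what makes `relative_eq!` usable against an all-zeros target: approx checks an absolute epsilon first and only then falls back to a relative test. A minimal restatement of that two-step logic (a sketch, not the approx crate's code; parameter names are mine):

```rust
// Sketch of approx-style relative comparison: an absolute-epsilon check
// first (rescues comparisons near zero), then a relative one.
fn relative_eq(a: f64, b: f64, epsilon: f64, max_relative: f64) -> bool {
    let abs_diff = (a - b).abs();
    if abs_diff <= epsilon {
        return true; // absolute catch: handles a or b (or both) near 0.0
    }
    // Otherwise compare relative to the larger magnitude.
    abs_diff <= a.abs().max(b.abs()) * max_relative
}

fn main() {
    // Near zero the absolute catch decides:
    assert!(relative_eq(1.0e-9, 0.0, 1.0e-7, 1.0e-12));
    // Away from zero the relative branch decides, so overall scale drops out:
    assert!(relative_eq(1.0e20, 1.0e20 * (1.0 + 1.0e-13), 1.0e-7, 1.0e-12));
    assert!(!relative_eq(1.0, 2.0, 1.0e-7, 1.0e-12));
    println!("ok");
}
```

This also restates the reviewer's concern: against an exact 0.0 target only the absolute branch can ever fire, so the chosen epsilon implicitly assumes the residuals live near unit scale.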
```diff
 
-                prop_assert!(
-                    relative_eq!(
-                        (vsl.column(i).adjoint()*(&l_a - &l_b)).map(|x| x.modulus()),
-                        OMatrix::zeros_generic(Const::<1>, Dynamic::new(n)),
-                        epsilon = 1.0e-7))
-            };
-        }
+            prop_assert!(
+                relative_eq!(
+                    (vsl.column(i).adjoint()*(&l_a - &l_b)).map(|x| x.modulus()),
+                    OMatrix::zeros_generic(Const::<1>, n),
+                    epsilon = 1.0e-5))
+        };
     }
```
```diff
 
     #[test]
     fn ge_static(a in matrix4(), b in matrix4()) {
-        let a_condition_no = a.clone().try_inverse().and_then(|x| Some(EuclideanNorm.norm(&x)* EuclideanNorm.norm(&a)));
-        let b_condition_no = b.clone().try_inverse().and_then(|x| Some(EuclideanNorm.norm(&x)* EuclideanNorm.norm(&b)));
-
-        if a_condition_no.unwrap_or(200000.0) < 5.0 && b_condition_no.unwrap_or(200000.0) < 5.0 {
-            let ge = GE::new(a.clone(), b.clone());
-            let a_c = a.clone().map(|x| Complex::new(x, 0.0));
-            let b_c = b.clone().map(|x| Complex::new(x, 0.0));
-            let (vsl,vsr) = ge.eigenvectors();
-            let eigenvalues = ge.raw_eigenvalues();
+        let ge = GeneralizedEigen::new(a.clone(), b.clone());
+        let a_c = a.clone().map(|x| Complex::new(x, 0.0));
+        let b_c = b.clone().map(|x| Complex::new(x, 0.0));
+        let (vsl,vsr) = ge.eigenvectors();
+        let eigenvalues = ge.raw_eigenvalues();
 
-            for (i,(alpha,beta)) in eigenvalues.iter().enumerate() {
-                let l_a = a_c.clone() * Complex::new(*beta, 0.0);
-                let l_b = b_c.clone() * *alpha;
+        for (i,(alpha,beta)) in eigenvalues.iter().enumerate() {
+            let l_a = a_c.clone() * Complex::new(*beta, 0.0);
+            let l_b = b_c.clone() * *alpha;
 
-                prop_assert!(
-                    relative_eq!(
-                        ((&l_a - &l_b)*vsr.column(i)).map(|x| x.modulus()),
-                        OMatrix::zeros_generic(Const::<4>, Const::<1>),
-                        epsilon = 1.0e-7));
-                prop_assert!(
-                    relative_eq!((vsl.column(i).adjoint()*(&l_a - &l_b)).map(|x| x.modulus()),
-                        OMatrix::zeros_generic(Const::<1>, Const::<4>),
-                        epsilon = 1.0e-7))
-            }
-        };
+            prop_assert!(
+                relative_eq!(
+                    ((&l_a - &l_b)*vsr.column(i)).map(|x| x.modulus()),
+                    OMatrix::zeros_generic(Const::<4>, Const::<1>),
+                    epsilon = 1.0e-5));
+            prop_assert!(
+                relative_eq!((vsl.column(i).adjoint()*(&l_a - &l_b)).map(|x| x.modulus()),
+                    OMatrix::zeros_generic(Const::<1>, Const::<4>),
+                    epsilon = 1.0e-5))
+        }
     }
 }
```
Tests for QZ:

```diff
@@ -1,74 +1,34 @@
-use na::{DMatrix, EuclideanNorm, Norm};
+use na::DMatrix;
 use nl::QZ;
-use num_complex::Complex;
-use simba::scalar::ComplexField;
-use std::cmp;
 
 use crate::proptest::*;
-use proptest::{prop_assert, proptest};
+use proptest::{prop_assert, prop_compose, proptest};
 
+prop_compose! {
+    fn f64_dynamic_dim_squares()
+        (n in PROPTEST_MATRIX_DIM)
+        (a in matrix(PROPTEST_F64,n,n), b in matrix(PROPTEST_F64,n,n)) -> (DMatrix<f64>, DMatrix<f64>) {
+        (a,b)
+    }
+}
```
```diff
 proptest! {
     #[test]
-    fn qz(n in PROPTEST_MATRIX_DIM) {
-        let n = cmp::max(1, cmp::min(n, 10));
-        let a = DMatrix::<f64>::new_random(n, n);
-        let b = DMatrix::<f64>::new_random(n, n);
+    fn qz((a,b) in f64_dynamic_dim_squares()) {
 
         let qz = QZ::new(a.clone(), b.clone());
         let (vsl,s,t,vsr) = qz.clone().unpack();
-        let eigenvalues = qz.raw_eigenvalues();
 
-        prop_assert!(relative_eq!(&vsl * s * vsr.transpose(), a.clone(), epsilon = 1.0e-7));
-        prop_assert!(relative_eq!(vsl * t * vsr.transpose(), b.clone(), epsilon = 1.0e-7));
-
-        let a_condition_no = a.clone().try_inverse().and_then(|x| Some(EuclideanNorm.norm(&x)* EuclideanNorm.norm(&a)));
-        let b_condition_no = b.clone().try_inverse().and_then(|x| Some(EuclideanNorm.norm(&x)* EuclideanNorm.norm(&b)));
-
-        if a_condition_no.unwrap_or(200000.0) < 5.0 && b_condition_no.unwrap_or(200000.0) < 5.0 {
-            let a_c = a.clone().map(|x| Complex::new(x, 0.0));
-            let b_c = b.clone().map(|x| Complex::new(x, 0.0));
-
-            for (alpha,beta) in eigenvalues.iter() {
-                let l_a = a_c.clone() * Complex::new(*beta, 0.0);
-                let l_b = b_c.clone() * *alpha;
-
-                prop_assert!(
-                    relative_eq!(
-                        (&l_a - &l_b).determinant().modulus(),
-                        0.0,
-                        epsilon = 1.0e-7));
-            };
-        };
+        prop_assert!(relative_eq!(&vsl * s * vsr.transpose(), a, epsilon = 1.0e-7));
+        prop_assert!(relative_eq!(vsl * t * vsr.transpose(), b, epsilon = 1.0e-7));
     }
```
```diff
 
     #[test]
     fn qz_static(a in matrix4(), b in matrix4()) {
         let qz = QZ::new(a.clone(), b.clone());
         let (vsl,s,t,vsr) = qz.unpack();
-        let eigenvalues = qz.raw_eigenvalues();
 
         prop_assert!(relative_eq!(&vsl * s * vsr.transpose(), a, epsilon = 1.0e-7));
         prop_assert!(relative_eq!(vsl * t * vsr.transpose(), b, epsilon = 1.0e-7));
-
-        let a_condition_no = a.clone().try_inverse().and_then(|x| Some(EuclideanNorm.norm(&x)* EuclideanNorm.norm(&a)));
-        let b_condition_no = b.clone().try_inverse().and_then(|x| Some(EuclideanNorm.norm(&x)* EuclideanNorm.norm(&b)));
-
-        if a_condition_no.unwrap_or(200000.0) < 5.0 && b_condition_no.unwrap_or(200000.0) < 5.0 {
-            let a_c = a.clone().map(|x| Complex::new(x, 0.0));
-            let b_c = b.clone().map(|x| Complex::new(x, 0.0));
-
-            for (alpha,beta) in eigenvalues.iter() {
-                let l_a = a_c.clone() * Complex::new(*beta, 0.0);
-                let l_b = b_c.clone() * *alpha;
-
-                prop_assert!(
-                    relative_eq!(
-                        (&l_a - &l_b).determinant().modulus(),
-                        0.0,
-                        epsilon = 1.0e-7));
-            }
-        };
     }
 }
```
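The remaining qz assertions check the reconstructions A = Q S Zᵀ and B = Q T Zᵀ. The deleted determinant checks were probing the related identity that, for (quasi-)triangular S and T, each generalized eigenvalue pair (alpha_i, beta_i) can be read off the diagonals and satisfies det(beta*S - alpha*T) = 0. A self-contained 2x2 sketch of that identity (the matrices here are invented for illustration):

```rust
// Sketch: for upper-triangular S and T from a QZ-style decomposition,
// the pairs (alpha_i, beta_i) = (s_ii, t_ii) each make beta*S - alpha*T
// singular. The 2x2 matrices below are made up for illustration.
fn det2(m: [[f64; 2]; 2]) -> f64 {
    m[0][0] * m[1][1] - m[0][1] * m[1][0]
}

fn main() {
    let s = [[2.0, 1.0], [0.0, 3.0]]; // stands in for the "S" factor
    let t = [[1.0, 0.5], [0.0, 2.0]]; // stands in for the "T" factor

    for i in 0..2 {
        let (alpha, beta) = (s[i][i], t[i][i]);
        // Form beta*S - alpha*T entrywise; its diagonal entry i vanishes,
        // so the triangular difference is singular.
        let mut d = [[0.0; 2]; 2];
        for r in 0..2 {
            for c in 0..2 {
                d[r][c] = beta * s[r][c] - alpha * t[r][c];
            }
        }
        assert!(det2(d).abs() < 1.0e-12);
    }
    println!("ok");
}
```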
Reviewer: I missed this the first time around, but at first glance this logic does not sound to me. What does it do? I think this needs a solid comment and/or documentation. For example, imagine scaling a matrix by 1e20: then all its eigenvalues get scaled by 1e20 and this logic will behave entirely differently. It seems to me at the very least that we'd need to look relative to the maximum absolute eigenvalue, but even then it's not bullet proof.

Author: The scaling point is something I've never considered; it's a pretty solid point. But in this scenario, LAPACK uses the eigenvalues to signal how to assemble eigenvectors. If an eigenvalue is complex and the subsequent eigenvalue is its complex conjugate, the eigenvector to be assembled has its real and imaginary parts in two subsequent columns. Does the scaling edge case apply here, seeing that it's more of an assembly problem driven by signalling than a numeric one? In the documentation, the way to check for realness is by looking at a_j values, which get mapped to the imaginary part of an eigenvalue. While it's interesting for me to look deeper in general, and perhaps into the LAPACK code, I was wondering whether the burden of getting this edge case right falls more on LAPACK. In either case, putting in a comment here does make sense. Is this not apt to explain what is being done? It is possible I've been in the trenches long enough to not know if it doesn't map well: https://github.com/dimforge/nalgebra/pull/1067/files#diff-a056a4c6b4fb629735a73fcb65e0d5223386cb47dedb6ab80d5a1b905ebd9d25R198

Reviewer: I think for me it's not clear why we need to do any kind of "post processing" at all. It would be nice to have a comment explaining this. Moreover, I would be very surprised if LAPACK expects you to use approximate comparison when it's signaling something through values - are you sure exact comparison is not more appropriate? I just don't have enough context to know what the appropriate thing to do here is.

Author: This is fair. I think some level of post-processing is expected, but not to the extent I have done, which is more a result of (possibly unwarranted) distrust of LAPACK. The changes reflect what I think LAPACK minimally expects us to do.
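For reference, the post-processing under discussion stems from how a real-valued xGGEV packs complex output: when eigenvalue j has nonzero imaginary part, eigenvalues j and j+1 form a conjugate pair, and columns j and j+1 of the real eigenvector matrix hold the shared eigenvector's real and imaginary parts. A hedged sketch of that unpacking with plain slices and `(re, im)` tuples (an illustration, not the PR's actual code):

```rust
// Sketch of unpacking one complex conjugate eigenvector pair from
// real-valued LAPACK output: column j = Re(v), column j+1 = Im(v),
// and the two eigenvectors are v and conj(v). Illustration only.
fn unpack_pair(re_col: &[f64], im_col: &[f64]) -> (Vec<(f64, f64)>, Vec<(f64, f64)>) {
    // v is the eigenvector for eigenvalue j ...
    let v: Vec<(f64, f64)> = re_col.iter().zip(im_col).map(|(&r, &i)| (r, i)).collect();
    // ... and its conjugate belongs to eigenvalue j + 1.
    let v_conj: Vec<(f64, f64)> = v.iter().map(|&(r, i)| (r, -i)).collect();
    (v, v_conj)
}

fn main() {
    let re = [1.0, 0.0]; // hypothetical column j of VSR
    let im = [0.0, 1.0]; // hypothetical column j + 1 of VSR
    let (v, v_conj) = unpack_pair(&re, &im);
    assert_eq!(v, vec![(1.0, 0.0), (0.0, 1.0)]);
    assert_eq!(v_conj, vec![(1.0, 0.0), (0.0, -1.0)]);
    println!("ok");
}
```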