QZ-decomposition #1067

Merged: 31 commits, merged Apr 30, 2022

Commits:
7230ae1  First attempt at xgges (qz decomposition), passing tests. Serializati… (metric-space, Jan 19, 2022)
6f7ef38  Format file (metric-space, Jan 19, 2022)
769f20c  Comments more tailored to QZ (metric-space, Jan 19, 2022)
b2c6c6b  Add non-naive way of calculate generalized eigenvalue, write spotty t… (metric-space, Jan 20, 2022)
6a28306  Commented out failing tests, refactored checks for almost zeroes (metric-space, Jan 20, 2022)
7e9345d  Correction for not calculating absolurte value (metric-space, Jan 21, 2022)
7d8fb3d  New wrapper for generalized eigenvalues and associated eigenvectors v… (metric-space, Jan 25, 2022)
748848f  Cleanup of QZ module and added GE's calculation of eigenvalues as a t… (metric-space, Jan 25, 2022)
ae35d1c  New code and modified tests for generalized_eigenvalues (metric-space, Feb 3, 2022)
162bb32  New code and modified tests for qz (metric-space, Feb 3, 2022)
d88337e  Formatting (metric-space, Feb 3, 2022)
55fdd84  Formatting (metric-space, Feb 3, 2022)
4362f00  Added comment on logic (metric-space, Feb 4, 2022)
4038e66  Correction to keep naming of left and right eigenvector matrices cons… (metric-space, Feb 4, 2022)
d5069f3  Removed extra memory allocation for buffer (now redundant) (metric-space, Feb 6, 2022)
a4de6a8  Corrected deserialization term in serialization impls (metric-space, Feb 9, 2022)
497a4d8  Correction in eigenvector matrices build up algorithm (metric-space, Feb 12, 2022)
fb0cb51  Remove condition number, tests pass without. Add proper test generato… (metric-space, Feb 12, 2022)
b7fe6b9  Name change (metric-space, Feb 12, 2022)
91b7e05  Change name of test generator (metric-space, Feb 12, 2022)
7510d48  Doc string corrections (metric-space, Feb 12, 2022)
e52f117  Change name of copied macro base (metric-space, Feb 12, 2022)
5e10ca4  Add another case for when eigenvalues should be mapped to zero. Make … (metric-space, Feb 15, 2022)
c8a920f  Minimal post-processing and fix to documentation (metric-space, Feb 27, 2022)
3413ab7  Correct typos, move doc portion to comment and fix borrow to clone (metric-space, Mar 5, 2022)
4413a35  Fix doc (metric-space, Mar 5, 2022)
adf50a6  Fix formatting (metric-space, Mar 5, 2022)
2743eef  Add in explicit type of matrix element to module overview docs (metric-space, Mar 5, 2022)
80a844a  Update check for zero (metric-space, Apr 16, 2022)
bc31012  Add newline (metric-space, Apr 16, 2022)
ff2d431  Remove repeated docs (metric-space, Apr 16, 2022)
82 changes: 36 additions & 46 deletions nalgebra-lapack/src/generalized_eigenvalues.rs
@@ -180,25 +180,44 @@ where
}

/// Calculates the generalized eigenvectors (left and right) associated with the generalized eigenvalues
/// Outputs two matrices, the first one containing the left eigenvectors of the generalized eigenvalues
/// as columns and the second matrix contains the right eigenvectors of the generalized eigenvalues
/// as columns
/// Outputs two matrices.
Collaborator: Doesn't there need to be an extra newline here to make sure only the first sentence ends up in the module overview?

Collaborator: Sorry, not module overview, but the docs for the struct. Please render them (cargo doc) and see :)

/// The first output matix contains the left eigenvectors of the generalized eigenvalues
Collaborator: typo: "matix"

/// as columns.
/// The second matrix contains the right eigenvectors of the generalized eigenvalues
/// as columns.
///
/// The right eigenvector v(j) corresponding to the eigenvalue lambda(j)
/// of (A,B) satisfies
/// The right eigenvector v(j) corresponding to the eigenvalue lambda(j)
/// of (A,B) satisfies
///
/// A * v(j) = lambda(j) * B * v(j).
/// A * v(j) = lambda(j) * B * v(j)
///
/// The left eigenvector u(j) corresponding to the eigenvalue lambda(j)
/// of (A,B) satisfies
/// The left eigenvector u(j) corresponding to the eigenvalue lambda(j)
/// of (A,B) satisfies
///
/// u(j)**H * A = lambda(j) * u(j)**H * B .
/// where u(j)**H is the conjugate-transpose of u(j).
/// u(j)**H * A = lambda(j) * u(j)**H * B
/// where u(j)**H is the conjugate-transpose of u(j).
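The two defining relations above can be checked numerically. The sketch below is illustrative only (it uses SciPy, which wraps the same LAPACK routine family, rather than nalgebra-lapack); the matrices `A` and `B` are arbitrary examples, not anything from this PR.

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[2.0, 0.0], [0.0, 1.0]])

# w: generalized eigenvalues; vl, vr: left/right eigenvectors as columns
w, vl, vr = eig(A, B, left=True, right=True)

for j in range(len(w)):
    v = vr[:, j]
    u = vl[:, j]
    # Right eigenvector: A * v(j) = lambda(j) * B * v(j)
    assert np.allclose(A @ v, w[j] * (B @ v))
    # Left eigenvector: u(j)**H * A = lambda(j) * u(j)**H * B
    assert np.allclose(u.conj() @ A, w[j] * (u.conj() @ B))
```

Note that SciPy states its left-eigenvector convention as `a.H vl[:,i] = w[i].conj() b.H vl[:,i]`, which is the conjugate transpose of the relation quoted in the docs above.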
///
/// How the eigenvectors are build up:
Collaborator: "built"?

///
/// Since the input entries are all real, the generalized eigenvalues if complex come in pairs
Collaborator: I'm very confused by this sentence. It says "if complex". But don't we always have to expect complex eigenvalues and eigenvectors?

Author: I think I understand what you mean. Real is a subset of the complex numbers, so if we are talking about a mix of complex and real eigenvalues, why not refer to them all by the more general type?

LAPACK makes the distinction explicitly:

    If ALPHAI(j) is zero, then the j-th eigenvalue is real;

My guess is that, computationally (going by the LAPACK docs), eigenvalues with a non-zero imaginary part are the exception, and there is no value (resource-wise) in representing those with a zero imaginary part as explicitly complex. At least, that is what I gathered.

Fixed doc.

Edit: misunderstood intent.

/// as a consequence of <https://en.wikipedia.org/wiki/Complex_conjugate_root_theorem>
Collaborator: Not sure how this link renders (not familiar with the angle bracket syntax here), but maybe use "proper" linking to make the text overall more readable (URLs break the flow of reading)? E.g. [the complex conjugate root theorem](https://en.wikipedia.org/wiki/Complex_conjugate_root_theorem).

/// The Lapack routine output reflects this by expecting the user to unpack the complex eigenvalues associated
/// eigenvectors from the real matrix output via the following procedure
///
/// (Note: VL stands for the lapack real matrix output containing the left eigenvectors as columns,
/// VR stands for the lapack real matrix output containing the right eigenvectors as columns)
///
/// If the j-th and (j+1)-th eigenvalues form a complex conjugate pair,
Collaborator: How does the user know this?

Author: Moved to a comment, so the caller doesn't have to know the details unless they inspect the source code.

/// then
///
/// u(j) = VL(:,j)+i*VL(:,j+1)
/// u(j+1) = VL(:,j)-i*VL(:,j+1)
///
/// and
///
/// u(j) = VR(:,j)+i*VR(:,j+1)
/// v(j+1) = VR(:,j)-i*VR(:,j+1).
Collaborator: I'm still very confused about this. Isn't this what's going on internally in the method below? The docs here are written as if the user needs to do this after calling this method, but it looks to me like the method actually takes care of all this.

Author: I agree the user doesn't need to know unless they inspect the source code. Portion moved to a comment within the function body.
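For reference, the unpacking procedure described in the doc text can be exercised directly against the raw LAPACK output. This sketch uses SciPy's low-level binding `dggev` (the double-precision instance of the ggev family that this kind of wrapper builds on); the input pencil is an arbitrary example chosen to have a complex-conjugate eigenvalue pair.

```python
import numpy as np
from scipy.linalg.lapack import dggev

# A rotation-like pencil whose generalized eigenvalues are +/- i
A = np.array([[0.0, -1.0], [1.0, 0.0]])
B = np.eye(2)

alphar, alphai, beta, vl, vr, work, info = dggev(A, B)
assert info == 0

# Per the LAPACK docs, a nonzero ALPHAI(j) marks the first member of a
# complex-conjugate pair stored in columns j and j+1 of VL/VR.
j = 0
assert alphai[j] != 0.0
lam = (alphar[j] + 1j * alphai[j]) / beta[j]

# u(j) = VL(:,j) + i*VL(:,j+1),  v(j) = VR(:,j) + i*VR(:,j+1)
u = vl[:, j] + 1j * vl[:, j + 1]
v = vr[:, j] + 1j * vr[:, j + 1]

assert np.allclose(A @ v, lam * (B @ v))                 # right eigenvector
assert np.allclose(u.conj() @ A, lam * (u.conj() @ B))   # left eigenvector
```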

///
/// What is going on below?
/// If the j-th and (j+1)-th eigenvalues form a complex conjugate pair,
/// then u(j) = VSL(:,j)+i*VSL(:,j+1) and u(j+1) = VSL(:,j)-i*VSL(:,j+1).
/// and then v(j) = VSR(:,j)+i*VSR(:,j+1) and v(j+1) = VSR(:,j)-i*VSR(:,j+1).
pub fn eigenvectors(self) -> (OMatrix<Complex<T>, D, D>, OMatrix<Complex<T>, D, D>)
Collaborator: Another thing I did not realize before now: here we take self by value, but in the implementation we still clone self.vsl and self.vsr. This seems a little unfortunate, perhaps?

where
DefaultAllocator:
@@ -216,18 +235,14 @@ where
.clone()
.map(|x| Complex::new(x, T::RealField::zero()));

let eigenvalues = &self.eigenvalues();
let eigenvalues = &self.raw_eigenvalues();

let mut c = 0;

let epsilon = T::RealField::default_epsilon();

while c < n {
if eigenvalues[c].im.abs() > epsilon && c + 1 < n && {
let e_conj = eigenvalues[c].conj();
let e = eigenvalues[c + 1];
(&e_conj.re).ulps_eq(&e.re, epsilon, 6) && (&e_conj.im).ulps_eq(&e.im, epsilon, 6)
} {
if eigenvalues[c].0.im.abs() > epsilon && c + 1 < n {
Collaborator: I'm still uneasy about this check. Does the LAPACK manual/docs have any recommendations for how to interpret the data?

Author: In the LAPACK docs:

    If ALPHAI(j) is zero, then the j-th eigenvalue is real;

alphai is a double precision array as per LAPACK. That's as far as I've gathered from the docs on how to interpret the data. The alphai from the docs gets pushed into the imaginary part of the first element of the tuples in the list returned by raw_eigenvalues.

Collaborator: Do you know if LAPACK explicitly sets this to zero, or if it just happens to become "close enough" to zero? From reading that, I'd say the only "safe" thing to do is an exact equality comparison (==), because any attempt at approximate comparison might have unintended effects. But I also don't know enough about what LAPACK is doing here and what it expects users to do.

Certainly the current epsilon check breaks down if the matrix doesn't happen to have its largest eigenvalue scaled to roughly ~1.

Author: @Andlon Apologies for the radio silence. I thought the best way to check whether LAPACK intends exact equality rather than "close enough" was to substitute the equality operator in and run the tests. It fails the tests.

You've been driving the scaled-eigenvalue point for a while; only now has its seriousness struck me, apologies for that. LAPACK definitely doesn't take care of the scaling problem, which is evident if you apply random scaling to both A and B (the input matrices) in the tests (the tests fail).

I can, in the method that assembles the eigenvectors from the eigenvalues, bring back the eigenvalue method I deleted, obtain the maximum eigenvalue, and scale the eigenvalue vector so that the maximum eigenvalue is ~1. Are we in agreement that this is the way forward?

Again, I think the tests say LAPACK doesn't deal with the problem, and if LAPACK did recommend exact equality, the tests say the opposite: "close enough" is what we need.
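The scan being debated can be isolated as a small standalone helper. This is a hypothetical Python sketch, not the crate's API: it walks alphai and decides which indices start a conjugate pair, with the epsilon-vs-exact-equality choice exposed as a parameter.

```python
import numpy as np

def conjugate_pair_starts(alphai, eps=0.0):
    """Return the indices j where (j, j+1) form a complex-conjugate pair.

    eps=0.0 is the exact-equality reading of the LAPACK docs ("If ALPHAI(j)
    is zero, then the j-th eigenvalue is real"); a small positive eps
    reproduces the approximate check used in this PR.
    """
    starts, j, n = [], 0, len(alphai)
    while j < n:
        if abs(alphai[j]) > eps and j + 1 < n:
            starts.append(j)
            j += 2  # consume both members of the pair
        else:
            j += 1  # real eigenvalue, stands alone
    return starts

# alphai for eigenvalues [2, 1+3i, 1-3i, -5]
print(conjugate_pair_starts(np.array([0.0, 3.0, -3.0, 0.0])))  # [1]
```

Stepping by 2 after a hit is what keeps the second member of a pair from being misread as the start of a new pair.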

// taking care of the left eigenvector matrix
l.column_mut(c).zip_apply(&self.vsl.column(c + 1), |r, i| {
*r = Complex::new(r.re.clone(), i.clone());
@@ -253,32 +268,7 @@ where
(l, r)
}

// only used for internal calculation for assembling eigenvectors based on realness of
// eigenvalues and complex-conjugate checks of subsequent non-real eigenvalues
fn eigenvalues(&self) -> OVector<Complex<T>, D>
where
DefaultAllocator: Allocator<Complex<T>, D>,
{
let mut out = Matrix::zeros_generic(self.vsl.shape_generic().0, Const::<1>);

let epsilon = T::RealField::default_epsilon();

for i in 0..out.len() {
out[i] = if self.beta[i].clone().abs() < epsilon
|| (self.alphai[i].clone().abs() < epsilon
&& self.alphar[i].clone().abs() < epsilon)
{
Complex::zero()
} else {
Complex::new(self.alphar[i].clone(), self.alphai[i].clone())
* (Complex::new(self.beta[i].clone(), T::RealField::zero()).inv())
}
}

out
}
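For readers skimming the diff, the removed helper above computes lambda = (alphar + i*alphai) / beta, mapping near-zero numerators or denominators to zero. A Python sketch of the same logic (assumptions: plain float inputs, numpy only for the complex output array; this mirrors the deleted code, including its choice to send beta ~ 0 to zero rather than to infinity):

```python
import numpy as np

def eigenvalues(alphar, alphai, beta, eps=np.finfo(float).eps):
    out = np.zeros(len(beta), dtype=complex)
    for i in range(len(beta)):
        if abs(beta[i]) < eps or (abs(alphai[i]) < eps and abs(alphar[i]) < eps):
            out[i] = 0.0  # indeterminate or zero eigenvalue mapped to zero
        else:
            out[i] = complex(alphar[i], alphai[i]) / beta[i]
    return out

vals = eigenvalues([2.0, 0.0], [0.0, 0.0], [4.0, 1.0])
# first eigenvalue is 2/4 = 0.5; the second has a near-zero numerator
# and is mapped to zero
```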

/// outputs the unprocessed (almost) version of generalized eigenvalues ((alphar, alpai), beta)
/// outputs the unprocessed (almost) version of generalized eigenvalues ((alphar, alphai), beta)
/// straight from LAPACK
#[must_use]
pub fn raw_eigenvalues(&self) -> OVector<(Complex<T>, T), D>