
Minimizing trait bound complexity when working with dimensional generics #1365

Open
gokuldharan opened this issue Feb 29, 2024 · 3 comments

@gokuldharan

I've ended up with a ridiculously gnarly function signature because of the many trait bounds required to support some simple operations on statically-sized matrices with generic dimensions. Specifically, I want to remove rows and columns while still being able to add and reallocate (snippet below). Surely there's an easier way to do this that I've completely missed? Any help would be hugely appreciated!

https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=2718224f6cb54b60c95e378be8a0d2cd

    use nalgebra::{
        allocator::{Allocator, Reallocator},
        ArrayStorage, Const, DefaultAllocator, DimDiff, DimSub, Matrix, SMatrix, SVector, Storage,
        ToTypenum, U1,
    };

    #[inline(always)]
    #[allow(clippy::too_many_arguments)]
    #[allow(non_snake_case)]
    fn qp_reduce<const N: usize, const M: usize>(
        Q: &SMatrix<f64, N, N>,
        c: &SVector<f64, N>,
        A: &SMatrix<f64, M, N>,
        b: &SVector<f64, M>,
        xk: &SVector<f64, N>,
        limit_index_set: &SVector<u32, N>,
        ls_weights: &SVector<f64, M>,
        violating_index: usize,
        delta_x: &mut SVector<f64, N>,
        scale: &mut f64,
    ) where
        Const<N>: ToTypenum,
        Const<M>: ToTypenum,
        <Const<N> as ToTypenum>::Typenum: DimSub<Const<M>>,
        Const<M>: DimSub<U1>,
        Const<N>: DimSub<U1>,

        DefaultAllocator: Reallocator<f64, Const<M>, Const<N>, Const<M>, DimDiff<Const<N>, U1>>,
        ArrayStorage<f64, M, N>: Storage<f64, Const<M>, Const<N>>,
        DefaultAllocator: Allocator<f64, <Const<N> as DimSub<Const<1>>>::Output, Const<N>>,
        DefaultAllocator:
            Reallocator<f64, Const<N>, Const<N>, <Const<N> as DimSub<Const<1>>>::Output, Const<N>>,
        ArrayStorage<f64, N, N>: Storage<f64, Const<N>, Const<N>>,
        DefaultAllocator: Allocator<
            f64,
            <Const<N> as DimSub<Const<1>>>::Output,
            <Const<N> as DimSub<Const<1>>>::Output,
        >,
        DefaultAllocator: Reallocator<
            f64,
            <Const<N> as DimSub<Const<1>>>::Output,
            Const<N>,
            <Const<N> as DimSub<Const<1>>>::Output,
            <Const<N> as DimSub<Const<1>>>::Output,
        >,
        DefaultAllocator: Allocator<f64, <Const<N> as DimSub<Const<1>>>::Output>,
        DefaultAllocator:
            Reallocator<f64, Const<N>, Const<1>, <Const<N> as DimSub<Const<1>>>::Output, Const<1>>,
        Matrix<f64, Const<N>, Const<1>, ArrayStorage<f64, N, 1>>:
            std::ops::Add<Matrix<f64, Const<N>, Const<1>, ArrayStorage<f64, N, 1>>>,
        ArrayStorage<f64, N, 1>: Storage<f64, Const<N>>,
        DefaultAllocator: Allocator<u32, <Const<N> as DimSub<Const<1>>>::Output>,
        DefaultAllocator:
            Reallocator<u32, Const<N>, Const<1>, <Const<N> as DimSub<Const<1>>>::Output, Const<1>>,
        ArrayStorage<u32, N, 1>: Storage<u32, Const<N>>,
    {
        let xk_delta_x = *xk + *delta_x;
        let A_reduced = A.remove_column(violating_index);
        let Q_reduced = Q.remove_row(violating_index).remove_column(violating_index);
        let c_reduced = c.remove_row(violating_index);
        let xk_reduced = xk.remove_row(violating_index);
        let limit_set_reduced = limit_index_set.remove_row(violating_index);
    }
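One common way to tame signatures like this is to bundle the repeated bounds behind a single helper trait with a blanket impl, so the bounds are spelled once and everything downstream writes a single bound. Below is a minimal, self-contained sketch of the pattern with toy stand-in traits (`DefaultAlloc`, `Alloc`, and `QpReduceBounds` are hypothetical names, not nalgebra's actual API); the same trick applies to the real `Allocator`/`Reallocator` bounds above:

```rust
// Toy stand-ins so the sketch is self-contained; in real code these would be
// nalgebra's allocator traits and `DefaultAllocator`.
struct DefaultAlloc;
trait Alloc<const R: usize, const C: usize> {}
impl<const R: usize, const C: usize> Alloc<R, C> for DefaultAlloc {}

// One helper trait that bundles every bound the function needs. The blanket
// impl makes it hold whenever the underlying bounds hold, so the function
// (and its callers) write a single bound instead of a dozen.
trait QpReduceBounds<const N: usize, const M: usize> {}
impl<const N: usize, const M: usize> QpReduceBounds<N, M> for DefaultAlloc
where
    DefaultAlloc: Alloc<M, N> + Alloc<N, N> + Alloc<N, 1>,
{
}

// The signature now carries one bound; the body returns a dummy value just
// to make the sketch observable.
fn qp_reduce<const N: usize, const M: usize>() -> usize
where
    DefaultAlloc: QpReduceBounds<N, M>,
{
    N + M
}
```

The limitation worth knowing: the bundled bound only removes noise, it does not remove the underlying requirements, so generic callers still need `DefaultAlloc: QpReduceBounds<N, M>` in their own `where` clauses.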
@Ralith
Collaborator

Ralith commented Mar 1, 2024

Have you considered using dynamically-sized matrices? What are some typical dimensions for your application?

@gokuldharan
Author

@Ralith No larger than 10x10, but this is going to be in the critical path of some code that needs to be highly performant. For a ballpark, this would be the core of a QP solver that I'd estimate needs to allocate all of these vectors somewhere between 1,000 and 100,000 times per second.

@Ralith
Collaborator

Ralith commented Mar 13, 2024

I can't specifically help with the generics here, but bear in mind that copying 800-byte structures all over the stack is not necessarily going to be the optimal strategy either. Perhaps there's a way you could reuse dynamically allocated storage?
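A minimal sketch of that reuse idea, using plain `std` with row-major slices rather than nalgebra types (the `Workspace` name and layout are hypothetical): keep one buffer alive across solver iterations and write each reduced matrix into it, so allocation happens once rather than 1,000-100,000 times per second:

```rust
struct Workspace {
    buf: Vec<f64>,
}

impl Workspace {
    fn new(capacity: usize) -> Self {
        Self { buf: Vec::with_capacity(capacity) }
    }

    // Copy the n-by-n row-major matrix `src` into the reused buffer with
    // row `k` and column `k` removed. Once the buffer has grown to its
    // steady-state size, no further allocation occurs.
    fn reduce(&mut self, src: &[f64], n: usize, k: usize) -> &[f64] {
        self.buf.clear();
        for r in (0..n).filter(|&r| r != k) {
            for c in (0..n).filter(|&c| c != k) {
                self.buf.push(src[r * n + c]);
            }
        }
        &self.buf
    }
}
```

For example, reducing a 3x3 matrix with `k = 1` keeps rows and columns 0 and 2, yielding a 2x2 result in the same buffer on every call.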
