
SparseMultilinearExtension::evaluate is slow #823

Open
Pratyush opened this issue Apr 24, 2024 · 1 comment

@Pratyush (Member)

It takes time that is linear in the size of the hypercube, as opposed to linear in the number of non-zero coefficients.
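
Concretely, for a sparse MLE with non-zero index set $S \subseteq \{0,1\}^n$ and evaluations $f(i)$ (notation here is only for illustration), evaluating at a point $x \in \mathbb{F}^n$ amounts to

$$ \tilde{f}(x) = \sum_{i \in S} f(i) \prod_{k=1}^{n} \bigl( i_k x_k + (1 - i_k)(1 - x_k) \bigr), $$

which should take on the order of $|S| \cdot n$ field operations, rather than time proportional to the size $2^n$ of the hypercube.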

@Cesar199999 commented May 14, 2024

Hey @Pratyush, @DimitrisPapac and I have been looking at the SparseMultilinearExtension::evaluate method in poly/src/evaluations/multivariate/multilinear/sparse.rs and we are not sure what you meant by "linear in the size of the hypercube" vs "linear in the number of non-zero coefficients". In SparseMultilinearExtension, the polynomial is represented as a list of non-zero evaluations, which is also the definition of sparsity adopted throughout Arkworks. So, did you mean "linear in the size of the hypercube" vs "linear in the number of non-zero evaluations"?

If that's the case, then the current implementation is already (up to the window mechanics) linear in the number of non-zero evaluations: fix_variables chooses its window size dynamically as log2 of the number of non-zero evaluations, so when that number is small it behaves like the naive sparse algorithm, and when it is large it behaves like the dense one (a simplified sketch of this windowed strategy is included after the benchmark numbers below). From what we can tell, it handles this trade-off quite well. For reference, we benchmarked it against the naive sparse algorithm, i.e., the following snippet, whose cost is proportional to the number of non-zero evaluations times the number of variables:

// Naive sparse evaluation: for each non-zero evaluation v at index i,
// multiply v by the Lagrange basis factor of every variable, then sum.
// Cost: O(#non-zero evaluations * num_vars) field operations.
self.evaluations
    .iter()
    .map(|(&i, &v): (&usize, &F)| {
        v * (0..self.num_vars)
            // the k-th bit of index i selects point[k] or 1 - point[k]
            .map(|k| if (i >> k) & 1 == 1 { point[k] } else { F::ONE - point[k] })
            .product::<F>()
    })
    .sum()

and we observed that the current implementation is much more efficient:

SparseMultilinear/Eval with num_vars = 12/64
                        time:   [27.815 µs 28.005 µs 28.236 µs]
                        change: [+183.54% +187.82% +192.19%] (p = 0.00 < 0.05)
                        Performance has regressed.

SparseMultilinear/Eval with num_vars = 13/64
                        time:   [31.129 µs 31.659 µs 32.155 µs]
                        change: [+172.20% +179.54% +186.73%] (p = 0.00 < 0.05)
                        Performance has regressed.

SparseMultilinear/Eval with num_vars = 14/128
                        time:   [67.307 µs 68.197 µs 69.064 µs]
                        change: [+214.55% +222.07% +229.83%] (p = 0.00 < 0.05)
                        Performance has regressed.

SparseMultilinear/Eval with num_vars = 15/128
                        time:   [69.144 µs 69.678 µs 70.292 µs]
                        change: [+220.91% +226.30% +231.55%] (p = 0.00 < 0.05)
                        Performance has regressed.

SparseMultilinear/Eval with num_vars = 16/256
                        time:   [153.67 µs 155.13 µs 156.80 µs]
                        change: [+247.99% +260.19% +272.32%] (p = 0.00 < 0.05)
                        Performance has regressed.

SparseMultilinear/Eval with num_vars = 17/256
                        time:   [168.49 µs 170.16 µs 171.91 µs]
                        change: [+320.87% +326.18% +331.39%] (p = 0.00 < 0.05)
                        Performance has regressed.

SparseMultilinear/Eval with num_vars = 18/512
                        time:   [361.08 µs 365.49 µs 370.68 µs]
                        change: [+358.08% +369.54% +382.02%] (p = 0.00 < 0.05)
                        Performance has regressed.

SparseMultilinear/Eval with num_vars = 19/512
                        time:   [387.14 µs 392.74 µs 398.55 µs]
                        change: [+342.45% +354.34% +365.11%] (p = 0.00 < 0.05)
                        Performance has regressed.

SparseMultilinear/Eval with num_vars = 20/1024
                        time:   [813.39 µs 825.67 µs 838.82 µs]
                        change: [+387.20% +400.18% +415.22%] (p = 0.00 < 0.05)
                        Performance has regressed.

SparseMultilinear/Eval with num_vars = 21/1024
                        time:   [886.92 µs 898.19 µs 911.98 µs]
                        change: [+413.20% +424.94% +436.74%] (p = 0.00 < 0.05)
                        Performance has regressed.

SparseMultilinear/Eval with num_vars = 22/2048
                        time:   [1.7714 ms 1.7876 ms 1.8048 ms]
                        change: [+424.87% +434.76% +443.92%] (p = 0.00 < 0.05)
                        Performance has regressed.
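
For clarity, here is a simplified, self-contained sketch of the windowed strategy we believe evaluate / fix_variables follows. The toy modular field, the function names, and all details below are our own illustration, not the actual ark-poly code:

use std::collections::BTreeMap;

// Toy prime field (u64 arithmetic mod 2^61 - 1), used only to keep the
// sketch self-contained; the real code is generic over ark_ff::Field.
const P: u64 = (1u64 << 61) - 1;
fn add(a: u64, b: u64) -> u64 { (a + b) % P }
fn sub(a: u64, b: u64) -> u64 { (a + P - b) % P }
fn mul(a: u64, b: u64) -> u64 { ((a as u128 * b as u128) % P as u128) as u64 }

// Dense table of eq(b, prefix) for every b in {0,1}^prefix.len(), with
// variable 0 in the least-significant bit; costs O(2^prefix.len()).
fn precompute_eq(prefix: &[u64]) -> Vec<u64> {
    let mut table = vec![1u64];
    for &x in prefix {
        let mut next = Vec::with_capacity(2 * table.len());
        next.extend(table.iter().map(|&t| mul(t, sub(1, x)))); // new bit = 0
        next.extend(table.iter().map(|&t| mul(t, x)));         // new bit = 1
        table = next;
    }
    table
}

// Windowed evaluation: repeatedly fix `window` variables at a time and
// merge colliding indices, so the number of live entries never exceeds
// the number of non-zero evaluations.
fn evaluate_sparse(evals: &BTreeMap<usize, u64>, num_vars: usize, point: &[u64]) -> u64 {
    assert_eq!(point.len(), num_vars);
    // window ~ ceil(log2(#non-zero evaluations))
    let window = (usize::BITS - (evals.len().max(2) - 1).leading_zeros()) as usize;
    let mut current = evals.clone();
    let mut rest = point;
    while !rest.is_empty() {
        let w = window.min(rest.len());
        let (head, tail) = rest.split_at(w);
        let eq_table = precompute_eq(head); // 2^w entries
        let mut next = BTreeMap::new();
        for (&idx, &val) in &current {
            let low = idx & ((1 << w) - 1); // the w variables being fixed now
            let high = idx >> w;            // index over the remaining variables
            let entry = next.entry(high).or_insert(0);
            *entry = add(*entry, mul(val, eq_table[low]));
        }
        current = next;
        rest = tail;
    }
    // after all variables are fixed, only index 0 can remain
    current.get(&0).copied().unwrap_or(0)
}

fn main() {
    // two non-zero evaluations over 3 variables (bit k of the index = variable k)
    let evals = BTreeMap::from([(0b011usize, 5u64), (0b101, 7)]);
    let point = [2u64, 3, 4];
    println!("{}", evaluate_sparse(&evals, 3, &point));
}

Roughly speaking, each round costs about 2^window + (#non-zero entries) ~ 2 * (#non-zero entries) field operations, and there are about num_vars / window rounds, so the total work is on the order of (#non-zero entries) * num_vars / log2(#non-zero entries), i.e., slightly better than the naive sparse product above.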

Did you have any other algorithm in mind? If so, could you please point us to a paper or any other resource where we can read about it? Thanks!
