I have profiled our proving process and discovered that a significant portion of the process involves decompressing and parsing gates.
We can decompress and parse the gates during parameter initialization instead, moving this work out of the proving phase. Since parameter initialization runs in the background, preprocessing will usually be finished before the first proof is requested.
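The flow above can be sketched roughly as follows. This is a hypothetical illustration, not the real zkBob API: `decompressAndParseGates()` and `prove()` are illustrative stand-ins for the actual decompression/parsing and proving routines.

```javascript
// Illustrative stand-ins, not the real zkBob functions.
function decompressAndParseGates(compressed) {
  // Placeholder for the expensive decompression + parsing step.
  return { gates: compressed };
}

function prove(params, witness) {
  // Placeholder for the actual proving routine.
  return `proof(${params.gates},${witness})`;
}

let paramsPromise = null;

function initParams(compressedGates) {
  // Kick off decompression/parsing once, in the background;
  // later callers share the same promise.
  if (!paramsPromise) {
    paramsPromise = Promise.resolve().then(() =>
      decompressAndParseGates(compressedGates)
    );
  }
  return paramsPromise;
}

async function proveWithParams(witness) {
  // By the time the first proof is requested, initialization has
  // usually finished, so this await rarely blocks.
  const params = await initParams();
  return prove(params, witness);
}
```

The idea is simply that `initParams(...)` is called at application startup, so the one-time cost overlaps with other startup work instead of delaying the first proof.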
I've tried implementing this, and here are some results showing the reduced proving time:
However, the decompressed gates are huge: about 260 MB. This modification has some potential drawbacks:
- Parameter initialization time will increase, since the gates must be decompressed and parsed there. However, this is a one-time cost paid during initialization that yields significant time savings during every proving run.
- The application's RAM usage will increase from 420 MB to 680 MB.
- `WebAssembly.Memory.grow(..)` is extremely slow on iOS, so we need to compute the required memory size up front and allocate it with a single call. This adds some time overhead to the parameter initialization phase.
- Mobile versions of Chrome and Safari enforce a strict memory limit (500-800 MB, depending on fragmentation), and exceeding it can crash the application. So this optimization is generally not feasible on mobile devices.
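The single-call allocation mentioned above could look roughly like this. It's a minimal sketch: the `growToFit` helper name is illustrative, and the 260 MB figure is taken from the measurements above.

```javascript
// Sketch: compute the required memory size up front and reserve it with a
// single grow() call, instead of many incremental grows (slow on iOS).

const WASM_PAGE_SIZE = 64 * 1024; // WebAssembly memory grows in 64 KiB pages

function growToFit(memory, requiredBytes) {
  const currentPages = memory.buffer.byteLength / WASM_PAGE_SIZE;
  const targetPages = Math.ceil(requiredBytes / WASM_PAGE_SIZE);
  if (targetPages > currentPages) {
    memory.grow(targetPages - currentPages); // one call instead of many
  }
  return memory.buffer.byteLength;
}

// Reserve room for ~260 MB of decompressed gates in one step.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 16384 });
growToFit(memory, 260 * 1024 * 1024);
```

Note that after `grow()` the old `memory.buffer` is detached, so any typed-array views into it must be recreated.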
I think that this optimization makes sense for delegated prover and cloud environments, and may also make sense for PC. However, it is definitely not viable for mobile devices due to their limited memory capacity.
Related PRs:
Note: It is necessary to update zkbob-cloud and zkbob-prover to include the changes introduced in this PR.