This is a summary issue to track all the challenges with batches, especially non-uniform batches. All relevant issues and PRs are linked here so we can close them while still keeping track of them.
The current batch code is brittle and hard to understand and extend. It might be a good time to take a fresh look.
Currently, the Jdbi code determines the structure of a batch operation by analyzing only the very first line in the batch. It assumes that all following lines use the same argument types and structure, caches the argument type resolution, and binds the arguments for the following batch lines using that cached resolution. This fails when the lines are non-uniform, e.g. when an argument binds an integer in one line and a string in the next, or when lines have a varying number of arguments.
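The failure mode can be sketched in a few lines. This is a minimal, self-contained simulation of the behavior described above, not Jdbi's actual code: `resolveBinder` and `bindAll` are hypothetical names, and the "binder" resolved from the first line stands in for Jdbi's cached argument type resolution.

```java
import java.util.List;
import java.util.function.Function;

class BatchSketch {
    // Hypothetical stand-in for argument-type resolution: picks a binding
    // strategy based on the runtime type of a single value.
    static Function<Object, String> resolveBinder(Object firstValue) {
        if (firstValue instanceof Integer) {
            // this binder assumes every later value is also an Integer
            return v -> "setInt(" + (Integer) v + ")";
        }
        return v -> "setString('" + v + "')";
    }

    static String bindAll(List<?> rows) {
        // resolved once, from the first line only, then cached for the batch
        Function<Object, String> cached = resolveBinder(rows.get(0));
        StringBuilder sb = new StringBuilder();
        for (Object row : rows) {
            // a non-uniform line hits the cached binder with the wrong type
            sb.append(cached.apply(row)).append('\n');
        }
        return sb.toString();
    }
}
```

A uniform batch like `(1, 2)` binds fine, while a non-uniform batch like `(1, "x")` fails at the second line with a `ClassCastException`, mirroring the mis-binding described above.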
Batch derives all of its speed from this caching. Solving the problem for non-uniform batches by simply not caching would give up that advantage entirely: it would perform no better than a plain loop that executes the same statement again and again, resolving the argument bindings every time.
We spent a bit of time discussing this on the Slack channel. One proposal is to use "templates": multiple sets of resolved arguments ("argument templates") are created, and each batch line chooses which template to use. This loosens the rigid "every set of batch arguments must be uniform" requirement without going all the way to "resolve the arguments anew for every line".
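The template idea from the discussion could look roughly like this. All names here are illustrative, not a proposed Jdbi API: a binder is resolved once per distinct argument shape (keyed here, for simplicity, by the argument's class) and each line selects the matching template, so resolution cost scales with the number of distinct shapes rather than the number of lines.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

class TemplateSketch {
    // one resolved "argument template" per distinct argument shape
    static final Map<Class<?>, Function<Object, String>> templates = new HashMap<>();

    static Function<Object, String> templateFor(Object value) {
        // resolve once per shape, then reuse: cheaper than per-line
        // resolution, but still correct for non-uniform batches
        return templates.computeIfAbsent(value.getClass(), c ->
            c == Integer.class
                ? v -> "setInt(" + v + ")"
                : v -> "setString('" + v + "')");
    }

    static String bindAll(List<?> rows) {
        StringBuilder sb = new StringBuilder();
        for (Object row : rows) {
            // each line picks its template instead of reusing the first line's
            sb.append(templateFor(row).apply(row)).append('\n');
        }
        return sb.toString();
    }
}
```

A batch of `(1, "x", 2)` now binds all three lines correctly while resolving only two templates (one for `Integer`, one for `String`).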
Issues:
- PreparedBatch.bindPojos #1604
- challenges with non-uniform batches
- bindList with prepareBatch when lists have varied size #2335

PRs: