Autosave causes out of shared memory errors in batch processing #1582
Comments
junixar pushed a commit to junixar/pgjdbc that referenced this issue on Oct 15, 2019:
release auto save points in batch processing in order to avoid out of shared memory error fix for the issue pgjdbc#1582
I think this was fixed here: #1409
In #1409 it was fixed only for executeQuery with a single statement, not for batch executions. That is the point of this new issue.
I can confirm that, too.
davecramer pushed a commit that referenced this issue on Oct 30, 2019:
release auto save points in batch processing in order to avoid out of shared memory error fix for the issue #1582
Thanks :-)
Fixed in #1583.
I'm submitting a bug that occurs when using save points and batch processing.
Describe the issue
Analogous to issue #1407: if AutoSave.ALWAYS is set, the save points are not released after a batch is executed, which causes "out of shared memory" errors after a large number of batches have been executed.
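For context: the autosave behavior described above is controlled by pgjdbc's documented `autosave` connection parameter (equivalent to setting AutoSave.ALWAYS programmatically). With it enabled, the driver sets a savepoint before each statement so it can retry after certain errors. A connection URL along these lines (host, port, and database name are illustrative) turns it on:

```
jdbc:postgresql://localhost:5432/testdb?autosave=always
```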
Driver Version?
42.2.6
Java Version?
1.8.0_221
OS Version?
Debian GNU/Linux 9
PostgreSQL Version?
PostgreSQL 11.5
To Reproduce
Steps to reproduce the behaviour:
Expected behaviour
The following code demonstrates the problem. With default settings, the test runs on my machine for about 12,000 iterations before the error is thrown. If the "cleanup" is activated, it runs through just fine.
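The reporter's test code was not captured in this copy of the issue. The following is a hypothetical sketch of a reproduction along the lines described above; the table name, column, iteration counts, and connection details are assumptions, not taken from the original report:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Properties;

public class AutosaveBatchRepro {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "test");          // assumed credentials
        props.setProperty("password", "test");
        // AutoSave.ALWAYS: the driver sets a savepoint before each statement.
        props.setProperty("autosave", "always");

        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/testdb", props)) {
            con.setAutoCommit(false);
            try (PreparedStatement ps =
                     con.prepareStatement("INSERT INTO t (v) VALUES (?)")) {
                for (int i = 0; i < 20_000; i++) {
                    ps.setInt(1, i);
                    ps.addBatch();
                    if (i % 100 == 0) {
                        // With the bug, the savepoints created for the batch
                        // are never released, so the server eventually fails
                        // with "out of shared memory" after enough batches.
                        ps.executeBatch();
                        // "Cleanup" workaround: committing the transaction
                        // releases all savepoints held so far.
                        // con.commit();
                    }
                }
                ps.executeBatch();
            }
            con.commit();
        }
    }
}
```

Note that this sketch requires a running PostgreSQL server and the pgjdbc driver on the classpath; the commented-out `con.commit()` marks where the "cleanup" mentioned in the report would go.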