Autosave causes out of shared memory errors in batch processing #1582

Closed

junixar opened this issue Oct 14, 2019 · 5 comments
Comments

@junixar
Contributor

junixar commented Oct 14, 2019

I'm submitting a bug that occurs when using savepoints together with batch processing.

  • [x] bug report
  • [ ] feature request

Describe the issue
Analogous to issue #1407: if AutoSave.ALWAYS is set, the savepoints are not released after a batch is executed, which causes "out of shared memory" errors once a large number of batches has been executed.
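For illustration, here is a minimal standalone sketch of the pattern that AutoSave.ALWAYS effectively produces, assuming a local postgres database; the class name, URL, credentials and table are placeholders, while the savepoint name PGJDBC_AUTOSAVE is the one the driver uses. PostgreSQL keeps every stacked savepoint (subtransaction) alive until it is released or the transaction ends, so without a RELEASE the server eventually reports "out of shared memory":

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.Statement;

  public class SavepointLeakSketch {
    public static void main(String[] args) throws Exception {
      // Placeholder URL and credentials -- adjust for your environment.
      try (Connection conn = DriverManager.getConnection(
          "jdbc:postgresql://localhost/postgres", "postgres", "postgres")) {
        conn.setAutoCommit(false);
        try (Statement st = conn.createStatement()) {
          st.execute("CREATE TEMP TABLE leaktest(a int)");
          for (int i = 0; i < 100000; i++) {
            // Re-issuing the same name does not replace the old savepoint:
            // PostgreSQL stacks a new subtransaction each time.
            st.execute("SAVEPOINT PGJDBC_AUTOSAVE");
            st.execute("INSERT INTO leaktest VALUES (" + i + ")");
            // No RELEASE SAVEPOINT here, so the stack keeps growing; on a
            // default configuration the loop is expected to fail with
            // "out of shared memory" long before it completes.
          }
        }
      }
    }
  }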

Driver Version?
42.2.6

Java Version?
1.8.0_221

OS Version?
Debian GNU/Linux 9

PostgreSQL Version?
PostgreSQL 11.5

To Reproduce
Steps to reproduce the behaviour: run the test below.

Expected behaviour
The following code demonstrates the problem. With default settings the test runs on my machine for about 12000 iterations before the error is thrown. If the cleanup is activated, it runs through just fine.

  // Requires: org.junit.Test, org.postgresql.core.BaseConnection,
  // org.postgresql.jdbc.AutoSave, org.postgresql.test.TestUtil,
  // java.sql.Connection, java.sql.PreparedStatement, java.sql.Statement,
  // java.util.Properties, java.util.UUID
  @Test
  public void testPgsqlJdbcSavepointWithBatchProcessing() throws Exception {
    Properties props = new Properties();
    props.setProperty("username", "postgres");
    props.setProperty("_test_database", "postgres");
    props.setProperty("cleanupSavepoints", "true");

    Connection conn = TestUtil.openDB(props);

    BaseConnection baseConnection = conn.unwrap(BaseConnection.class);
    baseConnection.setAutosave(AutoSave.ALWAYS);
    baseConnection.setAutoCommit(false);

    TestUtil.createTable(conn, "rollbacktest", "a int, str text");

    int iterations = 20000;
    boolean cleanup = false; // set to true to work around the leak manually

    PreparedStatement statement = conn.prepareStatement("insert into rollbacktest(a, str) values (?, ?)");
    for (int i = 0; i < iterations; i++) {
      long startTime = System.nanoTime();

      statement.setInt(1, i);
      statement.setString(2, UUID.randomUUID().toString());
      statement.addBatch();
      statement.executeBatch();

      long timeElapsed = System.nanoTime() - startTime;

      System.out.println(i + "/" + iterations + " took " + timeElapsed / 1000000 + "ms ");

      if (cleanup) {
        // Temporarily disable autosave so the manual RELEASE below is not
        // itself wrapped in yet another savepoint.
        baseConnection.setAutosave(AutoSave.NEVER);

        Statement releaseStatement = conn.createStatement();
        releaseStatement.executeUpdate("release savepoint PGJDBC_AUTOSAVE");
        releaseStatement.close();

        baseConnection.setAutosave(AutoSave.ALWAYS);
      }
    }
    statement.close();
  }
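
The cleanup branch above is an application-level workaround: autosave is switched off so that the manual release savepoint PGJDBC_AUTOSAVE statement is not itself wrapped in a new savepoint, the most recently established autosave savepoint is released, and autosave is switched back on. Doing this once per iteration keeps the server's subtransaction stack from growing.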
junixar pushed a commit to junixar/pgjdbc that referenced this issue Oct 15, 2019
release auto save points in batch processing in order to avoid out of
shared memory error

fix for the issue pgjdbc#1582
@bokken
Member

bokken commented Oct 15, 2019

I think this was fixed here: #1409

@junixar
Contributor Author

junixar commented Oct 16, 2019

In #1409 this was fixed only for executeQuery with a single statement, not for batch executions. That is the point of this new issue.
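
As a hedged sketch of that asymmetry (reusing conn and the rollbacktest table from the test above, and assuming cleanupSavepoints=true with AutoSave.ALWAYS):

  // Within the test above (conn open, autocommit off, rollbacktest created):
  PreparedStatement ps = conn.prepareStatement("insert into rollbacktest(a, str) values (?, ?)");

  // Single-statement path, covered by #1409:
  //   SAVEPOINT PGJDBC_AUTOSAVE; INSERT ...; RELEASE SAVEPOINT PGJDBC_AUTOSAVE
  ps.setInt(1, 1);
  ps.setString(2, "single");
  ps.executeUpdate();

  // Batch path, the subject of this issue -- at the time of this report:
  //   SAVEPOINT PGJDBC_AUTOSAVE; INSERT ...;   (no RELEASE)
  ps.setInt(1, 2);
  ps.setString(2, "batched");
  ps.addBatch();
  ps.executeBatch();
  ps.close();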

@tbrodbeck-adc
Contributor

I can confirm that, too.

davecramer pushed a commit that referenced this issue Oct 30, 2019
release auto save points in batch processing in order to avoid out of
shared memory error

fix for the issue #1582
@tbrodbeck-adc
Contributor

Thanks :-)

@davecramer
Member

Fixed in #1583.
