
Postgres: use pg_try_advisory_lock instead of pg_advisory_lock #962

Open · wants to merge 13 commits into golang-migrate:master from AkuSilvenius:960_add_try_advisory_lock

Conversation

@AkuSilvenius commented Jul 26, 2023

Issue

#960

Changes

  • For the postgres Lock, use pg_try_advisory_lock instead of pg_advisory_lock so the driver no longer waits indefinitely to acquire the lock (the non-blocking call is sketched after this list)
  • Add a config parameter x-lock-retry-max-interval to configure the maximum interval used for exponential backoff when retrying Lock
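
For context, pg_advisory_lock blocks the calling session until the lock becomes free, while pg_try_advisory_lock returns immediately with a boolean, so the caller controls how to wait and retry. A minimal sketch of the non-blocking acquisition, assuming a plain database/sql connection (the package, function, and parameter names are illustrative, not the PR's actual code):

```go
package pglock

import (
	"context"
	"database/sql"
)

// tryLock attempts to take a Postgres advisory lock without blocking.
// Unlike "SELECT pg_advisory_lock($1)", which makes the session wait
// until the lock is free, pg_try_advisory_lock returns true or false
// immediately, leaving the retry policy up to the caller.
func tryLock(ctx context.Context, db *sql.DB, key int64) (bool, error) {
	var acquired bool
	err := db.QueryRowContext(ctx, "SELECT pg_try_advisory_lock($1)", key).Scan(&acquired)
	return acquired, err
}
```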

@coveralls commented Jan 23, 2024

Coverage Status

coverage: 56.454% (+0.1%) from 56.335% when pulling 49bee23 on AkuSilvenius:960_add_try_advisory_lock into 0c456c4 on golang-migrate:master.

@AkuSilvenius marked this pull request as ready for review January 23, 2024 10:39
@dhui (Member) reviewed

left a comment in case we decide to go this route

break
}

time.Sleep(100 * time.Millisecond)
@dhui (Member) commented on the snippet above

Use the backoff library with an exponential backoff + jitter as the default. Note, this may cause other nodes/hosts to take longer to deploy due to the longer wait period so the backoff should be configurable.
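
A rough sketch of that suggestion, assuming the library meant is github.com/cenkalti/backoff/v4 and reusing the hypothetical tryLock helper from the earlier sketch (the function name and maxInterval parameter are illustrative, not the PR's actual implementation):

```go
package pglock

import (
	"context"
	"database/sql"
	"errors"
	"time"

	"github.com/cenkalti/backoff/v4"
)

var errLockBusy = errors.New("advisory lock held by another session")

// lockWithBackoff retries tryLock with exponential backoff and jitter
// (ExponentialBackOff randomizes each interval). maxInterval caps the
// wait between attempts so other nodes are not delayed unnecessarily.
func lockWithBackoff(ctx context.Context, db *sql.DB, key int64, maxInterval time.Duration) error {
	b := backoff.NewExponentialBackOff()
	b.MaxInterval = maxInterval
	b.MaxElapsedTime = 0 // keep retrying until ctx is cancelled

	op := func() error {
		acquired, err := tryLock(ctx, db, key)
		if err != nil {
			return backoff.Permanent(err) // query errors are not worth retrying
		}
		if !acquired {
			return errLockBusy // transient: wait for the next backoff interval
		}
		return nil
	}
	return backoff.Retry(op, backoff.WithContext(b, ctx))
}
```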

@AkuSilvenius (Author) replied

@dhui thanks for the feedback, I've added a configurable exponential retry.

this may cause other nodes/hosts to take longer to deploy due to the longer wait period

I've added defaults and kept them quite small to address the above
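
For reference, if the new option is passed the same way as the driver's other x- query parameters on the database URL, usage might look roughly like this (the 15s value, the URL, and the migration path are illustrative assumptions, not taken from the PR):

```go
package main

import (
	"log"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/postgres"
	_ "github.com/golang-migrate/migrate/v4/source/file"
)

func main() {
	// x-lock-retry-max-interval caps the exponential backoff used while
	// retrying pg_try_advisory_lock; the value format here is assumed.
	m, err := migrate.New(
		"file://migrations",
		"postgres://user:pass@localhost:5432/db?sslmode=disable&x-lock-retry-max-interval=15s",
	)
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```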

@AkuSilvenius (Author) commented

Unfortunately the tests keep failing because the device is running out of space.

@eelco commented Apr 18, 2024

I assume the tests were temporarily broken, so this might work after a rebase? I’m interested in this fix landing 😊

@AkuSilvenius (Author) replied

I assume the tests were temporarily broken, so this might work after a rebase? I’m interested in this fix landing 😊

@eelco thanks for noting. It looks like #1072 fixed the tests, and the pipeline here in the PR is now passing. Unfortunately this PR got stuck in sleep mode for quite a while due to the unstable pipeline; I hope we can merge it soon :)


4 participants