Patroni

About

Here is the link to the video of the runbook simulation.

Patroni is an open source project that provides an automatic HA solution for PostgreSQL databases. It also has centralized configuration management capabilities and is fully integrated with Consul to provide consensus and DNS services. Each Patroni instance has a Consul agent, which acts as a proxy for the GitLab Consul fleet.

The Patroni binaries run on the same host as the PostgreSQL database. In a sense, Patroni can be seen as a way of managing a PostgreSQL instance. Configuration and general operations of PostgreSQL instances are now performed using the corresponding Patroni commands.

Scaling the cluster up

Here is the link to the video of the runbook simulation.

  1. Increase the node count of patroni in terraform.
  2. Apply the terraform and wait for chef to converge.
  3. On the new box, run gitlab-patronictl list and ensure that the other cluster members are identical to those seen by running the same command on another cluster member.
  4. systemctl enable patroni && systemctl start patroni (for some reason we do not automate this yet, but this operation is rare).
  5. Follow the patroni logs. The current implementation for bootstrapping new nodes takes a pg_basebackup from the leader. This will take several hours, after which point streaming replication will begin. Silence alerts as necessary. (A condensed sketch of steps 3–5 follows this list.)
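
A condensed sketch of steps 3–5 on the new node; the command forms match those used elsewhere in this runbook:

# Compare cluster membership with what another member reports
sudo gitlab-patronictl list
# Enable and start the patroni service
sudo systemctl enable patroni && sudo systemctl start patroni
# Follow the bootstrap: pg_basebackup from the leader, then streaming replication
sudo journalctl -fu patroni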

Scaling the cluster down

Here is the link to the video of the runbook simulation.

  1. Using this method, we can only delete the patroni node with the highest index. Make sure it isn't the primary! (See the pre-check sketched after this list.)
  2. Take the replica out of the read replica pool, and ensure it doesn't become the primary if we are unlucky enough to suffer a failover while scaling the cluster down: follow the replica maintenance instructions.
  3. Decrease the node count of patroni in terraform. Review the terraform plan carefully, and then apply it.
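
A minimal pre-check sketch for step 1, assuming the cluster name pg11-ha-cluster used elsewhere in this runbook:

# Confirm the highest-index member is not the Leader
sudo gitlab-patronictl list
# If it is, move the leader to another member first (interactive prompts)
sudo gitlab-patronictl switchover pg11-ha-cluster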

Patroni basics

Patroni configuration file

It is located at /var/opt/gitlab/patroni/patroni.yml and contains all the directives to configure Patroni and the PostgreSQL instance. It's divided into sections, the main ones being:

  • scope: Cluster name
  • name: Name of this instance
  • consul: Location of the consul agent (usually localhost)
  • bootstrap: Directives to configure and deploy new instances
  • postgresql: General configuration settings for PostgreSQL

Checking service status

The patroni service is managed with systemd, so you can check the service status with systemctl status patroni and logs with journalctl -u patroni (it should be enabled and running).

Run gitlab-patronictl list to check the state of the patroni cluster; you should see the new node join the cluster and go through the following states:

  • creating replica
  • starting
  • running

The node will also be added to the Consul DNS entry; you can verify that with:

$ dig @127.0.0.1 -p8600 +short replica.patroni.service.consul.

At the time of writing, the database is ~7 TB and it takes ~4h for a new node to catch up.

Cluster information

Run gitlab-patronictl list on any Patroni member to list all the cluster members and their statuses. There can be only a single Leader for the cluster.

A leader change is an important event. When unexpected, it is called a failover. Under controlled circumstances (i.e. when upgrading), it is called a switchover, and Patroni has a special command for performing it.

patroni-01-db-gstg $ gitlab-patronictl list
+-----------------+------------------------------------------------+---------------+--------+---------+----+-----------+
|     Cluster     |                     Member                     |      Host     |  Role  |  State  | TL | Lag in MB |
+-----------------+------------------------------------------------+---------------+--------+---------+----+-----------+
| pg11-ha-cluster | patroni-01-db-gstg.c.gitlab-staging-1.internal | 10.224.29.101 |        | running |  8 |           |
| pg11-ha-cluster | patroni-02-db-gstg.c.gitlab-staging-1.internal | 10.224.29.102 |        | running |  8 |           |
| pg11-ha-cluster | patroni-04-db-gstg.c.gitlab-staging-1.internal | 10.224.29.104 | Leader | running |  8 |         0 |
| pg11-ha-cluster | patroni-05-db-gstg.c.gitlab-staging-1.internal | 10.224.29.105 |        | running |  8 |         0 |
| pg11-ha-cluster | patroni-06-db-gstg.c.gitlab-staging-1.internal | 10.224.29.106 |        | running |  8 |         0 |
+-----------------+------------------------------------------------+---------------+--------+---------+----+-----------+

You may want to use it in conjunction with watch, to automatically refresh the command every 30 seconds or so:

watch -n 30 'gitlab-patronictl list'

Under certain circumstances, you may see an extra column labeled Pending restart, which looks like this:

+-----------------+-----------------------------------------------+--------------+--------+---------+---+-----------+----------------+
|     Cluster     |                     Member                    |      Host    |  Role  |  State  | TL| Lag in MB | Pending restart|
+-----------------+-----------------------------------------------+--------------+--------+---------+---+-----------+----------------+
| pg11-ha-cluster | patroni-01-db-gstg.c.gitlab-staging-1.internal| 10.224.29.101|        | running |  8|           |                |
| pg11-ha-cluster | patroni-02-db-gstg.c.gitlab-staging-1.internal| 10.224.29.102|        | running |  8|           |        *       |
| pg11-ha-cluster | patroni-04-db-gstg.c.gitlab-staging-1.internal| 10.224.29.104| Leader | running |  8|         0 |        *       |
| pg11-ha-cluster | patroni-05-db-gstg.c.gitlab-staging-1.internal| 10.224.29.105|        | running |  8|         0 |                |
| pg11-ha-cluster | patroni-06-db-gstg.c.gitlab-staging-1.internal| 10.224.29.106|        | running |  8|         0 |                |
+-----------------+-----------------------------------------------+--------------+--------+---------+---+-----------+----------------+


In the example above, members patroni-02-db-gstg.c.gitlab-staging-1.internal and patroni-04-db-gstg.c.gitlab-staging-1.internal are waiting for a restart in order to apply some configuration changes.

If you need to know which settings are pending a restart, execute the following on the instance you need to verify:

sudo gitlab-psql -c "select name, setting,  short_desc, sourcefile, sourceline  from pg_settings where pending_restart"

And a possible result:

      name       | setting |                     short_desc                     |               sourcefile                | sourceline 
-----------------+---------+----------------------------------------------------+-----------------------------------------+------------
 max_connections | 500     | Sets the maximum number of concurrent connections. | /etc/postgresql/11/main/postgresql.conf |         64

Restarting a node

To force the restart of a specific member, you can execute the following on any node:

sudo gitlab-patronictl restart pg11-ha-cluster <member>


Bootstrapping modes

Normal bootstrapping

Normal bootstrapping is when you start a brand-new cluster with zero data. Patroni will create the PostgreSQL database cluster using initdb with the options specified in node['gitlab-patroni']['patroni']['config']['bootstrap']['initdb'].

Standby bootstrapping

Standby bootstrapping is starting a Patroni cluster that replicates from a remote master (i.e. not part of the Patroni cluster).

You need to specify the following Chef attributes to start a cluster in standby mode:

node['gitlab-patroni']['patroni']['config']['bootstrap']['dcs']['standby_cluster'] = {
  "host": "remote.host.com",
  "port": "5432",
  "primary_slot_name": "patroni_repl_slot"
}

You'd need some extra setup on the remote master side (a verification sketch follows this list):

  1. Create a replication user with a username/password matching the ones you have in node['gitlab-patroni']['patroni']['users']['replication']
    • CREATE USER "gitlab-replicator" LOGIN REPLICATION PASSWORD 'hunter1';
  2. Create a superuser with a username/password matching the ones you have in node['gitlab-patroni']['patroni']['users']['superuser']
    • CREATE USER "gitlab-superuser" LOGIN SUPERUSER REPLICATION PASSWORD 'hunter1';
  3. Create a physical replication slot with the name you specified in primary_slot_name above
    • SELECT * FROM pg_create_physical_replication_slot('patroni_repl_slot');
  4. Allow the replication user into the remote master through pg_hba entries
    • host replication gitlab-replicator 10.0.0.0/8 md5
    • host replication gitlab-replicator 127.0.0.1/32 md5
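
A hedged verification sketch to run on the remote master after the steps above. Use plain psql if the remote master is not an Omnibus host; the pg_hba.conf path is an assumption, adjust it to your installation:

# The replication slot exists and turns active once the standby cluster connects
sudo gitlab-psql -c "SELECT slot_name, slot_type, active FROM pg_replication_slots;"
# The replication and superuser roles exist with the expected attributes
sudo gitlab-psql -c "\du"
# pg_hba allows the replication user in (path is an assumption)
sudo grep replication /etc/postgresql/11/main/pg_hba.conf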

Configuring PostgreSQL

You can specify any PostgreSQL parameter under node['gitlab-patroni']['postgresql']['parameters'], except for the following parameters:

  • cluster_name
  • wal_level
  • hot_standby
  • max_connections
  • max_wal_senders
  • wal_keep_segments
  • max_prepared_transactions
  • max_locks_per_transaction
  • track_commit_timestamp
  • max_replication_slots
  • max_worker_processes
  • wal_log_hints

These parameters are specifically handled by Patroni for replication purposes. Some of them can't be changed (like wal_log_hints); those that can be changed need to be specified under node['gitlab-patroni']['patroni']['config']['bootstrap']['dcs']['postgresql']['parameters'].

While nothing prevents you from specifying them under node['gitlab-patroni']['postgresql']['parameters'], it will likely confuse Patroni into thinking that the cluster needs a restart (you may see a "Pending Restart" column when running gitlab-patronictl list).

When parameters are updated in Chef and propagated across the cluster, Patroni updates postgresql.conf then signals PostgreSQL to reload the configuration. A restart may still be needed for some parameters, which you can see hints of in the logs, so you may need to run gitlab-patronictl restart pg11-ha-cluster MEMBER_NAME.
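
For example, after a parameter change has propagated, you can check which settings are waiting for a restart and then restart members one at a time, reusing the commands shown earlier:

# On a member that reports Pending restart
sudo gitlab-psql -c "select name, setting from pg_settings where pending_restart"
# From any member, restart only that member
sudo gitlab-patronictl restart pg11-ha-cluster MEMBER_NAME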

Pausing Patroni

Quoting Patroni docs:

Under certain circumstances Patroni needs to temporary step down from managing the cluster, while still retaining the cluster state in DCS. Possible use cases are uncommon activities on the cluster, such as major version upgrades or corruption recovery. During those activities nodes are often started and stopped for the reason unknown to Patroni, some nodes can be even temporary promoted, violating the assumption of running only one master. Therefore, Patroni needs to be able to "detach" from the running cluster, implementing an equivalent of the maintenance mode in Pacemaker.

Pausing the cluster disables automatic failover. This is desirable when doing tasks such as upgrading Consul, or during a maintenance of the whole Patroni cluster.

Note that disabling automatic failover can have undesirable side effects. For example, if the primary PostgreSQL goes down for any reason while Patroni is paused, there will be no primary for the clients to write to, effectively resulting in 500 errors for the end user, so plan ahead carefully.

It is important to disable chef-client before pausing the cluster, otherwise a regular chef-client can revert the pause status.

To pause the cluster, run the following commands:

workstation        $ knife ssh roles:<env>-base-db-patroni 'sudo chef-client-disable "Database maintenance issue prod#xyz"'
patroni-01-db-gstg $ sudo gitlab-patronictl pause --wait pg11-ha-cluster

You can verify the result of pausing the cluster by running:

patroni-01-db-gstg $ sudo gitlab-patronictl list | grep 'Maintenance mode'

Run these commands to unpause/resume the cluster:

patroni-01-db-gstg $ sudo gitlab-patronictl resume --wait pg11-ha-cluster
workstation        $ knife ssh roles:<env>-base-db-patroni 'sudo chef-client-enable'

Restarting Patroni When Paused

When the cluster is paused, before restarting the Patroni process it is better to check that the Postgres postmaster didn't start while the system clock was skewed for any reason:

patroni-01-db-gstg # postmaster=/var/opt/gitlab/postgresql/data11/postmaster.pid
  postpid=$(cat $postmaster | head -1)                      # postmaster PID
  posttime=$(cat $postmaster | tail -n +3 | head -1)        # start time (epoch) recorded in the pidfile
  btime=$(cat /proc/stat | grep btime | cut -d' ' -f2)      # system boot time (epoch)
  starttime=$(cat /proc/$postpid/stat | awk '{print $22}')  # process start time, in clock ticks since boot
  clktck=$(getconf CLK_TCK)                                 # clock ticks per second
  echo $((($starttime / $clktck + $btime) - $posttime))     # drift in seconds between actual and recorded start time

If the command above returned a value higher than 3, then Patroni is going to have trouble starting, so it is advised to fix this issue with the help of a DBRE before restarting.

Upgrading Patroni

The Patroni version is controlled by Chef, so upgrading it should be as simple as changing an attribute in a role or a cookbook. Since Patroni would need to be restarted (and subsequently, PostgreSQL), careful execution of the change is needed to avoid database errors on the client side. While pausing Patroni (see the relevant section above) could be used to restart Patroni without disturbing PostgreSQL, it's not recommended to go this route, as converging Chef can undo the pause, which can introduce unintended results.

Instead, we recommend, one replica at a time, putting the replica into maintenance mode (see the relevant section below), upgrading Patroni through Chef, then taking the replica out of maintenance. For the primary, we initiate a switchover to one of the upgraded replicas, then upgrade it once it has been demoted.

The exact sequence of upgrading replicas has been encapsulated into an Ansible playbook. The playbook expects an MR in the chef-repo project to be specified in variables.yml under the target environment, and an API token to be used to merge such MR.

The playbook can be run as follows:

$ git clone git@gitlab.com:gitlab-com/gl-infra/ansible-migrations.git
$ cd ansible-migrations
# Change relevant MRs in variables.yml
$ tmux
$ OPS_API_TOKEN=secure-token MIGRATION_ENV=gprd-or-gstg ansible-playbook -i production-1172/inventory.txt -M ./modules/ -e @production-1172/variables.yml production-1172/playbook.yml

Replica Maintenance

If clients are connecting to replicas by means of service discovery (as opposed to a hard-coded list of hosts), you can temporarily remove a replica from the list of hosts used by the clients by tagging it as not suitable for failing over (nofailover: true) and load balancing (noloadbalance: true).

  1. sudo chef-client-disable "Database maintenance issue prod#xyz"

  2. Add a tags section to /var/opt/gitlab/patroni/patroni.yml on the node:

    tags:
      nofailover: true
      noloadbalance: true
    
  3. sudo systemctl reload patroni

  4. Test the efficacy of that reload by checking for the node name in the list of replicas:

    dig @127.0.0.1 -p 8600 db-replica.service.consul. SRV
    

    If the name is absent, then the reload worked.

  5. Wait until all client connections are drained from the replica (this depends on the interval value set for the clients); use this command to track the number of client connections:

    while true; do for c in /usr/local/bin/pgb-console*; do sudo $c -c 'SHOW CLIENTS;';  done  | grep gitlabhq_production | cut -d '|' -f 2 | awk '{$1=$1};1' | grep -v gitlab-monitor | wc -l; sleep 5; done
    

    Note: Usually there are three pgbouncer instances running on a single replica.

You can see an example of taking a node out of service in this issue.
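
To put the replica back into service afterwards, a sketch of the reverse steps (remove the tags section manually, or let chef-client revert it once re-enabled):

# After removing the tags section from /var/opt/gitlab/patroni/patroni.yml:
sudo systemctl reload patroni
sudo chef-client-enable
# The node should reappear in the replica DNS record
dig @127.0.0.1 -p 8600 db-replica.service.consul. SRV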

Permanently marking a replica as in maintenance

The method described above is suitable for short-term maintenance (e.g. 1 hour or less), during which it's acceptable to keep chef-client disabled. However, if the replica is expected to be in maintenance for many hours or days, it is undesirable to keep chef-client disabled during such period.

Using node attributes to set these tags is not ideal, as node attributes are not as visible as roles, and Chef nodes are removed when their GCP node is restarted, so the change is not totally persistent.

Alternatively, we can utilize a special Chef role (<env>-base-db-patroni-maintenance) that sets nofailover and noloadbalance to true, and then instruct Terraform to assign it to a particular instance.

  1. In Terraform patroni module, add the following snippet:

    per_node_chef_run_list = {
      "3" = "\"role[${var.environment}-base-db-patroni-maintenance]\""
    }
    

    Replace 3 with the zero-based index of the replica we are targeting. In this example we're targeting patroni-04.

  2. Apply the Terraform change.

  3. In the target replica, run:

    sudo google_metadata_script_runner --script-type startup --debug
    
  4. Verify the replica is in maintenance by checking for the node name in the list of replicas:

    dig @127.0.0.1 -p 8600 db-replica.service.consul. SRV
    

    If the name is absent, then the change worked. (To revert later, see the sketch after this list.)
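
To take the replica back out of maintenance, a sketch of the reverse: remove the per_node_chef_run_list entry in Terraform, apply it, then on the replica run:

# Converge back to the regular role via the startup script
sudo google_metadata_script_runner --script-type startup --debug
# The node should reappear in the replica DNS record
dig @127.0.0.1 -p 8600 db-replica.service.consul. SRV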

Legacy Method (Consul Maintenance)

⚠️ This method only works if the clients are configured with a replica.patroni.service.consul. DNS record, it won't work properly if they are configured with db-replica.service.consul. record. Check /var/opt/gitlab/gitlab-rails/etc/database.yml before you proceed.

In the past we have sometimes used consul directly to remove the replica from the replica DNS entry (bear in mind this does not prevent the node from becoming the primary).

patroni-01-db-gstg $ consul maint -enable -service=patroni-replica -reason="Production issue #xyz"

You can verify the action by running:

patroni-01-db-gstg $ dig @127.0.0.1 -p8600 +short replica.patroni.service.consul. | grep $(hostname -I) | wc -l # Prints 0

Wait until all client connections are drained from the replica (this depends on the interval value set for the clients); use this command to track the number of client connections:

patroni-01-db-gstg $ while true; do for c in /usr/local/bin/pgb-console*; do sudo $c -c 'SHOW CLIENTS;';  done  | grep gitlabhq_production | cut -d '|' -f 2 | awk '{$1=$1};1' | grep -v gitlab-monitor | wc -l; sleep 5; done

After you're done with the maintenance, disable Consul service maintenance and verify it:

patroni-01-db-gstg $ consul maint -disable -service=patroni-replica
patroni-01-db-gstg $ dig @127.0.0.1 -p8600 +short replica.patroni.service.consul. | grep $(hostname -I) | wc -l # Prints 1

Failover/Switchover

Failover and switchover are similar in their end result, but there are slight differences between them:

  • You can't do a switchover when the cluster has no leader
  • Switchover can be scheduled to happen in a later time
  • You need to specify a member to fail over to; switchover does not require one and will choose a member at random.

That said, you can initiate either of them using gitlab-patronictl switchover or gitlab-patronictl failover and entering values when prompted.
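
A minimal sketch, using the cluster name from the examples above; both commands prompt interactively for the remaining details:

# Controlled leader change; can optionally be scheduled for a later time
sudo gitlab-patronictl switchover pg11-ha-cluster
# Forced leader change; you will be prompted for a candidate member
sudo gitlab-patronictl failover pg11-ha-cluster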

Problems with replication after failover

Sometimes, after a failover, the old primary's timeline will have continued and diverged from the new primary's timeline. Patroni will automatically attempt to pg_rewind the timeline of the old primary to a point at which it can begin replicating from the new primary, becoming healthy again. We have occasionally seen this fail, for example with a statement timeout.

If for whatever reason you can't get the node to a healthy state and don't mind waiting several hours, you can reinitialise the node:

root@pg$ gitlab-patronictl reinit pg11-ha-cluster patroni-XX-db-gprd.c.gitlab-production.internal

This command can be run from any member of the patroni cluster. It wipes the data directory, takes a pg_basebackup from the new primary, and begins replicating again.
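
You can follow the progress with the same command used earlier in this runbook, for example:

watch -n 30 'gitlab-patronictl list'   # state should go creating replica -> starting -> running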

Diverged timeline WAL segments in GCS after failover

Our primary Postgres node is configured to archive WAL segments to GCS. These segments are pulled by wal-e on another node in recovery mode, and replayed. This process acts as a continuous test of our ability to restore our database state from archived WAL segments. Sometimes, during a failover, both the old master and the new one will have uploaded WAL segments, leaving the DR archive that consumes these segments from GCS unable to replay the diverged timeline. In the past we have solved this by rolling back the DR archive to an earlier state:

  1. In the GCE console: stop the machine

  2. Edit the machine: write down (or take a screenshot of) the attachment details of all extra disks. Specifically, we want the custom name (if any) and the order they are attached in.

  3. Detach the data disk and save the machine.

  4. In the GCE console, find the most recent snapshot of the data disk before the incident occurred. Copy its ID.

  5. Find the data disk in GCE. Write down its name, zone, type (standard/SSD), and labels.

  6. Delete the data disk.

  7. Create a new GCE disk with the same name, zone, and type as the old data disk. Select the "snapshot" option as source and enter the snapshot ID.

  8. When the disk has finished creating, attach it to the stopped machine using the GCE console.

  9. Save the machine and examine the order of attached disks. If they are not in the same order as before, you will have to detach and reattach disks as appropriate. This is necessary because unfortunately we still have code that makes assumptions about the udev-ordering of disks (sdb, sdc etc).

  10. Start the machine.

  11. ssh to the machine and start postgres: gitlab-ctl start postgresql.

  12. Tail the log file at /var/log/gitlab/postgresql/current. You should see it successfully ingesting WAL segments in sequential order, e.g.: LOG: restored log file "00000017000128AC00000087" from archive.

  13. You should also see a message "FATAL: the database system is starting up" every 15s. These are due to attempted scrapes by the postgres exporter. After a few minutes, these messages should stop and metrics from the machine should be observable again.

  14. In prometheus, you should see the pg_replication_lag metric for this instance begin to decrease. Recovery from GCS WAL segments is slow, and during times of high traffic (when the postgres data ingestion rate is high) recovery will slow. It might take days to recover, so be sure to silence any replication lag alerts for enough time not to rudely wake the on-call.

  15. Check there is no terraform plan diff for the archival replicas. Run the following for the gprd environment:

    tf plan -out plan -target module.postgres-dr-archive -target module.postgres-dr-delayed
    

    If there is a plan diff for mutable things like labels, apply it. If there is a plan diff for more severe things like disk name, you might have made a mistake and will have to repeat this whole procedure.

This procedure is rather manual and lengthy, but this does not happen often and has no directly user-facing impact.

Replacing a cluster with a new one

Take care when doing these steps; the results can be catastrophic.

If there's a need to replace the current cluster with a new one, say for testing purposes, or because replication from a remote cluster got messed up, you can remove the current cluster without destroying and re-creating the nodes.

chef-repo $ knife ssh roles:gstg-base-db-patroni 'sudo systemctl stop patroni'
chef-repo $ knife ssh roles:gstg-base-db-patroni 'sudo rm -rf /var/opt/gitlab/postgresql/data11' # TAKE CARE!
chef-repo $ knife ssh roles:gstg-base-db-patroni 'consul kv delete -recurse service/pg11-ha-cluster'
chef-repo $ knife ssh roles:gstg-base-db-patroni 'sudo systemctl start patroni'

You may need to adjust the Patroni Chef role before restarting the patroni service, like adding the standby config followed by running sudo chef-client across the cluster.
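
For example, a hedged sketch of converging the whole cluster after adjusting the role, using the same knife pattern as above:

chef-repo $ knife ssh roles:gstg-base-db-patroni 'sudo chef-client'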

Auditing patroni

Patroni writes a log file at /var/log/gitlab/patroni/patroni.log, where Patroni activity and messages can be found:

gerardoherzig@patroni-03-db-gstg.c.gitlab-staging-1.internal:~$ sudo tail /var/log/gitlab/patroni/patroni.log
2020-06-26_19:08:55 patroni-03-db-gstg patroni[14726]:  2020-06-26 19:08:55,397 INFO: no action.  i am a secondary and i am following a leader
2020-06-26_19:09:05 patroni-03-db-gstg patroni[14726]:  2020-06-26 19:09:05,278 INFO: Lock owner: patroni-04-db-gstg.c.gitlab-staging-1.internal; I am patroni-03-db-gstg.c.gitlab-staging-1.internal
2020-06-26_19:09:05 patroni-03-db-gstg patroni[14726]:  2020-06-26 19:09:05,278 INFO: does not have lock
2020-06-26_19:09:05 patroni-03-db-gstg patroni[14726]:  2020-06-26 19:09:05,288 INFO: no action.  i am a secondary and i am following a leader
2020-06-26_19:09:15 patroni-03-db-gstg patroni[14726]:  2020-06-26 19:09:15,279 INFO: Lock owner: patroni-04-db-gstg.c.gitlab-staging-1.internal; I am patroni-03-db-gstg.c.gitlab-staging-1.internal
2020-06-26_19:09:15 patroni-03-db-gstg patroni[14726]:  2020-06-26 19:09:15,279 INFO: does not have lock
2020-06-26_19:09:15 patroni-03-db-gstg patroni[14726]:  2020-06-26 19:09:15,290 INFO: no action.  i am a secondary and i am following a leader
2020-06-26_19:09:25 patroni-03-db-gstg patroni[14726]:  2020-06-26 19:09:25,394 INFO: Lock owner: patroni-04-db-gstg.c.gitlab-staging-1.internal; I am patroni-03-db-gstg.c.gitlab-staging-1.internal
2020-06-26_19:09:25 patroni-03-db-gstg patroni[14726]:  2020-06-26 19:09:25,394 INFO: does not have lock
2020-06-26_19:09:25 patroni-03-db-gstg patroni[14726]:  2020-06-26 19:09:25,405 INFO: no action.  i am a secondary and i am following a leader

This is mainly useful when investigating an event like an unexpected leader change (failover).
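
When hunting for an unexpected leader change, a hedged sketch of searching the log (the exact log phrasing can vary between Patroni versions):

# Look for promotions and lock-owner changes around the time of the event
sudo grep -iE 'promoted|lock owner' /var/log/gitlab/patroni/patroni.log | less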