How to Rotate Credentials for BFD Database

Follow this runbook to successfully rotate database credentials for the following:

  • BFD Server
  • BFD Pipeline
  • BFD Migrator

Note: If there are any pending deployments or database migrations, make sure they finish before running these steps.

Background

This runbook details the steps to change BFD database access credentials with zero downtime. For the applications that must support zero downtime (BFD Server, ETL Pipeline), BFD supports both a current (active) user+password and a future (stand-by) user+password. The Migrator is a one-and-done instance, initiated as a step in the BFD CI/CD deployment; as such, zero-downtime constraints do not apply to it.

Fundamentally, this runbook encompasses database-specific tasks and application-specific tasks that, when completed, allow for a zero-downtime deployment using new database credentials.

  1. BFD database access is driven by Postgres security roles tied to username/password; for example, the BFD Server only ever requires READ access, while the ETL Pipeline requires READ-WRITE access. The BFD usernames and their associated roles are:
 Role name            | Attributes   | Member of                                                    | Description
----------------------+--------------+--------------------------------------------------------------+--------------------------
 svc_bfd_pipeline_0   |              | {api_pipeline_svcs}                                          | RW access, zero downtime
 svc_bfd_pipeline_1   |              | {api_pipeline_svcs}                                          | RW access, zero downtime
 svc_bfd_server_0     |              | {api_reader_svcs}                                            | RO access, zero downtime
 svc_bfd_server_1     |              | {api_reader_svcs}                                            | RO access, zero downtime
 svc_fhirdb_migrator  | Create role  | {api_migrator_svcs}                                          | No zero downtime
 rds_superuser        | Cannot login | {pg_monitor,pg_signal_backend,rds_password,rds_replication} |

As the table above indicates, the ETL Pipeline and the BFD Server each have two distinct usernames. This lends itself to a simple toggling mechanism: one username is the current (active) user, while the other is available for initiating credential rotation. The BFD Migrator requires no toggling of user credentials since it operates within the CI/CD deployment process. While this HowTo generally treats the BFD Server and the ETL Pipeline as both needing credential rotation, each may be updated individually or in tandem.
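The role listing above matches the format of psql's ```\du``` output. To confirm the role layout in a given environment, a connection along the following lines can be used; the host, database name, and admin user shown here are placeholders, not the actual values:

```
# List roles and their memberships; host, dbname, and user are placeholders for the
# environment-specific values (the connecting user must be allowed to log in).
psql "host=<bfd-db-endpoint> dbname=<dbname> user=<admin-user> sslmode=require" -c '\du'
```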

  2. For the remainder of this HowTo, assume the following:
  • The current (active) BFD Server username is: svc_bfd_server_0
  • The current (active) ETL Pipeline username is: svc_bfd_pipeline_0
  • We will be changing user credentials in all three BFD database environments (PROD, PROD-SBX, and TEST)
  • The BFD AWS SSM Parameter Store holds the following in each environment:
    /bfd/${env}/server/sensitive/data_server_db_password=sUperSecret$$^password
    /bfd/${env}/server/sensitive/data_server_db_username=svc_bfd_server_0
    /bfd/${env}/pipeline/shared/sensitive/data_pipeline_db_password=AN0therSUperSecret$$^password
    /bfd/${env}/pipeline/shared/sensitive/data_pipeline_db_username=svc_bfd_pipeline_0
  3. Log into the BFD database as the super user (rds_superuser) or as a user whose role allows altering user attributes; this will need to be done in each BFD database environment (PROD, PROD-SBX, TEST). A sketch of the database and SSM commands involved follows below.
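As an illustration of the toggling described above, the following sketch rotates the BFD Server credentials over to the stand-by user in a single environment. The endpoint, database name, and admin user are placeholders, the environment name and password-generation command are only examples, and the mechanics by which the application picks up the new SSM values (e.g. a redeploy) are outside this sketch:

```
# Placeholder / example values -- substitute the real endpoint, environment, and admin user.
DB_HOST="<bfd-db-endpoint>"
ENV="test"
NEW_PASSWORD="$(openssl rand -base64 32)"   # base64 output avoids quoting problems in the SQL below

# 1. Set a new password on the stand-by reader user (svc_bfd_server_1).
psql "host=${DB_HOST} dbname=<dbname> user=<admin-user> sslmode=require" \
  -c "ALTER USER svc_bfd_server_1 WITH PASSWORD '${NEW_PASSWORD}';"

# 2. Point the SSM parameters at the stand-by user and its new password; the parameters
#    already exist, so --overwrite keeps their existing type.
aws ssm put-parameter --overwrite \
  --name "/bfd/${ENV}/server/sensitive/data_server_db_username" --value "svc_bfd_server_1"
aws ssm put-parameter --overwrite \
  --name "/bfd/${ENV}/server/sensitive/data_server_db_password" --value "${NEW_PASSWORD}"
```

The same pattern applies to the ETL Pipeline, using svc_bfd_pipeline_1 and the /bfd/${env}/pipeline/shared/sensitive/ parameters.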

- In AWS S3, the RIF folder (i.e. ```<yyyy>-<MM>-<dd>T<HH>:<mm>:<ss>Z```) containing the data for reloading will still be in 'Incoming', with the S3 file structure as:
    ```
    <S3 Bucket Name>-<aws-account-id>
    │
    └───Incoming/
    │   │
    │   └───2022-09-23T13:44:55Z/
    │   │    │   *_manifest.xml
    │   │    │   *.rif
    │   │    │   ...
    │   │ 
    │   └───...
    │   
    └───Done/
    │    │   
    │    └───...
    ```
    The AWS S3 bucket name in the file structure above can be found within the ETL EC2 instance by running ```grep S3_BUCKET_NAME /bluebutton-data-pipeline/bfd-pipeline-service.sh | cut -f2 -d=```.
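To double-check that the folder is still present under 'Incoming', a listing along these lines can be run from a host with the appropriate AWS credentials; the bucket name below is a placeholder for the value reported by the grep command above:

```
# Placeholder bucket name -- use the value returned by the grep command above.
aws s3 ls "s3://<S3 Bucket Name>-<aws-account-id>/Incoming/"
```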
  1. Check whether the pipeline is running with ```sudo systemctl status bfd-pipeline```, and if so, stop it with ```sudo systemctl stop bfd-pipeline```. (Steps 1-6 are also sketched as a single script after this list.)

  2. In the EC2 instance, enable idempotent mode for the pipeline:

    • Open the file /bluebutton-data-pipeline/bfd-pipeline-service.sh.
    • Change the line export IDEMPOTENCY_REQUIRED='false' to export IDEMPOTENCY_REQUIRED='true'.
    • Save and close the file.
  3. Restart the pipeline with sudo systemctl start bfd-pipeline.

  4. Confirm restarting the pipeline and loading data in idempotent mode is successful:

    • The output of running ```sudo systemctl status bfd-pipeline``` should say "active (running) since …".

    • As data is loading check the logs by running tail /bluebutton-data-pipeline/bluebutton-data-pipeline.log -f.

    • When the data has loaded properly, in AWS S3, the RIF folder containing the data for reloading will have automatically moved from 'Incoming' to 'Done', with the S3 file structure as:

      <S3 Bucket Name>-<aws-account-id>
      │
      └───Incoming/
      │   │
      │   └───...
      │   
      └───Done/
      │   │   
      │   └───2022-09-23T13:44:55Z/
      │   │   │   *_manifest.xml
      │   │   │   *.rif
      │   │   │  ...
      │   │ 
      │   └───...
  5. With the data successfully loaded, in the EC2 instance, make sure to disable idempotent mode for the pipeline again:

    • Open the file /bluebutton-data-pipeline/bfd-pipeline-service.sh.
    • Change the line export IDEMPOTENCY_REQUIRED='true' to export IDEMPOTENCY_REQUIRED='false'.
    • Save and close the file.
  6. Restart the pipeline with ```sudo systemctl restart bfd-pipeline```.
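Taken together, steps 1-6 above might look like the following when run on the ETL EC2 instance. This is a sketch rather than an exact transcript: the sed edits assume the export lines appear exactly as documented above, and the log tail is interrupted manually once the RIF folder has moved to 'Done':

```
# Stop the pipeline if it is currently running (step 1).
sudo systemctl status bfd-pipeline && sudo systemctl stop bfd-pipeline

# Enable idempotent mode (step 2); assumes the export line matches the documented form exactly.
sudo sed -i "s/export IDEMPOTENCY_REQUIRED='false'/export IDEMPOTENCY_REQUIRED='true'/" \
  /bluebutton-data-pipeline/bfd-pipeline-service.sh

# Restart the pipeline and confirm it is active (steps 3-4).
sudo systemctl start bfd-pipeline
sudo systemctl status bfd-pipeline

# Watch the load; interrupt with Ctrl-C once the folder has moved from Incoming/ to Done/ (step 4).
tail -f /bluebutton-data-pipeline/bluebutton-data-pipeline.log

# Disable idempotent mode again and restart the pipeline (steps 5-6).
sudo sed -i "s/export IDEMPOTENCY_REQUIRED='true'/export IDEMPOTENCY_REQUIRED='false'/" \
  /bluebutton-data-pipeline/bfd-pipeline-service.sh
sudo systemctl restart bfd-pipeline
```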
