retry_join not working #26297

Open
person50002 opened this issue Apr 8, 2024 · 4 comments
Labels: bug (Used to indicate a potential bug), core/ha (specific to high-availability)

Describe the bug
Configuring retry_join does not cause Vault nodes to join the cluster automatically.

To Reproduce

$ tree
.
├── dc1-vault-01
│   ├── config
│   │   ├── config.hcl
│   │   ├── vault-cert.pem
│   │   └── vault-key.pem
│   ├── file
│   └── logs
├── dc1-vault-02
│   ├── config
│   │   ├── config.hcl
│   │   ├── vault-cert.pem
│   │   └── vault-key.pem
│   ├── file
│   └── logs
└── dc1-vault-03
    ├── config
    │   ├── config.hcl
    │   ├── vault-cert.pem
    │   └── vault-key.pem
    ├── file
    └── logs

12 directories, 9 files

$ sudo docker run --name=dc1-vault-01 --volume ./dc1-vault-01:/vault --net vault-net hashicorp/vault server
$ sudo docker run --name=dc1-vault-02 --volume ./dc1-vault-02:/vault --net vault-net hashicorp/vault server
$ sudo docker run --name=dc1-vault-03 --volume ./dc1-vault-03:/vault --net vault-net hashicorp/vault server
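The three `docker run` invocations differ only in the node name, so they can be generated from a loop. A dry-run sketch (it prints the commands instead of executing them, so the invocations can be reviewed first):

```shell
# Build the docker run command for one node; printed, not executed.
node_cmd() {
  printf 'sudo docker run --name=%s --volume ./%s:/vault --net vault-net hashicorp/vault server\n' "$1" "$1"
}

for node in dc1-vault-01 dc1-vault-02 dc1-vault-03; do
  node_cmd "$node"
done
```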

$ sudo docker exec -it dc1-vault-01 /bin/sh
/ # export VAULT_SKIP_VERIFY=true
/ # vault operator init -key-shares=1  -key-threshold=1
WARNING! VAULT_ADDR and -address unset. Defaulting to https://127.0.0.1:8200.
Unseal Key 1: B9n1nwzRZ7X7wXFex24K2jVQlwGqZ1zfMEVeMeA4+8Q=

Initial Root Token: hvs.MxgWNK6aBZFSW2zvnzhmtQBf

Vault initialized with 1 key shares and a key threshold of 1. Please securely
distribute the key shares printed above. When the Vault is re-sealed,
restarted, or stopped, you must supply at least 1 of these keys to unseal it
before it can start servicing requests.

Vault does not store the generated root key. Without at least 1 keys to
reconstruct the root key, Vault will remain permanently sealed!

It is possible to generate new unseal keys, provided you have a quorum of
existing unseal keys shares. See "vault operator rekey" for more information.
/ # vault operator unseal B9n1nwzRZ7X7wXFex24K2jVQlwGqZ1zfMEVeMeA4+8Q=
WARNING! VAULT_ADDR and -address unset. Defaulting to https://127.0.0.1:8200.
Key                     Value
---                     -----
Seal Type               shamir
Initialized             true
Sealed                  false
Total Shares            1
Threshold               1
Version                 1.16.0
Build Date              2024-03-25T12:01:32Z
Storage Type            raft
Cluster Name            vault-cluster-bfe079d2
Cluster ID              b2ea86c1-54d7-1bf8-11c0-62401a38a8cb
HA Enabled              true
HA Cluster              n/a
HA Mode                 standby
Active Node Address     <none>
Raft Committed Index    29
Raft Applied Index      29
/ # vault login hvs.MxgWNK6aBZFSW2zvnzhmtQBf
WARNING! VAULT_ADDR and -address unset. Defaulting to https://127.0.0.1:8200.
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.

Key                  Value
---                  -----
token                hvs.MxgWNK6aBZFSW2zvnzhmtQBf
token_accessor       LgD1EOjUusNeFsT5EkwKUfFj
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]

dc1-vault-01 remains the only node in the cluster.

/ # vault operator raft list-peers
WARNING! VAULT_ADDR and -address unset. Defaulting to https://127.0.0.1:8200.
Node            Address              State     Voter
----            -------              -----     -----
dc1-vault-01    dc1-vault-01:8201    leader    true
/ #

After joining dc1-vault-02 manually, we have 2 nodes, but dc1-vault-03 is still not in the cluster.

$ sudo docker exec -it dc1-vault-02 /bin/sh
/ # export VAULT_SKIP_VERIFY=true
/ # vault operator raft join -leader-ca-cert=@/vault/config/vault-cert.pem  "https://dc1-vault-01:8200"
WARNING! VAULT_ADDR and -address unset. Defaulting to https://127.0.0.1:8200.
Key       Value
---       -----
Joined    true
/ # vault operator unseal B9n1nwzRZ7X7wXFex24K2jVQlwGqZ1zfMEVeMeA4+8Q=
WARNING! VAULT_ADDR and -address unset. Defaulting to https://127.0.0.1:8200.
Key                Value
---                -----
Seal Type          shamir
Initialized        true
Sealed             true
Total Shares       1
Threshold          1
Unseal Progress    0/1
Unseal Nonce       n/a
Version            1.16.0
Build Date         2024-03-25T12:01:32Z
Storage Type       raft
HA Enabled         true
/ # vault login hvs.MxgWNK6aBZFSW2zvnzhmtQBf
WARNING! VAULT_ADDR and -address unset. Defaulting to https://127.0.0.1:8200.
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.

Key                  Value
---                  -----
token                hvs.MxgWNK6aBZFSW2zvnzhmtQBf
token_accessor       LgD1EOjUusNeFsT5EkwKUfFj
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]
/ # vault operator raft list-peers
WARNING! VAULT_ADDR and -address unset. Defaulting to https://127.0.0.1:8200.
Node            Address              State       Voter
----            -------              -----       -----
dc1-vault-01    dc1-vault-01:8201    leader      true
dc1-vault-02    dc1-vault-02:8201    follower    true
/ #
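The same manual join-and-unseal sequence still has to be repeated for dc1-vault-03. As a stopgap while retry_join is broken, the per-node steps can be scripted from the Docker host; this is only a sketch, and it assumes `UNSEAL_KEY` holds the key printed by `vault operator init` (nothing runs until `join_rest` is called):

```shell
# Workaround sketch: join each remaining node to dc1-vault-01, then unseal it.
# UNSEAL_KEY must be set to the key from `vault operator init`.
join_rest() {
  for node in dc1-vault-02 dc1-vault-03; do
    sudo docker exec -e VAULT_SKIP_VERIFY=true "$node" \
      vault operator raft join -leader-ca-cert=@/vault/config/vault-cert.pem \
      "https://dc1-vault-01:8200"
    sudo docker exec -e VAULT_SKIP_VERIFY=true "$node" \
      vault operator unseal "$UNSEAL_KEY"
  done
}
```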

Expected behavior
I would expect all three nodes to join the cluster automatically; instead, I have to join each node manually.

Environment:

/ # vault status
WARNING! VAULT_ADDR and -address unset. Defaulting to https://127.0.0.1:8200.
Key                     Value
---                     -----
Seal Type               shamir
Initialized             true
Sealed                  false
Total Shares            1
Threshold               1
Version                 1.16.0
Build Date              2024-03-25T12:01:32Z
Storage Type            raft
Cluster Name            vault-cluster-bfe079d2
Cluster ID              b2ea86c1-54d7-1bf8-11c0-62401a38a8cb
HA Enabled              true
HA Cluster              https://dc1-vault-01:8201
HA Mode                 standby
Active Node Address     https://dc1-vault-01:8200
Raft Committed Index    91
Raft Applied Index      91
/ # vault version
Vault v1.16.0 (c20eae3e84c55bf5180ac890b83ee81c9d7ded8b), built 2024-03-25T12:01:32Z

Vault server configuration file(s):

dc1-vault-01/config/config.hcl

ui            = true
cluster_addr  = "https://dc1-vault-01:8201"
api_addr      = "https://dc1-vault-01:8200"
disable_mlock = true
log_level = "trace"

storage "raft" {
  path = "/vault/file"
  node_id = "dc1-vault-01"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/vault/config/vault-cert.pem"
  tls_key_file  = "/vault/config/vault-key.pem"
  tls_client_ca_file = "/vault/config/vault-cert.pem"
}

retry_join {
  leader_api_addr         = "https://dc1-vault-02:8200"
  leader_ca_cert_file     = "/vault/config/vault-cert.pem"
  leader_client_cert_file = "/vault/config/vault-cert.pem"
  leader_client_key_file  = "/vault/config/vault-key.pem"
}

retry_join {
  leader_api_addr         = "https://dc1-vault-03:8200"
  leader_ca_cert_file     = "/vault/config/vault-cert.pem"
  leader_client_cert_file = "/vault/config/vault-cert.pem"
  leader_client_key_file  = "/vault/config/vault-key.pem"
}

dc1-vault-02/config/config.hcl

ui            = true
cluster_addr  = "https://dc1-vault-02:8201"
api_addr      = "https://dc1-vault-02:8200"
disable_mlock = true
log_level = "trace"

storage "raft" {
  path = "/vault/file"
  node_id = "dc1-vault-02"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/vault/config/vault-cert.pem"
  tls_key_file  = "/vault/config/vault-key.pem"
  tls_client_ca_file = "/vault/config/vault-cert.pem"
}

retry_join {
  leader_api_addr         = "https://dc1-vault-01:8200"
  leader_ca_cert_file     = "/vault/config/vault-cert.pem"
  leader_client_cert_file = "/vault/config/vault-cert.pem"
  leader_client_key_file  = "/vault/config/vault-key.pem"
}

retry_join {
  leader_api_addr         = "https://dc1-vault-03:8200"
  leader_ca_cert_file     = "/vault/config/vault-cert.pem"
  leader_client_cert_file = "/vault/config/vault-cert.pem"
  leader_client_key_file  = "/vault/config/vault-key.pem"
}

dc1-vault-03/config/config.hcl

ui            = true
cluster_addr  = "https://dc1-vault-03:8201"
api_addr      = "https://dc1-vault-03:8200"
disable_mlock = true
log_level = "trace"

storage "raft" {
  path = "/vault/file"
  node_id = "dc1-vault-03"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/vault/config/vault-cert.pem"
  tls_key_file  = "/vault/config/vault-key.pem"
  tls_client_ca_file = "/vault/config/vault-cert.pem"
}

retry_join {
  leader_api_addr         = "https://dc1-vault-01:8200"
  leader_ca_cert_file     = "/vault/config/vault-cert.pem"
  leader_client_cert_file = "/vault/config/vault-cert.pem"
  leader_client_key_file  = "/vault/config/vault-key.pem"
}

retry_join {
  leader_api_addr         = "https://dc1-vault-02:8200"
  leader_ca_cert_file     = "/vault/config/vault-cert.pem"
  leader_client_cert_file = "/vault/config/vault-cert.pem"
  leader_client_key_file  = "/vault/config/vault-key.pem"
}
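All three configs reuse the same `vault-cert.pem` for the server listener, the client CA, and the retry_join client certificate, and the logs below show `tls: bad certificate` handshake errors. One thing worth checking is whether that certificate lists every node hostname (dc1-vault-01 through dc1-vault-03) as a Subject Alternative Name; retry_join's TLS verification fails otherwise. A diagnostic sketch (assumes OpenSSL 1.1.1+ for the `-ext` option):

```shell
# Print the SAN extension of a certificate; every hostname used in
# retry_join / cluster_addr must appear there for TLS verification to pass.
show_sans() {
  openssl x509 -in "$1" -noout -ext subjectAltName
}
# e.g. show_sans /vault/config/vault-cert.pem
```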

Additional context
dc1-vault-01 logs:

Couldn't start vault with IPC_LOCK. Disabling IPC_LOCK, please use --cap-add IPC_LOCK
==> Vault server configuration:

Administrative Namespace:
             Api Address: https://dc1-vault-01:8200
                     Cgo: disabled
         Cluster Address: https://dc1-vault-01:8201
   Environment Variables: HOME, HOSTNAME, NAME, PATH, PWD, SHLVL, VERSION
              Go Version: go1.21.8
              Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", disable_request_limiter: "false", max_request_duration: "1m30s", max_request_size: "33554432", tls: "enabled")
               Log Level: trace
                   Mlock: supported: true, enabled: false
           Recovery Mode: false
                 Storage: raft (HA available)
                 Version: Vault v1.16.0, built 2024-03-25T12:01:32Z
             Version Sha: c20eae3e84c55bf5180ac890b83ee81c9d7ded8b

==> Vault server started! Log data will stream in below:

2024-04-08T11:14:42.207Z [INFO]  proxy environment: http_proxy="" https_proxy="" no_proxy=""
2024-04-08T11:14:42.211Z [DEBUG] storage.raft.fsm: time to open database: elapsed=4.077434ms path=/vault/file/vault.db
2024-04-08T11:14:42.222Z [INFO]  incrementing seal generation: generation=1
2024-04-08T11:14:42.224Z [DEBUG] core: set config: sanitized config="{\"administrative_namespace_path\":\"\",\"api_addr\":\"https://dc1-vault-01:8200\",\"cache_size\":0,\"cluster_addr\":\"https://dc1-vault-01:8201\",\"cluster_cipher_suites\":\"\",\"cluster_name\":\"\",\"default_lease_ttl\":0,\"default_max_request_duration\":0,\"detect_deadlocks\":\"\",\"disable_cache\":false,\"disable_clustering\":false,\"disable_indexing\":false,\"disable_mlock\":true,\"disable_performance_standby\":false,\"disable_printable_check\":false,\"disable_sealwrap\":false,\"disable_sentinel_trace\":false,\"enable_response_header_hostname\":false,\"enable_response_header_raft_node_id\":false,\"enable_ui\":true,\"experiments\":null,\"imprecise_lease_role_tracking\":false,\"introspection_endpoint\":false,\"listeners\":[{\"config\":{\"address\":\"0.0.0.0:8200\",\"tls_cert_file\":\"/vault/config/vault-cert.pem\",\"tls_client_ca_file\":\"/vault/config/vault-cert.pem\",\"tls_key_file\":\"/vault/config/vault-key.pem\"},\"type\":\"tcp\"}],\"log_format\":\"\",\"log_level\":\"trace\",\"log_requests_level\":\"\",\"max_lease_ttl\":0,\"pid_file\":\"\",\"plugin_directory\":\"\",\"plugin_file_permissions\":0,\"plugin_file_uid\":0,\"plugin_tmpdir\":\"\",\"raw_storage_endpoint\":false,\"seals\":[{\"disabled\":false,\"name\":\"shamir\",\"priority\":1,\"type\":\"shamir\"}],\"storage\":{\"cluster_addr\":\"https://dc1-vault-01:8201\",\"disable_clustering\":false,\"raft\":{\"max_entry_size\":\"\"},\"redirect_addr\":\"https://dc1-vault-01:8200\",\"type\":\"raft\"}}"
2024-04-08T11:14:42.224Z [DEBUG] storage.cache: creating LRU cache: size=0
2024-04-08T11:14:42.226Z [INFO]  core: Initializing version history cache for core
2024-04-08T11:14:42.226Z [INFO]  events: Starting event system
2024-04-08T11:14:42.227Z [DEBUG] cluster listener addresses synthesized: cluster_addresses=[0.0.0.0:8201]
2024-04-08T11:14:42.228Z [DEBUG] would have sent systemd notification (systemd not present): notification=READY=1
2024-04-08T11:20:00.258Z [INFO]  core: security barrier not initialized
2024-04-08T11:20:00.258Z [INFO]  core: seal configuration missing, not initialized
2024-04-08T11:20:00.264Z [INFO]  core: security barrier not initialized
2024-04-08T11:20:00.264Z [DEBUG] core: bootstrapping raft backend
2024-04-08T11:20:00.265Z [TRACE] storage.raft: setting up raft cluster
2024-04-08T11:20:00.265Z [TRACE] storage.raft: applying raft config: inputs="map[node_id:dc1-vault-01 path:/vault/file]"
2024-04-08T11:20:00.273Z [INFO]  storage.raft: creating Raft: config="&raft.Config{ProtocolVersion:3, HeartbeatTimeout:5000000000, ElectionTimeout:5000000000, CommitTimeout:50000000, MaxAppendEntries:64, BatchApplyCh:true, ShutdownOnRemove:true, TrailingLogs:0x2800, SnapshotInterval:120000000000, SnapshotThreshold:0x2000, LeaderLeaseTimeout:2500000000, LocalID:\"dc1-vault-01\", NotifyCh:(chan<- bool)(0xc0032c4000), LogOutput:io.Writer(nil), LogLevel:\"DEBUG\", Logger:(*hclog.interceptLogger)(0xc003152090), NoSnapshotRestoreOnStart:true, skipStartup:false}"
2024-04-08T11:20:00.276Z [INFO]  storage.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:dc1-vault-01 Address:dc1-vault-01:8201}]"
2024-04-08T11:20:00.277Z [INFO]  storage.raft: entering follower state: follower="Node at dc1-vault-01 [Follower]" leader-address= leader-id=
2024-04-08T11:20:07.169Z [WARN]  storage.raft: heartbeat timeout reached, starting election: last-leader-addr= last-leader-id=
2024-04-08T11:20:07.170Z [INFO]  storage.raft: entering candidate state: node="Node at dc1-vault-01 [Candidate]" term=2
2024-04-08T11:20:07.172Z [DEBUG] storage.raft: voting for self: term=2 id=dc1-vault-01
2024-04-08T11:20:07.176Z [DEBUG] storage.raft: calculated votes needed: needed=1 term=2
2024-04-08T11:20:07.176Z [DEBUG] storage.raft: vote granted: from=dc1-vault-01 term=2 tally=1
2024-04-08T11:20:07.176Z [INFO]  storage.raft: election won: term=2 tally=1
2024-04-08T11:20:07.176Z [INFO]  storage.raft: entering leader state: leader="Node at dc1-vault-01 [Leader]"
2024-04-08T11:20:07.184Z [TRACE] storage.raft: finished setting up raft cluster
2024-04-08T11:20:07.185Z [DEBUG] core: finished bootstrapping raft backend
2024-04-08T11:20:07.195Z [INFO]  core: security barrier initialized: stored=1 shares=1 threshold=1
2024-04-08T11:20:07.199Z [TRACE] encrypted value using seal: seal=shamir keyId=""
2024-04-08T11:20:07.199Z [TRACE] successfully encrypted value: encryption seal wrappers=1 total enabled seal wrappers=1
2024-04-08T11:20:07.205Z [DEBUG] core: cluster name not found/set, generating new
2024-04-08T11:20:07.205Z [DEBUG] core: cluster name set: name=vault-cluster-bfe079d2
2024-04-08T11:20:07.205Z [DEBUG] core: cluster ID not found, generating new
2024-04-08T11:20:07.205Z [DEBUG] core: cluster ID set: id=b2ea86c1-54d7-1bf8-11c0-62401a38a8cb
2024-04-08T11:20:07.205Z [DEBUG] core: generating cluster private key
2024-04-08T11:20:07.211Z [DEBUG] core: generating local cluster certificate: host=fw-c89ba3fc-3b6a-18fc-02dc-7981c61d6778
2024-04-08T11:20:07.216Z [INFO]  core: post-unseal setup starting
2024-04-08T11:20:07.216Z [DEBUG] core: clearing forwarding clients
2024-04-08T11:20:07.216Z [DEBUG] core: done clearing forwarding clients
2024-04-08T11:20:07.216Z [DEBUG] core: persisting feature flags
2024-04-08T11:20:07.224Z [INFO]  core: loaded wrapping token key
2024-04-08T11:20:07.224Z [INFO]  core: successfully setup plugin runtime catalog
2024-04-08T11:20:07.224Z [INFO]  core: successfully setup plugin catalog: plugin-directory=""
2024-04-08T11:20:07.232Z [INFO]  core: no mounts; adding default mount table
2024-04-08T11:20:07.242Z [TRACE] core: adding write forwarded paths: paths=[]
2024-04-08T11:20:07.243Z [INFO]  core: successfully mounted: type=cubbyhole version="v1.16.0+builtin.vault" path=cubbyhole/ namespace="ID: root. Path: "
2024-04-08T11:20:07.245Z [TRACE] core: adding write forwarded paths: paths=[]
2024-04-08T11:20:07.246Z [INFO]  core: successfully mounted: type=system version="v1.16.0+builtin.vault" path=sys/ namespace="ID: root. Path: "
2024-04-08T11:20:07.249Z [TRACE] core: adding write forwarded paths: paths=[]
2024-04-08T11:20:07.249Z [INFO]  core: successfully mounted: type=identity version="v1.16.0+builtin.vault" path=identity/ namespace="ID: root. Path: "
2024-04-08T11:20:07.274Z [TRACE] token: no token generation counter found in storage
2024-04-08T11:20:07.274Z [TRACE] core: adding write forwarded paths: paths=[]
2024-04-08T11:20:07.274Z [INFO]  core: successfully mounted: type=token version="v1.16.0+builtin.vault" path=token/ namespace="ID: root. Path: "
2024-04-08T11:20:07.278Z [INFO]  rollback: Starting the rollback manager with 256 workers
2024-04-08T11:20:07.279Z [INFO]  rollback: starting rollback manager
2024-04-08T11:20:07.280Z [TRACE] expiration.job-manager: created dispatcher: name=expire-dispatcher num_workers=200
2024-04-08T11:20:07.280Z [TRACE] expiration.job-manager: initialized dispatcher: num_workers=200
2024-04-08T11:20:07.280Z [TRACE] expiration.job-manager: created job manager: name=expire pool_size=200
2024-04-08T11:20:07.280Z [TRACE] expiration.job-manager: starting job manager: name=expire
2024-04-08T11:20:07.280Z [TRACE] expiration.job-manager: starting dispatcher
2024-04-08T11:20:07.280Z [INFO]  core: restoring leases
2024-04-08T11:20:07.280Z [DEBUG] expiration: collecting leases
2024-04-08T11:20:07.283Z [DEBUG] expiration: leases collected: num_existing=0
2024-04-08T11:20:07.283Z [INFO]  expiration: lease restore complete
2024-04-08T11:20:07.287Z [DEBUG] identity: loading entities
2024-04-08T11:20:07.287Z [DEBUG] identity: entities collected: num_existing=0
2024-04-08T11:20:07.287Z [INFO]  identity: entities restored
2024-04-08T11:20:07.288Z [DEBUG] identity: identity loading groups
2024-04-08T11:20:07.288Z [DEBUG] identity: groups collected: num_existing=0
2024-04-08T11:20:07.288Z [INFO]  identity: groups restored
2024-04-08T11:20:07.288Z [DEBUG] identity: identity loading OIDC clients
2024-04-08T11:20:07.288Z [TRACE] mfa: loading login MFA configurations
2024-04-08T11:20:07.288Z [TRACE] mfa: methods collected: num_existing=0
2024-04-08T11:20:07.288Z [TRACE] mfa: configurations restored: namespace="" prefix=login-mfa/method/
2024-04-08T11:20:07.288Z [TRACE] mfa: loading login MFA enforcement configurations
2024-04-08T11:20:07.288Z [TRACE] mfa: enforcements configs collected: num_existing=0
2024-04-08T11:20:07.288Z [TRACE] mfa: enforcement configurations restored: namespace="" prefix=login-mfa/enforcement/
2024-04-08T11:20:07.289Z [TRACE] activity: scanned existing logs: out=[]
2024-04-08T11:20:07.289Z [TRACE] activity: scanned existing logs: out=[]
2024-04-08T11:20:07.292Z [TRACE] activity: no intent log found
2024-04-08T11:20:07.292Z [INFO]  core: usage gauge collection is disabled
2024-04-08T11:20:07.294Z [INFO]  core: Recorded vault version: vault version=1.16.0 upgrade time="2024-04-08 11:20:07.289209735 +0000 UTC" build date=2024-03-25T12:01:32Z
2024-04-08T11:20:07.297Z [DEBUG] secrets.identity.identity_7f3f2128: wrote OIDC default provider
2024-04-08T11:20:07.300Z [DEBUG] secrets.identity.identity_7f3f2128: wrote OIDC default key
2024-04-08T11:20:07.303Z [DEBUG] secrets.identity.identity_7f3f2128: wrote OIDC allow_all assignment
2024-04-08T11:20:07.303Z [INFO]  core: post-unseal setup complete
2024-04-08T11:20:07.312Z [DEBUG] token: no wal state found when generating token
2024-04-08T11:20:07.312Z [INFO]  core: root token generated
2024-04-08T11:20:07.318Z [INFO]  core: pre-seal teardown starting
2024-04-08T11:20:07.318Z [INFO]  core: stopping raft active node
2024-04-08T11:20:07.318Z [DEBUG] expiration: stop triggered
2024-04-08T11:20:07.318Z [TRACE] expiration.job-manager: terminating job manager...
2024-04-08T11:20:07.318Z [TRACE] expiration.job-manager: terminating dispatcher
2024-04-08T11:20:07.318Z [DEBUG] expiration: finished stopping
2024-04-08T11:20:07.319Z [INFO]  rollback: stopping rollback manager
2024-04-08T11:20:07.319Z [INFO]  core: pre-seal teardown complete
2024-04-08T11:20:20.250Z [DEBUG] core: unseal key supplied: migrate=false
2024-04-08T11:20:20.250Z [TRACE] decrypted value using seal: seal_name=shamir
2024-04-08T11:20:20.251Z [DEBUG] core: starting cluster listeners
2024-04-08T11:20:20.251Z [INFO]  core.cluster-listener.tcp: starting listener: listener_address=0.0.0.0:8201
2024-04-08T11:20:20.251Z [INFO]  core.cluster-listener: serving cluster requests: cluster_listen_address=[::]:8201
2024-04-08T11:20:20.254Z [TRACE] storage.raft: setting up raft cluster
2024-04-08T11:20:20.254Z [TRACE] storage.raft: applying raft config: inputs="map[node_id:dc1-vault-01 path:/vault/file]"
2024-04-08T11:20:20.254Z [TRACE] storage.raft: using larger timeouts for raft at startup: initial_election_timeout=15s initial_heartbeat_timeout=15s normal_election_timeout=5s normal_heartbeat_timeout=5s
2024-04-08T11:20:20.254Z [INFO]  storage.raft: creating Raft: config="&raft.Config{ProtocolVersion:3, HeartbeatTimeout:15000000000, ElectionTimeout:15000000000, CommitTimeout:50000000, MaxAppendEntries:64, BatchApplyCh:true, ShutdownOnRemove:true, TrailingLogs:0x2800, SnapshotInterval:120000000000, SnapshotThreshold:0x2000, LeaderLeaseTimeout:2500000000, LocalID:\"dc1-vault-01\", NotifyCh:(chan<- bool)(0xc0032c5730), LogOutput:io.Writer(nil), LogLevel:\"DEBUG\", Logger:(*hclog.interceptLogger)(0xc003152090), NoSnapshotRestoreOnStart:true, skipStartup:false}"
2024-04-08T11:20:20.256Z [INFO]  storage.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:dc1-vault-01 Address:dc1-vault-01:8201}]"
2024-04-08T11:20:20.257Z [TRACE] storage.raft: reloaded raft config to set lower timeouts: config="raft.ReloadableConfig{TrailingLogs:0x2800, SnapshotInterval:120000000000, SnapshotThreshold:0x2000, HeartbeatTimeout:5000000000, ElectionTimeout:5000000000}"
2024-04-08T11:20:20.257Z [TRACE] storage.raft: finished setting up raft cluster
2024-04-08T11:20:20.257Z [INFO]  core: vault is unsealed
2024-04-08T11:20:20.257Z [INFO]  storage.raft: entering follower state: follower="Node at dc1-vault-01:8201 [Follower]" leader-address= leader-id=
2024-04-08T11:20:20.257Z [WARN]  storage.raft: heartbeat timeout reached, starting election: last-leader-addr= last-leader-id=
2024-04-08T11:20:20.257Z [INFO]  storage.raft: entering candidate state: node="Node at dc1-vault-01:8201 [Candidate]" term=3
2024-04-08T11:20:20.258Z [INFO]  core: entering standby mode
2024-04-08T11:20:20.263Z [DEBUG] storage.raft: voting for self: term=3 id=dc1-vault-01
2024-04-08T11:20:20.267Z [DEBUG] storage.raft: calculated votes needed: needed=1 term=3
2024-04-08T11:20:20.267Z [DEBUG] storage.raft: vote granted: from=dc1-vault-01 term=3 tally=1
2024-04-08T11:20:20.267Z [INFO]  storage.raft: election won: term=3 tally=1
2024-04-08T11:20:20.267Z [INFO]  storage.raft: entering leader state: leader="Node at dc1-vault-01:8201 [Leader]"
2024-04-08T11:20:20.277Z [INFO]  core: acquired lock, enabling active operation
2024-04-08T11:20:20.277Z [INFO]  seal configuration was not reloaded
2024-04-08T11:20:20.278Z [DEBUG] core: generating cluster private key
2024-04-08T11:20:20.279Z [DEBUG] core: generating local cluster certificate: host=fw-97bbcb47-a729-2f75-c547-f2640258f811
2024-04-08T11:20:20.285Z [INFO]  core: post-unseal setup starting
2024-04-08T11:20:20.285Z [DEBUG] core: clearing forwarding clients
2024-04-08T11:20:20.285Z [DEBUG] core: done clearing forwarding clients
2024-04-08T11:20:20.285Z [DEBUG] core: persisting feature flags
2024-04-08T11:20:20.292Z [INFO]  core: loaded wrapping token key
2024-04-08T11:20:20.292Z [INFO]  core: successfully setup plugin runtime catalog
2024-04-08T11:20:20.292Z [INFO]  core: successfully setup plugin catalog: plugin-directory=""
2024-04-08T11:20:20.301Z [TRACE] core: adding write forwarded paths: paths=[]
2024-04-08T11:20:20.301Z [INFO]  core: successfully mounted: type=system version="v1.16.0+builtin.vault" path=sys/ namespace="ID: root. Path: "
2024-04-08T11:20:20.302Z [TRACE] core: adding write forwarded paths: paths=[]
2024-04-08T11:20:20.302Z [INFO]  core: successfully mounted: type=identity version="v1.16.0+builtin.vault" path=identity/ namespace="ID: root. Path: "
2024-04-08T11:20:20.302Z [TRACE] core: adding write forwarded paths: paths=[]
2024-04-08T11:20:20.302Z [INFO]  core: successfully mounted: type=cubbyhole version="v1.16.0+builtin.vault" path=cubbyhole/ namespace="ID: root. Path: "
2024-04-08T11:20:20.324Z [TRACE] token: no token generation counter found in storage
2024-04-08T11:20:20.324Z [TRACE] core: adding write forwarded paths: paths=[]
2024-04-08T11:20:20.324Z [INFO]  core: successfully mounted: type=token version="v1.16.0+builtin.vault" path=token/ namespace="ID: root. Path: "
2024-04-08T11:20:20.327Z [INFO]  rollback: Starting the rollback manager with 256 workers
2024-04-08T11:20:20.327Z [TRACE] expiration.job-manager: created dispatcher: name=expire-dispatcher num_workers=200
2024-04-08T11:20:20.327Z [TRACE] expiration.job-manager: initialized dispatcher: num_workers=200
2024-04-08T11:20:20.327Z [TRACE] expiration.job-manager: created job manager: name=expire pool_size=200
2024-04-08T11:20:20.327Z [TRACE] expiration.job-manager: starting job manager: name=expire
2024-04-08T11:20:20.327Z [TRACE] expiration.job-manager: starting dispatcher
2024-04-08T11:20:20.328Z [INFO]  core: restoring leases
2024-04-08T11:20:20.328Z [INFO]  rollback: starting rollback manager
2024-04-08T11:20:20.328Z [DEBUG] expiration: collecting leases
2024-04-08T11:20:20.328Z [DEBUG] expiration: leases collected: num_existing=0
2024-04-08T11:20:20.329Z [INFO]  expiration: lease restore complete
2024-04-08T11:20:20.336Z [DEBUG] identity: loading entities
2024-04-08T11:20:20.336Z [DEBUG] identity: entities collected: num_existing=0
2024-04-08T11:20:20.336Z [INFO]  identity: entities restored
2024-04-08T11:20:20.336Z [DEBUG] identity: identity loading groups
2024-04-08T11:20:20.336Z [DEBUG] identity: groups collected: num_existing=0
2024-04-08T11:20:20.336Z [INFO]  identity: groups restored
2024-04-08T11:20:20.336Z [DEBUG] identity: identity loading OIDC clients
2024-04-08T11:20:20.336Z [TRACE] mfa: loading login MFA configurations
2024-04-08T11:20:20.336Z [TRACE] mfa: methods collected: num_existing=0
2024-04-08T11:20:20.336Z [TRACE] mfa: configurations restored: namespace="" prefix=login-mfa/method/
2024-04-08T11:20:20.336Z [TRACE] mfa: loading login MFA enforcement configurations
2024-04-08T11:20:20.336Z [TRACE] mfa: enforcements configs collected: num_existing=0
2024-04-08T11:20:20.336Z [TRACE] mfa: enforcement configurations restored: namespace="" prefix=login-mfa/enforcement/
2024-04-08T11:20:20.336Z [TRACE] activity: scanned existing logs: out=[]
2024-04-08T11:20:20.336Z [INFO]  core: starting raft active node
2024-04-08T11:20:20.336Z [TRACE] activity: no intent log found
2024-04-08T11:20:20.336Z [TRACE] activity: scanned existing logs: out=[]
2024-04-08T11:20:20.340Z [DEBUG] core.undo-log-watcher: starting undo log watcher
2024-04-08T11:20:20.341Z [DEBUG] core.undo-log-watcher: undo logs have not been enabled yet, possibly due to a recent upgrade. starting a periodic check
2024-04-08T11:20:20.346Z [INFO]  storage.raft: starting autopilot: config="&{false 0 10s 24h0m0s 1000 0 10s false redundancy_zone upgrade_version}" reconcile_interval=0s
2024-04-08T11:20:20.346Z [DEBUG] core: request forwarding setup function
2024-04-08T11:20:20.346Z [DEBUG] core: clearing forwarding clients
2024-04-08T11:20:20.346Z [DEBUG] core: done clearing forwarding clients
2024-04-08T11:20:20.346Z [DEBUG] core: leaving request forwarding setup function
2024-04-08T11:20:20.347Z [INFO]  core: usage gauge collection is disabled
2024-04-08T11:20:20.347Z [DEBUG] storage.raft.autopilot: autopilot is now running
2024-04-08T11:20:20.347Z [DEBUG] storage.raft.autopilot: state update routine is now running
2024-04-08T11:20:20.358Z [INFO]  core: post-unseal setup complete
2024-04-08T11:20:21.351Z [DEBUG] core.undo-log-watcher: undo logs can be safely enabled now
2024-04-08T11:20:21.351Z [DEBUG] core.undo-log-watcher: undo logs have been enabled and this has been persisted to storage. shutting down the checker.
2024-04-08T11:27:21.523Z [INFO]  http: TLS handshake error from 172.19.0.3:39966: remote error: tls: bad certificate
2024-04-08T11:28:25.635Z [TRACE] encrypted value using seal: seal=shamir keyId=""
2024-04-08T11:28:25.635Z [TRACE] successfully encrypted value: encryption seal wrappers=1 total enabled seal wrappers=1
2024-04-08T11:29:22.715Z [INFO]  http: TLS handshake error from 127.0.0.1:49832: remote error: tls: bad certificate
2024-04-08T11:30:20.341Z [TRACE] activity: writing segment on timer expiration
2024-04-08T11:30:59.788Z [TRACE] storage.raft: adding server to raft via autopilot: id=dc1-vault-02 addr=dc1-vault-02:8201
2024-04-08T11:30:59.788Z [INFO]  storage.raft: updating configuration: command=AddNonvoter server-id=dc1-vault-02 server-addr=dc1-vault-02:8201 servers="[{Suffrage:Voter ID:dc1-vault-01 Address:dc1-vault-01:8201} {Suffrage:Nonvoter ID:dc1-vault-02 Address:dc1-vault-02:8201}]"
2024-04-08T11:30:59.790Z [INFO]  storage.raft: added peer, starting replication: peer=dc1-vault-02
2024-04-08T11:30:59.791Z [DEBUG] core.cluster-listener: creating rpc dialer: address=dc1-vault-02:8201 alpn=raft_storage_v1 host=raft-493f8c7f-2492-c019-0986-64ab7b3ffa05
2024-04-08T11:30:59.793Z [INFO]  system: follower node answered the raft bootstrap challenge: follower_server_id=dc1-vault-02
2024-04-08T11:30:59.795Z [ERROR] storage.raft: failed to appendEntries to: peer="{Nonvoter dc1-vault-02 dc1-vault-02:8201}" error="dial tcp 172.19.0.3:8201: connect: connection refused"
2024-04-08T11:30:59.874Z [DEBUG] core.cluster-listener: creating rpc dialer: address=dc1-vault-02:8201 alpn=raft_storage_v1 host=raft-493f8c7f-2492-c019-0986-64ab7b3ffa05
2024-04-08T11:30:59.882Z [DEBUG] core.cluster-listener: performing client cert lookup
2024-04-08T11:30:59.888Z [WARN]  storage.raft: appendEntries rejected, sending older logs: peer="{Nonvoter dc1-vault-02 dc1-vault-02:8201}" next=2
2024-04-08T11:30:59.895Z [INFO]  storage.raft: pipelining replication: peer="{Nonvoter dc1-vault-02 dc1-vault-02:8201}"
2024-04-08T11:31:00.347Z [TRACE] storage.raft: received empty Vault version in heartbeat state. faking it with the leader version for now: id=dc1-vault-02 leader version=1.16.0
2024-04-08T11:31:00.358Z [DEBUG] core.cluster-listener: creating rpc dialer: address=dc1-vault-02:8201 alpn=raft_storage_v1 host=raft-493f8c7f-2492-c019-0986-64ab7b3ffa05
2024-04-08T11:31:00.362Z [DEBUG] core.cluster-listener: performing client cert lookup
2024-04-08T11:31:03.322Z [DEBUG] core.cluster-listener: performing server cert lookup
2024-04-08T11:31:03.327Z [DEBUG] core.request-forward: got request forwarding connection
2024-04-08T11:31:10.348Z [INFO]  storage.raft.autopilot: Promoting server: id=dc1-vault-02 address=dc1-vault-02:8201 name=dc1-vault-02
2024-04-08T11:31:10.348Z [INFO]  storage.raft: updating configuration: command=AddVoter server-id=dc1-vault-02 server-addr=dc1-vault-02:8201 servers="[{Suffrage:Voter ID:dc1-vault-01 Address:dc1-vault-01:8201} {Suffrage:Voter ID:dc1-vault-02 Address:dc1-vault-02:8201}]"

dc1-vault-02 logs:

Couldn't start vault with IPC_LOCK. Disabling IPC_LOCK, please use --cap-add IPC_LOCK
==> Vault server configuration:

Administrative Namespace:
             Api Address: https://dc1-vault-02:8200
                     Cgo: disabled
         Cluster Address: https://dc1-vault-02:8201
   Environment Variables: HOME, HOSTNAME, NAME, PATH, PWD, SHLVL, VERSION
              Go Version: go1.21.8
              Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", disable_request_limiter: "false", max_request_duration: "1m30s", max_request_size: "33554432", tls: "enabled")
               Log Level: trace
                   Mlock: supported: true, enabled: false
           Recovery Mode: false
                 Storage: raft (HA available)
                 Version: Vault v1.16.0, built 2024-03-25T12:01:32Z
             Version Sha: c20eae3e84c55bf5180ac890b83ee81c9d7ded8b

==> Vault server started! Log data will stream in below:

2024-04-08T11:17:28.950Z [INFO]  proxy environment: http_proxy="" https_proxy="" no_proxy=""
2024-04-08T11:17:28.953Z [DEBUG] storage.raft.fsm: time to open database: elapsed=2.130617ms path=/vault/file/vault.db
2024-04-08T11:17:28.962Z [INFO]  incrementing seal generation: generation=1
2024-04-08T11:17:28.964Z [DEBUG] core: set config: sanitized config="{\"administrative_namespace_path\":\"\",\"api_addr\":\"https://dc1-vault-02:8200\",\"cache_size\":0,\"cluster_addr\":\"https://dc1-vault-02:8201\",\"cluster_cipher_suites\":\"\",\"cluster_name\":\"\",\"default_lease_ttl\":0,\"default_max_request_duration\":0,\"detect_deadlocks\":\"\",\"disable_cache\":false,\"disable_clustering\":false,\"disable_indexing\":false,\"disable_mlock\":true,\"disable_performance_standby\":false,\"disable_printable_check\":false,\"disable_sealwrap\":false,\"disable_sentinel_trace\":false,\"enable_response_header_hostname\":false,\"enable_response_header_raft_node_id\":false,\"enable_ui\":true,\"experiments\":null,\"imprecise_lease_role_tracking\":false,\"introspection_endpoint\":false,\"listeners\":[{\"config\":{\"address\":\"0.0.0.0:8200\",\"tls_cert_file\":\"/vault/config/vault-cert.pem\",\"tls_client_ca_file\":\"/vault/config/vault-cert.pem\",\"tls_key_file\":\"/vault/config/vault-key.pem\"},\"type\":\"tcp\"}],\"log_format\":\"\",\"log_level\":\"trace\",\"log_requests_level\":\"\",\"max_lease_ttl\":0,\"pid_file\":\"\",\"plugin_directory\":\"\",\"plugin_file_permissions\":0,\"plugin_file_uid\":0,\"plugin_tmpdir\":\"\",\"raw_storage_endpoint\":false,\"seals\":[{\"disabled\":false,\"name\":\"shamir\",\"priority\":1,\"type\":\"shamir\"}],\"storage\":{\"cluster_addr\":\"https://dc1-vault-02:8201\",\"disable_clustering\":false,\"raft\":{\"max_entry_size\":\"\"},\"redirect_addr\":\"https://dc1-vault-02:8200\",\"type\":\"raft\"}}"
2024-04-08T11:17:28.964Z [DEBUG] storage.cache: creating LRU cache: size=0
2024-04-08T11:17:28.967Z [INFO]  core: Initializing version history cache for core
2024-04-08T11:17:28.967Z [INFO]  events: Starting event system
2024-04-08T11:17:28.968Z [DEBUG] cluster listener addresses synthesized: cluster_addresses=[0.0.0.0:8201]
2024-04-08T11:17:28.970Z [DEBUG] would have sent systemd notification (systemd not present): notification=READY=1
2024-04-08T11:28:25.630Z [INFO]  core: security barrier not initialized
2024-04-08T11:28:25.630Z [INFO]  core: security barrier not initialized
2024-04-08T11:28:25.632Z [INFO]  core: attempting to join possible raft leader node: leader_addr=https://dc1-vault-01:8200
2024-04-08T11:30:59.777Z [DEBUG] core: unseal key supplied: migrate=false
2024-04-08T11:30:59.777Z [INFO]  core: security barrier not initialized
2024-04-08T11:30:59.777Z [TRACE] decrypted value using seal: seal_name=shamir
2024-04-08T11:30:59.795Z [DEBUG] core: starting cluster listeners
2024-04-08T11:30:59.796Z [INFO]  core.cluster-listener.tcp: starting listener: listener_address=0.0.0.0:8201
2024-04-08T11:30:59.796Z [INFO]  core.cluster-listener: serving cluster requests: cluster_listen_address=[::]:8201
2024-04-08T11:30:59.798Z [TRACE] storage.raft: setting up raft cluster
2024-04-08T11:30:59.798Z [TRACE] storage.raft: applying raft config: inputs="map[node_id:dc1-vault-02 path:/vault/file]"
2024-04-08T11:30:59.798Z [TRACE] storage.raft: using larger timeouts for raft at startup: initial_election_timeout=15s initial_heartbeat_timeout=15s normal_election_timeout=5s normal_heartbeat_timeout=5s
2024-04-08T11:30:59.802Z [INFO]  storage.raft: creating Raft: config="&raft.Config{ProtocolVersion:3, HeartbeatTimeout:15000000000, ElectionTimeout:15000000000, CommitTimeout:50000000, MaxAppendEntries:64, BatchApplyCh:true, ShutdownOnRemove:true, TrailingLogs:0x2800, SnapshotInterval:120000000000, SnapshotThreshold:0x2000, LeaderLeaseTimeout:2500000000, LocalID:\"dc1-vault-02\", NotifyCh:(chan<- bool)(0xc0031ab110), LogOutput:io.Writer(nil), LogLevel:\"DEBUG\", Logger:(*hclog.interceptLogger)(0xc002e65470), NoSnapshotRestoreOnStart:true, skipStartup:false}"
2024-04-08T11:30:59.804Z [INFO]  storage.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:dc1-vault-01 Address:dc1-vault-01:8201} {Suffrage:Nonvoter ID:dc1-vault-02 Address:dc1-vault-02:8201}]"
2024-04-08T11:30:59.806Z [TRACE] storage.raft: finished setting up raft cluster
2024-04-08T11:30:59.807Z [INFO]  storage.raft: entering follower state: follower="Node at dc1-vault-02:8201 [Follower]" leader-address= leader-id=
2024-04-08T11:30:59.807Z [INFO]  core: security barrier not initialized
2024-04-08T11:30:59.876Z [DEBUG] core.cluster-listener: performing server cert lookup
2024-04-08T11:30:59.884Z [DEBUG] storage.raft.raft-net: accepted connection: local-address=dc1-vault-02:8201 remote-address=172.19.0.2:45322
2024-04-08T11:30:59.887Z [WARN]  storage.raft: failed to get previous log: previous-index=67 last-index=1 error="log not found"
2024-04-08T11:31:00.359Z [DEBUG] core.cluster-listener: performing server cert lookup
2024-04-08T11:31:00.364Z [DEBUG] storage.raft.raft-net: accepted connection: local-address=dc1-vault-02:8201 remote-address=172.19.0.2:45326
2024-04-08T11:31:00.808Z [TRACE] decrypted value using seal: seal_name=shamir
2024-04-08T11:31:00.814Z [WARN]  core: cluster listener is already started
2024-04-08T11:31:00.814Z [INFO]  core: vault is unsealed
2024-04-08T11:31:00.814Z [INFO]  core: entering standby mode
2024-04-08T11:31:03.317Z [TRACE] core: found new active node information, refreshing
2024-04-08T11:31:03.317Z [DEBUG] core: parsing information for new active node: active_cluster_addr=https://dc1-vault-01:8201 active_redirect_addr=https://dc1-vault-01:8200
2024-04-08T11:31:03.317Z [DEBUG] core: refreshing forwarding connection: clusterAddr=https://dc1-vault-01:8201
2024-04-08T11:31:03.317Z [DEBUG] core: clearing forwarding clients
2024-04-08T11:31:03.317Z [DEBUG] core: done clearing forwarding clients
2024-04-08T11:31:03.320Z [DEBUG] core: done refreshing forwarding connection: clusterAddr=https://dc1-vault-01:8201
2024-04-08T11:31:03.320Z [DEBUG] core.cluster-listener: creating rpc dialer: address=dc1-vault-01:8201 alpn=req_fw_sb-act_v1 host=fw-97bbcb47-a729-2f75-c547-f2640258f811
2024-04-08T11:31:03.325Z [DEBUG] core.cluster-listener: performing client cert lookup
2024-04-08T11:31:26.367Z [TRACE] storage.raft: triggering raft config reload due to initial timeout
2024-04-08T11:31:26.367Z [TRACE] storage.raft: reloaded raft config to set lower timeouts: config="raft.ReloadableConfig{TrailingLogs:0x2800, SnapshotInterval:120000000000, SnapshotThreshold:0x2000, HeartbeatTimeout:5000000000, ElectionTimeout:5000000000}"

dc1-vault-03 logs:

Couldn't start vault with IPC_LOCK. Disabling IPC_LOCK, please use --cap-add IPC_LOCK
==> Vault server configuration:

Administrative Namespace:
             Api Address: https://dc1-vault-03:8200
                     Cgo: disabled
         Cluster Address: https://dc1-vault-03:8201
   Environment Variables: HOME, HOSTNAME, NAME, PATH, PWD, SHLVL, VERSION
              Go Version: go1.21.8
              Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", disable_request_limiter: "false", max_request_duration: "1m30s", max_request_size: "33554432", tls: "enabled")
               Log Level: trace
                   Mlock: supported: true, enabled: false
           Recovery Mode: false
                 Storage: raft (HA available)
                 Version: Vault v1.16.0, built 2024-03-25T12:01:32Z
             Version Sha: c20eae3e84c55bf5180ac890b83ee81c9d7ded8b

==> Vault server started! Log data will stream in below:

2024-04-08T11:18:46.608Z [INFO]  proxy environment: http_proxy="" https_proxy="" no_proxy=""
2024-04-08T11:18:46.615Z [DEBUG] storage.raft.fsm: time to open database: elapsed=7.474736ms path=/vault/file/vault.db
2024-04-08T11:18:46.626Z [INFO]  incrementing seal generation: generation=1
2024-04-08T11:18:46.628Z [DEBUG] core: set config: sanitized config="{\"administrative_namespace_path\":\"\",\"api_addr\":\"https://dc1-vault-03:8200\",\"cache_size\":0,\"cluster_addr\":\"https://dc1-vault-03:8201\",\"cluster_cipher_suites\":\"\",\"cluster_name\":\"\",\"default_lease_ttl\":0,\"default_max_request_duration\":0,\"detect_deadlocks\":\"\",\"disable_cache\":false,\"disable_clustering\":false,\"disable_indexing\":false,\"disable_mlock\":true,\"disable_performance_standby\":false,\"disable_printable_check\":false,\"disable_sealwrap\":false,\"disable_sentinel_trace\":false,\"enable_response_header_hostname\":false,\"enable_response_header_raft_node_id\":false,\"enable_ui\":true,\"experiments\":null,\"imprecise_lease_role_tracking\":false,\"introspection_endpoint\":false,\"listeners\":[{\"config\":{\"address\":\"0.0.0.0:8200\",\"tls_cert_file\":\"/vault/config/vault-cert.pem\",\"tls_client_ca_file\":\"/vault/config/vault-cert.pem\",\"tls_key_file\":\"/vault/config/vault-key.pem\"},\"type\":\"tcp\"}],\"log_format\":\"\",\"log_level\":\"trace\",\"log_requests_level\":\"\",\"max_lease_ttl\":0,\"pid_file\":\"\",\"plugin_directory\":\"\",\"plugin_file_permissions\":0,\"plugin_file_uid\":0,\"plugin_tmpdir\":\"\",\"raw_storage_endpoint\":false,\"seals\":[{\"disabled\":false,\"name\":\"shamir\",\"priority\":1,\"type\":\"shamir\"}],\"storage\":{\"cluster_addr\":\"https://dc1-vault-03:8201\",\"disable_clustering\":false,\"raft\":{\"max_entry_size\":\"\"},\"redirect_addr\":\"https://dc1-vault-03:8200\",\"type\":\"raft\"}}"
2024-04-08T11:18:46.628Z [DEBUG] storage.cache: creating LRU cache: size=0
2024-04-08T11:18:46.630Z [INFO]  core: Initializing version history cache for core
2024-04-08T11:18:46.630Z [INFO]  events: Starting event system
2024-04-08T11:18:46.632Z [DEBUG] cluster listener addresses synthesized: cluster_addresses=[0.0.0.0:8201]
2024-04-08T11:18:46.635Z [DEBUG] would have sent systemd notification (systemd not present): notification=READY=1
@ramnarayanp

Node1 shows some tls handshake errors.
Could you review your certs? Specifically the SAN entries?
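One way to check the SAN entries is with `openssl x509 -ext subjectAltName`. A minimal self-contained sketch (it generates a throwaway demo cert first; in the issue's setup you would instead point `-in` at `/vault/config/vault-cert.pem`, and the hostnames/paths below are assumptions taken from the reported config):

```shell
# Generate a throwaway self-signed cert with SAN entries for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -subj "/CN=dc1-vault-01" \
  -addext "subjectAltName=DNS:dc1-vault-01,DNS:localhost,IP:127.0.0.1"

# Print the SAN block; the cluster hostnames (dc1-vault-01, dc1-vault-02,
# dc1-vault-03) must appear here for the raft TLS dialing to succeed.
openssl x509 -in /tmp/demo-cert.pem -noout -ext subjectAltName
```

If the cluster hostnames are missing from the SAN list, the cluster-port TLS handshakes between nodes will be rejected.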

@person50002
Author

> Node1 shows some tls handshake errors. Could you review your certs? Specifically the SAN entries?

Are you referring to the following log?

2024-04-08T11:29:22.715Z [INFO]  http: TLS handshake error from 127.0.0.1:49832: remote error: tls: bad certificate

I believe this resulted from my trying to join node 2 without "-leader-ca-cert=@/vault/config/vault-cert.pem". After adding that option the join worked, but only when performed manually.
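For reference, the configuration equivalent of that manual `vault operator raft join -leader-ca-cert=@/vault/config/vault-cert.pem https://dc1-vault-01:8200` is a `retry_join` stanza with `leader_ca_cert_file`. A sketch for dc1-vault-02's `config.hcl`, with the paths and addresses assumed from the issue's setup:

```hcl
storage "raft" {
  path    = "/vault/file"
  node_id = "dc1-vault-02"

  retry_join {
    leader_api_addr     = "https://dc1-vault-01:8200"
    # Equivalent of -leader-ca-cert on the CLI: trust the self-signed
    # listener cert as the CA when dialing the leader's API address.
    leader_ca_cert_file = "/vault/config/vault-cert.pem"
  }
}
```

Without `leader_ca_cert_file` (or a CA in the system trust store), each automatic join attempt fails TLS verification the same way the manual join did.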

@person50002
Author

I have also noticed that vault does not write anything to the /vault/file directory, which means that data is not being persisted.

@hsimon-hashicorp
Contributor

For my own reference, did this work prior to Vault 1.16.0? Thanks!

@hsimon-hashicorp hsimon-hashicorp added core/ha specific to high-availability bug Used to indicate a potential bug labels Apr 17, 2024