Currently, if a node was initially synced using state sync and state sync remains enabled in the node's configuration, the node may silently hang forever after a restart.
Here is an example log:
{"caller":"common.go:107","level":"debug","module":"oasis-node","msg":"common initialization complete","ts":"2021-10-21T13:08:44.880920584Z"}
{"Version":"21.3.3","caller":"node.go:565","level":"info","module":"oasis-node","msg":"Starting oasis-node","ts":"2021-10-21T13:08:44.882599984Z"}
{"caller":"helpers.go:53","level":"debug","module":"common/persistent","msg":"First key=\"upgrade.descriptors\\xff\\xff\\xff\\xff\\xff\\xff\\xbc\\xe9\"","ts":"2021-10-21T13:08:44.910470083Z"}
{"caller":"helpers.go:49","level":"info","module":"common/persistent","msg":"All 2 tables opened in 2ms","ts":"2021-10-21T13:08:44.924783335Z"}
{"caller":"helpers.go:49","level":"info","module":"common/persistent","msg":"Discard stats nextEmptySlot: 0","ts":"2021-10-21T13:08:44.931382653Z"}
{"caller":"helpers.go:49","level":"info","module":"common/persistent","msg":"Set nextTxnTs to 17174","ts":"2021-10-21T13:08:44.931850901Z"}
{"caller":"helpers.go:49","level":"info","module":"common/persistent","msg":"Deleting empty file: /srv/oasis/node/persistent-store.badger.db/000134.vlog","ts":"2021-10-21T13:08:44.93228416Z"}
{"caller":"upgrade.go:153","level":"debug","module":"upgrade","msg":"no pending descriptors, continuing startup","ts":"2021-10-21T13:08:44.946262556Z"}
{"caller":"helpers.go:53","level":"debug","module":"common/persistent","msg":"writeRequests called. Writing to value log","ts":"2021-10-21T13:08:44.946783498Z"}
{"caller":"helpers.go:53","level":"debug","module":"common/persistent","msg":"Sending updates to subscribers","ts":"2021-10-21T13:08:44.955785744Z"}
{"caller":"helpers.go:53","level":"debug","module":"common/persistent","msg":"Writing to memtable","ts":"2021-10-21T13:08:44.956308364Z"}
{"caller":"helpers.go:53","level":"debug","module":"common/persistent","msg":"2 entries written","ts":"2021-10-21T13:08:44.960283204Z"}
{"caller":"node.go:618","consensus_pk":"LDHAd81PaK3DzyKjxVKUFTmH3ghBKMYdoftVsK0Bsp4=","level":"info","module":"oasis-node","msg":"loaded/generated node identity","node_pk":"HG5TB3dbY8gtYBBw/R/cHfPaOpe0vT7W1wu/2rtpk/A=","p2p_pk":"AY6IP5IHQrHWWtd3jpI8/SrkYR9Cfpjl33tiUrTrM10=","tls_pk":"lAdq5iPTJ881Wd/vHmg6AXJu0V8MmwTTnJrAcR62hKA=","ts":"2021-10-21T13:08:44.967420767Z"}
{"caller":"node.go:679","level":"info","module":"oasis-node","msg":"starting Oasis node","ts":"2021-10-21T13:08:45.028443179Z"}
{"caller":"full.go:1516","level":"info","module":"tendermint","msg":"starting a full consensus node","ts":"2021-10-21T13:08:45.029895619Z"}
{"caller":"helpers.go:49","level":"info","module":"mkvs/db/badger","msg":"All 54 tables opened in 9ms","ts":"2021-10-21T13:08:45.062174937Z"}
{"caller":"helpers.go:49","level":"info","module":"mkvs/db/badger","msg":"Discard stats nextEmptySlot: 0","ts":"2021-10-21T13:08:45.072639594Z"}
{"caller":"helpers.go:49","level":"info","module":"mkvs/db/badger","msg":"Set nextTxnTs to 6417758","ts":"2021-10-21T13:08:45.073101558Z"}
{"caller":"helpers.go:49","level":"info","module":"mkvs/db/badger","msg":"Deleting empty file: /srv/oasis/node/tendermint/abci-state/mkvs_storage.badger.db/000134.vlog","ts":"2021-10-21T13:08:45.074048826Z"}
{"caller":"prune.go:226","level":"debug","module":"abci-mux/pruner","msg":"ABCI state pruner initialized","num_kept":3600,"strategy":"none","ts":"2021-10-21T13:08:45.075774681Z"}
{"block_hash":"9bb6188bec4466da480a3e00985ddb0596b474e9dc1e0f43ada05a08b4c380b8","block_height":6417756,"caller":"mux.go:1181","level":"debug","module":"abci-mux","msg":"ABCI multiplexer initialized","ts":"2021-10-21T13:08:45.076610923Z"}
{"caller":"checkpointer.go:276","check_interval":"1m0s","level":"debug","module":"storage/mkvs/checkpoint/consensus","msg":"storage checkpointer started","namespace":"0000000000000000000000000000000000000000000000000000000000000000","ts":"2021-10-21T13:08:45.08472734Z"}
{"caller":"full.go:1243","level":"info","module":"tendermint","msg":"state sync enabled","ts":"2021-10-21T13:08:45.349979236Z"}
{"caller":"client.go:200","level":"info","module":"tendermint:base","msg":"Downloading trusted light block using options","ts":"2021-10-21T13:08:45.377107503Z"}
{"caller":"helpers.go:53","level":"debug","module":"common/persistent","msg":"No file with discard stats","ts":"2021-10-21T13:13:44.959526534Z"}
{"caller":"helpers.go:53","level":"debug","module":"mkvs/db/badger","msg":"No file with discard stats","ts":"2021-10-21T13:13:45.085001637Z"}
{"caller":"helpers.go:53","level":"debug","module":"common/persistent","msg":"No file with discard stats","ts":"2021-10-21T13:18:44.96032654Z"}
{"caller":"helpers.go:53","level":"debug","module":"mkvs/db/badger","msg":"No file with discard stats","ts":"2021-10-21T13:18:45.085777945Z"}
... trimmed ...
... nothing else besides "No file with discard stats" messages ...
The expected behavior would be for the node to detect that it has already been synced and to disable state sync automatically.
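As a workaround until the node handles this automatically, operators can disable state sync after the initial sync has completed. The sketch below shows the relevant configuration section; the key names are assumed from the 21.x-era oasis-node config layout and should be verified against the documentation for your node version:

```yaml
# Assumed oasis-node (21.x era) config layout -- verify key names
# against your version's documentation before applying.
consensus:
  tendermint:
    state_sync:
      # Set to false (or remove the whole state_sync section) once
      # the initial state sync has completed, so that subsequent
      # restarts resume from local state instead of hanging here.
      enabled: false
```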