tBTC v2 monitoring

This package provides a monitoring tool for tBTC v2 system events. This component is part of a broader monitoring and telemetry system.

How it works

Architecture

The high-level architecture of the monitoring tool looks as follows:

       +------------------+
       |                  |
       |  tBTC v2 system  |
       |                  |
       +------------------+
               ^  ^
               |  |
        +------+  +------+
        |                |
        |                |
 +------+------+  +------+------+
 |             |  |             |
 |  Monitor 1  |  |  Monitor 2  |
 |             |  |             |
 +------+------+  +------+------+
        |                |
        |                |
        +------+  +------+
               |  |
               v  v
          +------------+      +---------------+
          |            |      |               |
          |  Manager   +----->|  Persistence  |
          |            |      |               |
          +----+--+----+      +---------------+
               |  |
               |  |
        +------+  +------+
        |                |
        v                v
+--------------+  +--------------+
|              |  |              |
|  Receiver 1  |  |  Receiver 2  |
|              |  |              |
+--------------+  +--------------+

The specific components are:

  • tBTC v2 system: On-chain smart contracts

  • Monitors: Components that observe the tBTC v2 system and generate appropriate system events according to their internal logic

  • Manager: The main component that manages a given monitoring run. It gathers the system events generated by the Monitors and dispatches them to the registered Receivers

  • Persistence: Component that persists the Manager's processing data

  • Receivers: Components that receive system events from the Manager and decide whether to handle or ignore them

Such a structure gives great flexibility and makes the monitoring tool easy to extend: new Monitors and Receivers can be implemented without hassle, and the Persistence component can be swapped at any time.
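
To make these contracts concrete, here is a minimal sketch of how the component interfaces could look. All names below (SystemEvent, Monitor, Receiver, Persistence, and their methods) are illustrative assumptions, not the package's actual API.

```typescript
// Illustrative component contracts; names and shapes are assumptions,
// not the package's actual API.

// A system event generated by a Monitor.
interface SystemEvent {
  title: string
  type: string
  block: number
  data: Record<string, string>
}

// Monitors observe the tBTC v2 system within a block window and
// generate system events according to their internal logic.
interface Monitor {
  check(fromBlock: number, toBlock: number): Promise<SystemEvent[]>
}

// Receivers get system events from the Manager and decide whether
// to handle or ignore them.
interface Receiver {
  receive(event: SystemEvent): Promise<void>
}

// Persistence stores the Manager's processing data, e.g. the
// checkpoint block and the set of already handled events.
interface Persistence {
  checkpointBlock(): Promise<number>
  updateCheckpointBlock(block: number): Promise<void>
  isHandled(event: SystemEvent): Promise<boolean>
  markHandled(event: SystemEvent): Promise<void>
}
```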

The flow

The monitoring tool is meant to be used in a script-like way, i.e. triggered in a loop with fixed delays between subsequent runs. A single run always focuses on a specific chain block window. The window’s starting block is determined based on the previous run’s end block held by the Persistence component (the so-called checkpoint block). The starting block is also decremented by a constant of 12 blocks to cover possible chain reorganizations. The window’s ending block is always the latest mined block (i.e. the chain tip). For example, assuming that the initial checkpoint block is 100, the loop will look as follows:

  • Run 1: Start block 100 - 12 = 88, end block 150

  • Run 2: Start block 150 - 12 = 138, end block 155

  • Run 3: Start block 155 - 12 = 143, end block 200

This achieves a sliding block window that covers the entire chain. It is worth noting that the checkpoint block is updated only if the given run completes successfully. A run is considered successful when no error was thrown by any of the components. If the run is not successful, the checkpoint block is not updated and the next run will cover the failed range again:

  • Run 1: Start block 100 - 12 = 88, end block 150 - failed

  • Run 2: Start block 100 - 12 = 88, end block 170 - failed

  • Run 3: Start block 100 - 12 = 88, end block 200 - success

  • Run 4: Start block 200 - 12 = 188, end block 250 - success

This way, the monitoring tool guarantees that no system events are missed. However, this approach can produce duplicate system events. To prevent that, the Manager component contains deduplication logic that leverages the Persistence component to filter out already handled system events.
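
Putting the window computation, checkpoint handling, and deduplication together, a single run could be sketched as follows. This reuses the illustrative interfaces from the sketch above; runOnce, getLatestBlock, and REORG_DEPTH_BLOCKS are hypothetical names, with the constant mirroring the 12-block reorg buffer described earlier.

```typescript
// A sketch of a single monitoring run, assuming the illustrative
// interfaces from the previous snippet. getLatestBlock stands in for
// a chain client call; REORG_DEPTH_BLOCKS mirrors the 12-block
// decrement described above.

const REORG_DEPTH_BLOCKS = 12

async function runOnce(
  monitors: Monitor[],
  receivers: Receiver[],
  persistence: Persistence,
  getLatestBlock: () => Promise<number>
): Promise<void> {
  // Start from the previous run's checkpoint, rewound by a constant
  // number of blocks to cover possible chain reorganizations.
  const fromBlock = (await persistence.checkpointBlock()) - REORG_DEPTH_BLOCKS
  // End at the latest mined block, i.e. the chain tip.
  const toBlock = await getLatestBlock()

  // Gather system events from all Monitors. Any thrown error fails
  // the whole run, so the checkpoint stays put and the next run
  // covers the failed range again.
  const events = (
    await Promise.all(monitors.map((m) => m.check(fromBlock, toBlock)))
  ).flat()

  for (const event of events) {
    // Deduplication: skip events already handled in a previous run,
    // e.g. ones picked up again because of the window overlap.
    if (await persistence.isHandled(event)) {
      continue
    }
    await Promise.all(receivers.map((r) => r.receive(event)))
    await persistence.markHandled(event)
  }

  // The run succeeded; advance the checkpoint to this run's end block.
  await persistence.updateCheckpointBlock(toBlock)
}
```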

Build and deployment

Prerequisites

Please make sure you have the following prerequisites installed on your machine:

  • Node.js =14 (newer versions are temporarily not supported)

  • Yarn >1.22.0

Install dependencies

To install dependencies, run:

yarn install

NOTE: This package contains some transitive dependencies that are referenced via the unauthenticated git:// protocol, which is no longer supported by GitHub. This means that in certain situations, installing the package or updating its dependencies with Yarn may result in "The unauthenticated git protocol on port 9418 is no longer supported" or "fatal: unable to connect to github.com" errors.

As a workaround, we advise changing the Git configuration to use the https:// protocol instead of git:// by executing:

git config --global url."https://".insteadOf git://

Build

To build the library, invoke:

yarn build

A dist directory containing the resulting artifacts will be created.

Run

A single run of the monitoring tool can be triggered using the Node runtime:

node .

The behavior can be configured using the following environment variables:

| Variable | Description | Mandatory |
|----------|-------------|-----------|
| ENVIRONMENT | mainnet or testnet | Yes |
| ETHEREUM_URL | URL of the Ethereum node | Yes |
| ELECTRUM_URL | URL of the Electrum node | Yes |
| LARGE_DEPOSIT_THRESHOLD_SAT | Satoshi threshold used to determine which deposits are large. Default: 1000000000 | No |
| LARGE_REDEMPTION_THRESHOLD_SAT | Satoshi threshold used to determine which redemptions are large. Default: 1000000000 | No |
| DATA_DIR_PATH | Directory used to persist processing data. Default: ./data | No |
| SENTRY_DSN | DSN of the Sentry receiver. If not set, events are not dispatched to Sentry | No |
| DISCORD_WEBHOOK_URL | URL of the Discord receiver webhook. If not set, events are not dispatched to Discord | No |
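
For illustration, the variables above could be read at startup roughly as follows. The Config shape and the loadConfig/requireEnv helpers are hypothetical; only the variable names, defaults, and mandatory flags come from the table.

```typescript
// A hypothetical config loader for the variables listed above; the
// actual package may structure this differently.

interface Config {
  environment: "mainnet" | "testnet"
  ethereumUrl: string
  electrumUrl: string
  largeDepositThresholdSat: number
  largeRedemptionThresholdSat: number
  dataDirPath: string
  sentryDsn?: string
  discordWebhookUrl?: string
}

// Throws if a mandatory variable is missing.
function requireEnv(name: string): string {
  const value = process.env[name]
  if (!value) {
    throw new Error(`missing mandatory environment variable: ${name}`)
  }
  return value
}

function loadConfig(): Config {
  return {
    environment: requireEnv("ENVIRONMENT") as "mainnet" | "testnet",
    ethereumUrl: requireEnv("ETHEREUM_URL"),
    electrumUrl: requireEnv("ELECTRUM_URL"),
    largeDepositThresholdSat: Number(
      process.env.LARGE_DEPOSIT_THRESHOLD_SAT ?? "1000000000"
    ),
    largeRedemptionThresholdSat: Number(
      process.env.LARGE_REDEMPTION_THRESHOLD_SAT ?? "1000000000"
    ),
    dataDirPath: process.env.DATA_DIR_PATH ?? "./data",
    sentryDsn: process.env.SENTRY_DSN,
    discordWebhookUrl: process.env.DISCORD_WEBHOOK_URL,
  }
}
```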

Deployment

Docker image

The monitoring tool can be run as a Docker container. To build the image, invoke:

docker build -t tbtc-v2-monitoring .

Once the image is built, a single run of the monitoring tool can be triggered with:

docker run --volume /$(pwd)/data:/mnt/data \
  --env DATA_DIR_PATH=/mnt/data \
  --env <other-envs> \
  tbtc-v2-monitoring

Kubernetes

The monitoring tool can be deployed on Kubernetes as a CronJob. An example configuration can be found here.