[State Sync] DHT causing high resource utilization #5799
Comments
What is the order of growth of your network and of the content (number of provider records in the DB)?
@guillaumemichel this network only had 4 nodes participating in the DHT. We don't expose any metrics on the provider records, so I'm not sure what the total count is, but we are producing 3-4 new records per second, and each node held ~10M blobs in their datastore.
This means that all nodes are getting allocated all the data. How long do you keep the provider records before flushing them? Do you keep republishing all the records over time? What datastore are you using? If you are using an in-memory datastore and the number of records keeps increasing, that may explain the increased memory usage.
we use the default, which appears to be 48 hours
yes, we had bitswap configured to reprovide every 12 hours. I don't believe we set a specific setting in the DHT, so whatever the default is for that.
we're using the memory map db
It definitely explains some general increases, but we were seeing 20+ GB spikes in memory for nodes with only a few million entries. Interestingly, we're only seeing this on one of our networks (luckily not mainnet).
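The numbers in this thread allow a rough sanity check of the memory growth. The figures below (3-4 new records/s, 48h record validity, ~10M blobs per node, 4 DHT nodes, a guessed per-record overhead) all come from the exchange above or are labeled assumptions:

```go
package main

import "fmt"

func main() {
	const (
		newRecordsPerSecond = 3.5   // ~3-4 new provider records/s (from this thread)
		recordTTLHours      = 48.0  // default provider record validity
		blobsPerNode        = 10e6  // ~10M blobs in each node's datastore
		dhtNodes            = 4.0   // nodes participating in the DHT
		bytesPerRecord      = 512.0 // rough guess: key + peer ID + timestamps + map overhead
	)

	// Newly produced records accumulate for one TTL before expiring.
	fromNewRecords := newRecordsPerSecond * recordTTLHours * 3600

	// With only 4 nodes, every node is "closest" to every key, so each
	// node ends up holding a provider record for every (blob, provider)
	// pair that is republished within the TTL window.
	fromReprovides := blobsPerNode * dhtNodes

	fmt.Printf("records from new content: %.1f M\n", fromNewRecords/1e6)
	fmt.Printf("records from reprovides:  %.1f M (~%.0f GiB at 512 B each)\n",
		fromReprovides/1e6, fromReprovides*bytesPerRecord/(1<<30))
}
```

New records alone are well under a gigabyte, consistent with the comment above that they explain only "general increases"; the reprovided blob records, by contrast, land in the same ~20 GB range as the observed spikes (if the blob sets of the 4 nodes largely overlap, the count would be proportionally lower).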
🐞 Bug Report
Over the last year or so, we've had incidents of high resource utilization (memory, cpu, goroutines) across all node roles and networks (mainnet, testnet, lower environments). Profiles have all pointed to the IPFS DHT as the main culprit.
Context
DHT stands for Distributed Hash Table and is a library provided by the team behind IPFS for decentralized peer discovery and content routing on the IPFS network.
It's used for two use cases on the Flow network:
On the public network node identities are not required to be static, so nodes need a peer-to-peer method to discover peers and maintain a map of the network.
Bitswap is a protocol for sharing arbitrary data among decentralized nodes. On Flow, it's used to share execution data between Execution and Access nodes. The DHT allows peers to announce which blobs of data they have, which makes syncing more efficient.
Originally, it was enabled for both the staked and public network for all nodes, even though it was only needed for Access and Execution nodes on staked network.
Past Issues
Originally, the issue manifested as a linear, unbounded goroutine leak observed on all node types. Nodes would eventually run out of memory and crash. libp2p ResourceManager limits were tuned (#4846), and eventually the DHT was disabled on all nodes that did not require it (#5797). This resolved the issue for those nodes; however, the intermittent leaks persisted for Access and Execution nodes. Profiles showed the resource manager blocking new streams, which limited how high the goroutine leaks got.
Upgrading libp2p and the DHT library (libp2p/go-libp2p-kad-dht) (#5417) resolved the goroutine leak, but the issue then manifested in different ways. We started to observe spikes in goroutines that were capped around 8,500, but memory utilization remained high.
Current Issues
Recently, we've been seeing two more issues that seem to be related:
Profiles point directly to the DHT as the source of the memory growth. In particular, the in-memory db used to store the list of providers for bitswap data.
There isn't a direct link between this issue and the DHT, other than that 20-30% of CPU on the nodes was used for DHT-related activities. Also, a high rate of heap allocations results in more frequent garbage collection.
Disabling the DHT
The main intention of the DHT is to make it efficient to disseminate the mapping of which blocks of data are stored on which nodes. The basic design makes a few assumptions:
On the Flow staked bitswap network, none of those assumptions are true.
Additionally, bitswap already has a built-in mechanism for discovering peers that have data the client wants. This mechanism is used first before looking into the DHT, so the DHT is rarely used in practice.
Given all of that, there seems to be limited value in running the DHT on the staked network, especially given the amount of overhead it introduces.
See these comments for an analysis of disabling the DHT (#5798 (comment), #5798 (comment), #5798 (comment), #5798 (comment)).
Next steps
Disabling the DHT seems to be a viable option for nodes on the staked network. It is still needed on the public network for peer discovery, though possibly not for bitswap. More investigation is needed to understand if these issues will also appear there.
We could also try these options for reducing the memory utilization:
We could also explore options for limiting which blobs are reprovided, e.g. only reproviding blobs from the last N blocks.
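A hedged sketch of that last idea: rather than reproviding every blob, filter the key set down to blobs from a recent window of blocks. The blob-to-height mapping and the window size are illustrative assumptions, not existing Flow APIs:

```go
package main

import "fmt"

// blob pairs a content key with the block height it belongs to.
// (In Flow the height would come from execution data metadata; this
// mapping is an illustrative assumption.)
type blob struct {
	Key    string
	Height uint64
}

// recentBlobs filters the full blob set down to those within the last
// n blocks, so only a bounded window of keys is ever reprovided.
func recentBlobs(all []blob, latestHeight, n uint64) []blob {
	var out []blob
	for _, b := range all {
		if latestHeight-b.Height < n {
			out = append(out, b)
		}
	}
	return out
}

func main() {
	all := []blob{
		{"blob-old", 100},
		{"blob-mid", 950},
		{"blob-new", 999},
	}
	// Reprovide only blobs from the last 100 blocks at height 1000.
	for _, b := range recentBlobs(all, 1000, 100) {
		fmt.Println(b.Key) // blob-mid, blob-new
	}
}
```

This bounds the provider-record count by the window size rather than the full datastore, at the cost that peers can no longer discover older blobs via the DHT.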