Describe the bug
This is not a bug report as such, but I think Klaytn developers should be aware of this situation. The v1.8.3 tag includes a change to transaction processing. It seems to require more computing power for ordering transactions, so TPS was lower than in versions before 1.8.3 when I tested the performance of the Klaytn network I configured.
Before the 1.8.3 update, the Klaytn network was stable at 4,000 TPS even when a few nodes received all the load.
After updating to 1.8.3, the maximum TPS was 2,500 in the same situation, where a small number of nodes received all the load.
The test results are attached.
How to reproduce
Configuring the Klaytn network is easier if you use the klaytn-deploy tool. The conf.json containing the detailed network configuration is attached.
Create a Klaytn network with a Locust master and slaves: CN 4, PN 8 (CNs in a full mesh, PNs in a ring).
Create about eight ENs and connect all of them to only two PNs (one CC: one CN + two PNs).
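For reference, each Locust slave drives load by POSTing pre-signed transactions to a node's JSON-RPC endpoint. A minimal sketch of one request body (the helper name and the raw-transaction bytes are placeholders for illustration, not taken from the attached conf.json):

```python
import json

def make_send_tx_request(raw_tx, req_id=1):
    # Build the JSON-RPC body for Klaytn's raw-transaction submit method.
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "klay_sendRawTransaction",
        "params": [raw_tx],
        "id": req_id,
    })

# Placeholder signed-transaction bytes; a real test would draw them from
# the NumAccForSignedTx pre-funded accounts.
body = make_send_tx_request("0xf8...")
```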
mckim19 changed the title from "Decreased TPS after v1.8.3 (related to processing transactions FCFS)" to "Decreased TPS after v1.8.3 (related to processing transactions first come, first served)" on Jul 14, 2022
aidan-kwon changed the title from "Decreased TPS after v1.8.3 (related to processing transactions first come, first served)" to "Decreased tx propagation capability after v1.8.3 (related to processing transactions first come, first served)" on Jul 15, 2022
The block generation capability does not decrease much, but the block propagation capability from a single node did. So the TPS can recover when a network has more PNs/ENs, like Cypress.
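The reporter's guess that first-come, first-served ordering needs more computing power is plausible: picking the globally earliest-arrived executable transaction across all senders typically requires a priority queue, while still respecting per-sender nonce order. A minimal, hypothetical illustration (not Klaytn's actual txpool code):

```python
import heapq

# Pending txs per sender as (arrival_time, nonce), already nonce-ordered.
pending = {
    "alice": [(3, 0), (5, 1)],
    "bob":   [(1, 0), (4, 1)],
}

# Seed the heap with each sender's earliest executable tx.
heap = [(txs[0][0], sender, 0) for sender, txs in pending.items()]
heapq.heapify(heap)

order = []
while heap:
    # Pop the globally earliest-arrived tx (O(log n) per pop).
    arrival, sender, idx = heapq.heappop(heap)
    order.append((sender, pending[sender][idx][1]))
    # Make the sender's next tx (by nonce) eligible.
    if idx + 1 < len(pending[sender]):
        heapq.heappush(heap, (pending[sender][idx + 1][0], sender, idx + 1))

print(order)  # earliest arrivals first, nonces in order per sender
```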
Increasing TPS is an ordinary development goal, and it is hard to find a specific solution for this issue since the performance decrease was expected at the time. Also, a recent experiment shows better performance than reported here. So, closing this issue.
Test configuration:
CN 4, PN 8, EN 8, Locust master 1, Locust slave 4, Grafana 1
RPS: 5,000
Users: 500
Hatch Rate: 100
NumAccForSignedTx: 100,000
ActiveAccPercent: 100
SlavesPerNode: 5
Expected behavior
If the Klaytn version is 1.8.3 or later, the maximum TPS is under 2,500.
Attachments
v1.8.0 => 4,000 TPS
v1.8.2 => 4,000 TPS
v1.8.3 => 2,000~2,500 TPS
v1.8.4 => 2,000~2,500 TPS
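Taking the upper bound (2,500 TPS) of the post-1.8.3 range, the regression against the 4,000 TPS baseline works out to roughly a 37.5% drop:

```python
# Reported peak TPS per version (upper bound of each reported range).
results = {"v1.8.0": 4000, "v1.8.2": 4000, "v1.8.3": 2500, "v1.8.4": 2500}
baseline = results["v1.8.2"]

# Percentage drop of each version relative to the pre-change baseline.
drops = {v: round((baseline - tps) / baseline * 100, 1)
         for v, tps in results.items()}
print(drops)  # v1.8.3 and v1.8.4: 37.5% below baseline
```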
Configuration File
conf.json.txt
Environment