Performance for DebugTraceCall #23633

Closed
FeurJak opened this issue Sep 24, 2021 · 5 comments

Comments

@FeurJak

FeurJak commented Sep 24, 2021

Is there any benchmark on the performance of DebugTraceCall available somewhere?

It takes 1ms to make a Call to an address and get results, but with the standard callTracer it can take 100ms-300ms.

Even with a simple custom tracer:

```js
{
  hist: {},
  step: function(log) {
    var error = log.getError();
    if (error !== undefined) {
      this.fault(log);
    }
  },
  fault: function(log) {
    this.hist['addr'] = toHex(toAddress(log.stack.peek(3).toString(16)));
  },
  result: function(ctx) { return this.hist; }
}
```

This tracer just tells me which call to an address has failed, yet it takes 60ms-70ms to get a response.
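For context, a custom tracer like this is passed as the tracer field in the debug_traceCall config object. A minimal sketch of the request shape, where the call object values are placeholders and tracerSource is assumed to hold the tracer string above:

```js
// Sketch of a debug_traceCall JSON-RPC request body. The "to" and "data"
// values are placeholders, and tracerSource is assumed to hold the
// custom tracer source string shown above.
var request = {
  jsonrpc: "2.0",
  id: 1,
  method: "debug_traceCall",
  params: [
    { to: "0x0000000000000000000000000000000000000000", data: "0x" }, // call object
    "latest",                                                         // block number or hash
    { tracer: tracerSource }                                          // trace config
  ]
};
```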

I apologize if I'm mistaken, but I thought using DebugTraceCall was similar to making a standard call, except that the VM logs are interpreted by the specified JavaScript? Which component is responsible for taking >10ms when tracing a call?

@s1na
Contributor

s1na commented Sep 27, 2021

Tracing comes with some overhead. Most of it is because on every opcode execution we have to execute JS code from Go, which is expensive, especially if the transaction is computationally heavy.

#23087 was recently merged, which comes with a faster callTracer. And if you only care about the status of internal calls, then instead of step you can use the new enter and exit JS methods; that will make tracing faster because the Go-JS overhead then happens at the call level rather than at the opcode level.
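For illustration, a minimal sketch of the tracer from the question rewritten with the enter/exit hooks from #23087; callstack is a helper name introduced here, and the frame/result accessors follow the new call-frame API:

```js
// Minimal sketch of a call-level tracer using enter/exit instead of step.
// It records the callee address of every internal call that errors;
// "callstack" is a helper field introduced for this sketch.
{
  hist: {},
  callstack: [],
  enter: function(frame) {
    // invoked once when an internal call starts: remember the callee
    this.callstack.push(toHex(frame.getTo()));
  },
  exit: function(res) {
    // invoked once when that call returns: record it if it errored
    var addr = this.callstack.pop();
    if (res.getError() !== undefined) {
      this.hist[addr] = res.getError();
    }
  },
  // step is omitted entirely; fault and result are still required
  fault: function(log, db) {},
  result: function(ctx, db) { return this.hist; }
}
```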

@holiman
Contributor

holiman commented Sep 27, 2021

As @s1na has pointed out, there's a heavy overhead. Each single EVM opcode execution becomes a switch from Go to C++, where it runs interpreted JavaScript. In your simple tracer, you have a getError() for each opcode, which is a call from C++ land back to Go (and it's pretty pointless, because fault will be invoked automatically on errors).
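A sketch of the same tracer with that redundant check removed; fault fires on its own when an opcode errors, so step can stay empty (though the per-opcode Go-JS switch itself remains, which is what enter/exit avoids):

```js
// Sketch of the question's tracer without the per-opcode getError() call;
// fault is invoked automatically on errors, so step is a no-op. Note the
// Go-JS transition still happens once per opcode because step exists.
{
  hist: {},
  step: function(log, db) {},
  fault: function(log, db) {
    this.hist['addr'] = toHex(toAddress(log.stack.peek(3).toString(16)));
  },
  result: function(ctx, db) { return this.hist; }
}
```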

If your use case fits it, you can skip step and implement enter and exit instead, which can boost the speed by orders of magnitude. If the use case doesn't fit, there's not much to be done at this point.

@holiman holiman closed this as completed Sep 27, 2021
@FeurJak
Author

FeurJak commented Sep 27, 2021

Really awesome replies, guys! I will check out the new merge. I'm operating on the BSC fork of geth and I don't think that was ever implemented there, so I will try to apply the same changes.

@FeurJak
Author

FeurJak commented Sep 28, 2021

@holiman @s1na, not sure what magic potion you guys were on, but those changes made a massive decrease in the time it takes for callTracer. I just applied the changes on BSC, and in my test it was reduced from 100ms down to 17ms; that's massive...

@holiman
Contributor

holiman commented Sep 28, 2021

> 100ms down to 17ms, that's massive...

Thanks! But actually, I'd expect more than a 5x speed-up for most calltrace ops, somewhere between 1 and 3 orders of magnitude, but YMMV I guess.
