tracing: allow turning the opentelemetry layer on and off by default #13002
Comments
Do we need to make the reload layer configurable? I see that it says "adds a small amount of overhead", but if we're going to run with it on in cloud I think we should also run with it on locally so that trace performance isn't somehow better when running locally! Adding this sounds good, though. Every service already has an internal HTTP server that's perfect for this. I think I would shy away from doing anything too fancy to propagate dynamic trace enablement across process boundaries. Instead we can push the complexity into a script in MaterializeInc/cloud that loops through all the pods in a namespace and frobs the trace enablement endpoints on all of them.
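A minimal sketch of the pod-loop script described above. The endpoint path (`/api/tracing`), port (`6878`), and namespace handling are all assumptions for illustration, not an existing API:

```shell
# Hypothetical sketch: toggle_tracing NAMESPACE ENABLED loops over every pod
# in the namespace and PUTs an assumed tracing-enablement endpoint on each
# pod's internal HTTP server. Port and path are made up for illustration.
toggle_tracing() {
  local ns="$1" enabled="$2"
  kubectl get pods -n "$ns" -o name | while read -r pod; do
    kubectl exec -n "$ns" "${pod#pod/}" -- \
      curl -fsS -X PUT "http://localhost:6878/api/tracing?enabled=${enabled}"
  done
}
```

Usage would be something like `toggle_tracing my-namespace true` to enable tracing across every pod at once, keeping the cross-process complexity out of the services themselves.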
Another option would be to push diagnostic knobs like telemetry settings, debug levels, etc. into a
@benesch yes, we can set to
I can imagine conditionally piggy-backing the somewhat heavy traces of the various representations of a query in the optimizer lifecycle, and making the reload layer configurable is probably the way to do it. I should be able to do that if
Blocked on tokio-rs/tracing#2159
Fixed by #13361.
OTel traces for cloud instances are already eating up Honeycomb limits. To prevent problems, and to
Steps:

- Use a `reload::Layer<Option<[otel layer]>>`, defaulting to `None`, if a `--dynamic-opentelemetry` flag is on