
[Bug]: TracerProvider::force_flush moved from API to SDK without any way to downcast GlobalTracerProvider #1423

Closed · ramosbugs opened this issue Dec 2, 2023 · 9 comments
Labels: bug (Something isn't working), triage:needmoreinfo (Has been triaged and requires more information from the filer)

Comments

@ramosbugs (Contributor)

What happened?

PR #658 (issue #588) moved the force_flush function from the opentelemetry::trace::TracerProvider trait to the opentelemetry_sdk::trace::TracerProvider struct. However, there does not appear to be any API to downcast the GlobalTracerProvider to opentelemetry_sdk::trace::TracerProvider.

Prior to the 0.18.0 release, it was possible to do the following:

opentelemetry::global::tracer_provider().force_flush()

Since opentelemetry::global::set_tracer_provider takes ownership of the provider, I don't think there is any way to actually call this function with the current public API.
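For illustration, a minimal sketch of the gap (nothing here is a proposed API; it just shows what no longer compiles):

// Pre-0.18: force_flush lived on the opentelemetry::trace::TracerProvider trait.
// opentelemetry::global::tracer_provider().force_flush();

// 0.18+: tracer_provider() returns an opaque GlobalTracerProvider, and there is
// no public way to downcast it to opentelemetry_sdk::trace::TracerProvider,
// which is where force_flush now lives.
let provider = opentelemetry::global::tracer_provider();
// provider.force_flush(); // does not compile: no such method on the trait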

cc: @rohithv1997 @TommyCpp @jtescher

API Version

0.18.0

SDK Version

0.18.0

What Exporters are you seeing the problem on?

No response

Relevant log output

No response

ramosbugs added the bug (Something isn't working) and triage:todo (Needs to be triaged) labels on Dec 2, 2023

@cijothomas (Member)

cijothomas added the triage:needmoreinfo (Has been triaged and requires more information from the filer) label and removed the triage:todo (Needs to be triaged) label on Dec 5, 2023
@ramosbugs (Contributor, Author)

@cijothomas: Thanks for the quick reply!

The use case here is FaaS/"serverless" (e.g., AWS Lambda). Specifically, we need a way to flush traces after a request completes, but before AWS Lambda freezes the container to wait for the next request (which may never come, in which case unflushed traces are lost). Prior to the 0.18 release, I would have called opentelemetry::global::tracer_provider().force_flush() here, but that's no longer possible.

Shutting down and registering a new tracer provider for each request would introduce a lot of unnecessary overhead when all we need is access to the force_flush method; a sketch of the per-request flow we're after follows.
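Roughly, the shape we need per request (the handler and its body are hypothetical stand-ins):

// Hypothetical Lambda handler; format! stands in for the real work.
fn handle_request(event: String) -> String {
  let response = format!("handled: {event}");
  // Flush spans here, before Lambda freezes the container; unflushed spans
  // are lost if no further request ever arrives.
  // Pre-0.18: opentelemetry::global::tracer_provider().force_flush();
  // 0.18+: nothing equivalent is reachable through the global API.
  response
}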

@ramosbugs (Contributor, Author)

Also, I'm willing to contribute a PR to get this done once there's consensus on the API changes.

@jtescher (Member) commented Dec 5, 2023

@ramosbugs how/where is the tracer provider being created in this case?

@ramosbugs (Contributor, Author)

I'm using the following code when the Lambda process starts (i.e., from main), before it enters its request loop; exporter, tracer_config, and MAX_SPAN_QUEUE_SIZE are defined earlier and not shown:

// Batch processor that exports spans on the Tokio runtime.
let batch = BatchSpanProcessor::builder(exporter, opentelemetry::runtime::Tokio)
  .with_max_queue_size(MAX_SPAN_QUEUE_SIZE)
  .build();

// Concrete SDK provider that owns the batch processor.
let provider = TracerProvider::builder()
  .with_span_processor(batch)
  .with_config(tracer_config)
  .build();

// Ownership of provider moves into the global here, leaving no handle
// through which to call force_flush later.
opentelemetry::global::set_tracer_provider(provider);

@ramosbugs (Contributor, Author)

One possible solution is to impl<T> TracerProvider for Arc<T> where T: TracerProvider. Then, we can just pass a copy of the Arc<opentelemetry_sdk::trace::TracerProvider> to set_tracer_provider and maintain our own copy for access to force_flush.

I don't see anything that requires a mutable reference to the provider, so an Arc seems like it should work?
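To make the idea concrete, here is a toy sketch of the delegation such a blanket impl would do (the trait is deliberately simplified; the real opentelemetry trait also has an associated Tracer type and more methods):

use std::sync::Arc;

// Simplified stand-in for opentelemetry::trace::TracerProvider.
trait TracerProvider {
  fn tracer(&self, name: &'static str);
}

// The proposed blanket impl: an Arc of a provider is itself a provider,
// delegating every call to the inner value through a shared reference.
impl<T: TracerProvider + ?Sized> TracerProvider for Arc<T> {
  fn tracer(&self, name: &'static str) {
    (**self).tracer(name)
  }
}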

@jtescher (Member) commented Dec 5, 2023

TracerProvider is already Clone, so you don't need the Arc; you can pass a clone to set_tracer_provider directly 👍
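In other words, something like this, reusing batch and tracer_config from the earlier snippet (a sketch; it assumes the 0.18-era SDK, where force_flush returns one result per span processor):

// Keep a handle to the concrete SDK provider; clones share the same
// Arc-backed internals, so the global and local handles stay in sync.
let provider = TracerProvider::builder()
  .with_span_processor(batch)
  .with_config(tracer_config)
  .build();

opentelemetry::global::set_tracer_provider(provider.clone());

// Later, after a request completes and before Lambda freezes the container:
for result in provider.force_flush() {
  if let Err(err) = result {
    eprintln!("failed to flush spans: {err}"); // hypothetical error handling
  }
}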

@ramosbugs (Contributor, Author)

Oh, I totally missed that! Thank you, I will confirm that this works and then close the issue or report back otherwise.

@ramosbugs (Contributor, Author)

Cloning the provider before passing it to set_tracer_provider works. Thanks again!
