Prometheus Exporter; add resource labels to all metrics based on config flag #5529

Closed
mdonkers opened this issue Jun 12, 2023 · 5 comments
Labels: Feature Request

Comments

@mdonkers

Is your feature request related to a problem? Please describe.
The Prometheus Exporter currently creates a target_info metric, based on an issue filed earlier.

I believe writing the target_info itself is fine. But as far as I can see, not adding Resource labels to all the other metrics is problematic when scraping and processing metrics purely with Prometheus: unless I'm missing something, there is no way for the Prometheus scrape configuration to pick up target_info and add those labels to all other metrics, which in turn makes it impossible to filter metrics, e.g. by service.name.

Describe the solution you'd like
I'd like a configuration flag / setting to add the Resource information to all metrics, so that they can be discovered properly in Prometheus.

Describe alternatives you've considered
I've looked at ways to pick up labels from target_info and add them to all metrics, e.g. via relabel_configs in Prometheus, but this doesn't seem to be possible.
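The workaround that usually gets suggested only works at query time, not at scrape time: joining the metric with target_info in PromQL. A minimal sketch, where the series name http_server_active_requests and the label deployment_environment are placeholders for whatever ends up on your metrics and on target_info:

```promql
# target_info always has the value 1, so multiplying by it keeps the metric's
# value, while group_left() copies the listed target_info labels onto the result.
# Series and label names here are placeholders.
http_server_active_requests
  * on (job, instance) group_left (deployment_environment)
  target_info
```

This helps when querying, but it still does not give every stored series its own resource labels.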

Additional context
The Collector Prometheus Exporter has a specific flag to set this: resource_to_telemetry_conversion, see https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/prometheusexporter
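For reference, a minimal sketch of how that flag is enabled in a Collector configuration (the endpoint value is only an example):

```yaml
exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"   # example listen address for the scrape endpoint
    resource_to_telemetry_conversion:
      enabled: true            # copy resource attributes onto every exported metric as labels
```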

I'm happy to provide a PR myself to create a flag and add this (based on my request above), but I would first like to have a discussion on whether I've overlooked something, before putting in the time.

mdonkers added the Feature Request label on Jun 12, 2023
@jkwatson
Contributor

One thing to verify before starting to work on code: does the OTel specification have anything to say on this topic? If this is something that the OTel Prometheus support should be doing in all languages, then it needs to get into the specification, so it can be done the same way across all the language implementations.

@mdonkers
Author

So in the spec, there is a description of the Prometheus exporter format, specifically: Prometheus and OpenMetrics Compatibility.

I find the description somewhat ambiguous though. The important part:

In OTLP, the service.name, service.namespace, and service.instance.id triplet is required to be unique, which makes them good candidates to use to construct job and instance. In the collector Prometheus exporters, the service.name and service.namespace attributes MUST be combined as <service.namespace>/<service.name>, or <service.name> if namespace is empty, to form the job metric label. The service.instance.id attribute, if present, MUST be converted to the instance label; otherwise, instance should be added with an empty value. Other resource attributes SHOULD be converted to a target_info metric, or MUST be dropped. The target_info metric is an info-typed metric whose labels MUST include the resource attributes, and MUST NOT include any other labels other than job and instance.

If I read that correctly, it first says to combine service.name, service.namespace and service.instance.id into the job and instance labels (for every metric), and to move all other attributes to target_info.
But then there is this line:

The target_info metric is an info-typed metric whose labels MUST include the resource attributes, and MUST NOT include any other labels other than job and instance.

Which in itself is contradictory: the labels must include the resource attributes, yet must not include any labels other than job and instance.

I'm not sure where to get this properly addressed (I'll try the OTel specification channel on the Cloud Native Slack).

But for me, transforming service.name, service.namespace and service.instance.id into the job and instance labels, and then having those labels present on every metric, would be sufficient.
That is something I'd happily implement, and I propose to put it behind a flag so as not to break current usage if needed.
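To make the intended mapping concrete, here is an illustrative sketch (all attribute names and values below are made up) of what the exposition could then look like:

```text
# Resource: service.namespace="shop", service.name="cart",
#           service.instance.id="627cc493", host.name="node-1"
http_server_active_requests{job="shop/cart", instance="627cc493"} 3
target_info{job="shop/cart", instance="627cc493", host_name="node-1"} 1
```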

@mdonkers
Author

Closing, as the Java Prometheus Exporter works according to the spec, as discussed on the Cloud Native Slack.

@parasjain993 commented May 8, 2024

Do we know why this was closed? The Slack thread seems to have been archived. Also, I tried setting service.name, service.namespace and service.instance.id in OTEL_RESOURCE_ATTRIBUTES in my Java application, but I still don't see the translated job and instance labels in my Prometheus metrics.

@mdonkers
Author

Do we know why this was closed? The Slack thread seems to have been archived. Also, I tried setting service.name, service.namespace and service.instance.id in OTEL_RESOURCE_ATTRIBUTES in my Java application, but I still don't see the translated job and instance labels in my Prometheus metrics.

The Slack thread is still available to me, in the #otel-specification channel of the Cloud Native Computing Slack organization.

But the gist of it is that the Prometheus Exporter works according to the specification, so the spec would need to change first, as otherwise this exporter would behave differently from all the other implementations.
What confused me is that the OpenTelemetry Collector showed different behaviour, but that was because it was still using OpenCensus back then. When forcibly switching it to OpenTelemetry (which is the default by now), it works the same as this Prometheus Exporter.

Hope that clears it up.
