
Integrate with existing metrics code #432

Merged
merged 22 commits on Oct 14, 2022

Conversation

@avillela (Contributor) commented Oct 11, 2022

Fixes #72

Changes

  • Clean up requirements.txt
    • Use opentelemetry-distro to install relevant OpenTelemetry packages
    • Since we are installing psutil, opentelemetry-bootstrap will also install opentelemetry-instrumentation-system-metrics
  • Update recommendationservice's Dockerfile to:
    • Install a specific version of grpcio-tools to avoid compatibility issues between protobuf library versions.
    • Run opentelemetry-bootstrap to install the auto-instrumentation libraries (see the sketch after this list)
  • Clean up metrics and tracing init
  • Move metrics definitions to metrics.py
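For reference, here is a minimal sketch of what the dependency and Dockerfile changes could look like; the pinned grpcio-tools version is an illustrative assumption, not necessarily the one used in this PR.

# requirements.txt (sketch)
opentelemetry-distro
opentelemetry-exporter-otlp
psutil

# Dockerfile excerpt (sketch; the grpcio-tools pin is an assumed example version)
RUN pip install "grpcio-tools==1.48.*" && \
    pip install -r requirements.txt && \
    opentelemetry-bootstrap -a install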

The following metrics are available through auto-instrumentation, courtesy of opentelemetry-instrumentation-system-metrics, which opentelemetry-bootstrap installs when the recommendationservice Docker image is built (a short sketch of how this instrumentation is enabled follows below):

  • runtime.cpython.cpu_time

  • runtime.cpython.memory

  • runtime.cpython.gc_count

  • Appropriate CHANGELOG.md updated for non-trivial changes
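As a rough illustration of where these metrics come from (a sketch, not code from this PR): the system-metrics package exposes an instrumentor that can be activated explicitly, although the opentelemetry-instrument wrapper used for auto-instrumentation enables it automatically once the package is installed.

# Sketch only: explicit activation of the system-metrics instrumentation.
# When the service runs under opentelemetry-instrument, this call is not needed.
from opentelemetry.instrumentation.system_metrics import SystemMetricsInstrumentor

SystemMetricsInstrumentor().instrument()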

@avillela avillela marked this pull request as ready for review October 11, 2022 20:27
@avillela avillela requested a review from a team as a code owner October 11, 2022 20:27
@cartersocha (Contributor)

Please update the metric features doc under the docs folder to reflect the added feature too 😃

@cartersocha (Contributor)

I'm getting an odd error. Cursory google searching doesn't show a clear answer.

#0 13.82 × Encountered error while trying to install package.
#0 13.82 ╰─> psutil

I installed the package but am still getting an error. Are there any special dependency requirements?
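For context: psutil compiles a C extension when no prebuilt wheel matches the image platform, so one common cause of this failure on slim base images is a missing compiler toolchain. A sketch of a possible fix, assuming a Debian-slim base image (the package names are assumptions, not a confirmed diagnosis for this error):

# Dockerfile excerpt (sketch; assumes a Debian-slim base image)
RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc python3-dev && \
    pip install -r requirements.txt && \
    rm -rf /var/lib/apt/lists/*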

@julianocosta89 (Member) left a review comment:

I'm still not a heavy user of metrics, so I'm not sure how best to handle the naming.
I see here that we have cpu_usage and ram_usage.
Should we add an identifier to those metrics?

Just wondering how that will scale once we have 10 services with the same metric name.
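For context on the naming question: OpenTelemetry usually distinguishes identically named metrics from different services via resource attributes such as service.name rather than by prefixing the metric name, so a generic name like cpu_usage stays comparable across services. A sketch under that assumption:

# Sketch: service.name on the Resource disambiguates metrics across services,
# so the instrument name itself can stay generic.
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.resources import Resource

resource = Resource.create({"service.name": "recommendationservice"})
provider = MeterProvider(resource=resource)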

Inline review comments (resolved) on: docker-compose.yml, src/recommendationservice/metrics.py, CHANGELOG.md, src/recommendationservice/requirements.txt
@mviitane (Member)

I think traces are not working with this PR without adding:
OTEL_TRACES_EXPORTER to docker-compose.yml

@fatsheep9146 (Contributor) commented Oct 12, 2022

I'm a little confused about why we aren't using the Python instrumentation library to get system metrics such as CPU usage.

I'm not very familiar with Python, so maybe my confusion is unfounded; feel free to correct me. But what I really want to say is: when it comes to adding new metrics or traces, we should first consider using an instrumentation library instead of creating one manually.
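To illustrate the distinction being made here (a sketch, not code from this PR): the manual approach hand-rolls an observable instrument around psutil, whereas the instrumentation library reports equivalent data with no per-service metric code.

# Manual approach (roughly what the PR removes): a hand-rolled CPU gauge.
import psutil
from opentelemetry import metrics

meter = metrics.get_meter("recommendationservice")

def cpu_usage_callback(options):
    # Hypothetical callback name; reports process-wide CPU utilization.
    yield metrics.Observation(psutil.cpu_percent())

meter.create_observable_gauge("cpu_usage", callbacks=[cpu_usage_callback])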

@reyang (Member) commented Oct 12, 2022

But what I really want to say is: when it comes to adding new metrics or traces, we should first consider using an instrumentation library instead of creating one manually.

💯

@avillela (Contributor, Author)

I'm a little confused about why we aren't using the Python instrumentation library to get system metrics such as CPU usage.

I'm not very familiar with Python, so maybe my confusion is unfounded; feel free to correct me. But what I really want to say is: when it comes to adding new metrics or traces, we should first consider using an instrumentation library instead of creating one manually.

That was me just messing around with metrics and trying to understand them, and I forgot to remove the unnecessary stuff.

@avillela (Contributor, Author)

I think traces are not working with this PR without adding:
OTEL_TRACES_EXPORTER to docker-compose.yml

I was able to send traces without issue...

@avillela (Contributor, Author)

I'm still not a heavy user of metrics, so I'm not sure how best to handle the naming. I see here that we have cpu_usage and ram_usage. Should we add an identifier to those metrics?

Just wondering how that will scale once we have 10 services with the same metric name.

That was me messing around while learning metrics. I'd forgotten to remove these.

@avillela avillela requested review from mviitane and julianocosta89 and removed request for mviitane October 12, 2022 18:39
@julianocosta89 (Member) left a review comment:

Great one!

@mviitane (Member) commented Oct 13, 2022

I think traces are not working with this PR without adding:
OTEL_TRACES_EXPORTER to docker-compose.yml

I was able to send traces without issue...

OK, then it was something in my environment. It seems otlp is the default.

I actually think it makes sense to set both of these env variables, or neither of them (just for consistency; I believe the end result will be the same):

OTEL_TRACES_EXPORTER=otlp
OTEL_METRICS_EXPORTER=otlp

@avillela (Contributor, Author)

I think traces are not working with this PR without adding:
OTEL_TRACES_EXPORTER to docker-compose.yml

I was able to send traces without issue...

OK, then it was something in my environment. It seems otlp is the default.

I actually think it makes sense to set both of these env variables, or neither of them (just for consistency; I believe the end result will be the same):

OTEL_TRACES_EXPORTER=otlp
OTEL_METRICS_EXPORTER=otlp

Yes, good point. That was kinda bugging me too. :) Updated.
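In docker-compose.yml terms, the updated service entry would look roughly like the following sketch; the collector endpoint shown is an assumption rather than the exact value in the repository.

# docker-compose.yml excerpt (sketch)
recommendationservice:
  environment:
    - OTEL_TRACES_EXPORTER=otlp
    - OTEL_METRICS_EXPORTER=otlp
    - OTEL_EXPORTER_OTLP_ENDPOINT=http://otelcol:4317
    - OTEL_SERVICE_NAME=recommendationservice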

@avillela avillela requested review from fatsheep9146 and mviitane and removed request for mviitane and fatsheep9146 October 13, 2022 14:41
@mviitane (Member) left a review comment:

LGTM

@fatsheep9146 (Contributor)

https://github.com/open-telemetry/opentelemetry-demo/blob/main/docs/metric_service_features.md

I think this file should also be updated, since the Recommendation service auto-instrumentation is done.

@avillela (Contributor, Author) commented Oct 13, 2022

https://github.com/open-telemetry/opentelemetry-demo/blob/main/docs/metric_service_features.md

I think this file should also be updated, since the Recommendation service auto-instrumentation is done.

Added @fatsheep9146

Inline review comments (resolved) on docs/services/recommendationservice.md
Juliano Costa and others added 4 commits October 13, 2022 22:20
@avillela avillela requested review from julianocosta89 and removed request for fatsheep9146 October 13, 2022 20:27
Inline review comment (resolved) on docs/services/recommendationservice.md
@austinlparker (Member)

Is the requested change actually blocking anything? This needs to merge today to get into 0.6.

Development

Successfully merging this pull request may close these issues.

Review and enhance metric support for recommendation service (Python)
7 participants