
Community feedback: what else should the GC be doing? #1972

Open
jpkrohling opened this issue Feb 28, 2024 · 3 comments
@jpkrohling
Member

jpkrohling commented Feb 28, 2024

The OpenTelemetry Governance Committee (@open-telemetry/governance-committee) has been reflecting on its role within the OpenTelemetry project, making explicit some of the assumptions that have so far been implicit. During that exercise, we identified responsibilities such as project management, establishing an overall roadmap, and ensuring we have a healthy pool of contributors. Those clarifications were implemented as part of #1932.

We'd also like to hear from the OpenTelemetry community, including contributors, maintainers, users, and vendors, what they'd like to see the Governance Committee doing.

In your opinion, what should the Governance Committee be doing in addition to what we already have in our charter?

Leave your comments here until 31 March 2024. We'll evaluate the answers and report back on which changes we'll incorporate as a result.

@cartermp
Contributor

I'd like to see an effort to advance the project to graduated status. Now that the initial vision has been delivered and OTel is seeing broad adoption across the industry, I think it's time to graduate from incubation and signal to the early and late majority markets that this is technology you can rely on.

@woody1872

woody1872 commented Feb 28, 2024

I think enforcing minimum documentation standards across all components in the ecosystem is worth a mention. There is a lot of really great documentation already, but also some not-so-great examples. We have a fair amount of inconsistency here, and a lot of "check the comments in the source code" type problems, which not everyone is going to be comfortable with.

As an end user, or as someone advocating for the use of OpenTelemetry, running into the latter can be really challenging. Great documentation is critical for adoption and support, and getting it right consistently across the ecosystem of components is very important.

@jiekun
Member

jiekun commented Feb 29, 2024

This may be too detailed, but I still want to mention it here. We already have a testbed that covers some components, but evaluating resource usage is an important task for end users.

Many components still lack reference data. We have a large number of receivers, processors, connectors, and exporters: how many resources does each require? For example, how much CPU and memory are needed to add a few processors to an existing setup?

Typically, the evaluation needs to be based on the actual workload, so users inevitably have to run their own performance tests. Still, baseline data would help: if 10k samples per second (sps) require 1 CPU and 1 GB of memory, then for my collector cluster handling 600k sps I can start with 60 CPUs and 60 GB of memory and adjust from there.
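To make that extrapolation concrete, here is a minimal sketch of the linear sizing rule; the 10k sps / 1 CPU / 1 GB baseline is the hypothetical figure from above, not a measured result:

```go
package main

import "fmt"

// Hypothetical baseline for a single collector instance; these numbers
// are illustrative, not real benchmark results.
const (
	baselineSPS   = 10_000 // samples per second at the baseline
	baselineCPU   = 1.0    // CPUs used at the baseline rate
	baselineMemGB = 1.0    // memory (GB) used at the baseline rate
)

// estimate linearly extrapolates resource needs from the baseline.
// Real workloads rarely scale perfectly linearly, so treat the result
// as a starting point for load testing, not a guarantee.
func estimate(targetSPS float64) (cpus, memGB float64) {
	factor := targetSPS / baselineSPS
	return baselineCPU * factor, baselineMemGB * factor
}

func main() {
	cpus, mem := estimate(600_000)
	fmt.Printf("start with ~%.0f CPUs and ~%.0f GB of memory\n", cpus, mem)
}
```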

I hope someone (not necessarily from the GC, as these are not directional issues for the whole project/community) can plan to improve performance evaluation, at least for the collector, so that each component has reference performance data. That way, users can tune from a known foundation rather than starting from scratch.

Edit: Ideally, we would have a reference table like the one below (an exact table like this isn't really possible :) but we could still provide other useful forms of reference).

| Component | CPU | Memory (MiB) |
| --- | --- | --- |
| attributesprocessor | 0.5 | 256 |
| cumulativetodeltaprocessor | 0.5 | 1024 |
| deltatocumulativeprocessor | 0.5 | 256 |
| deltatorateprocessor | 1 | 256 |
| filterprocessor | 0.5 | 512 |
| groupbyattrsprocessor | 0.5 | 256 |
| groupbytraceprocessor | 1 | 1024 |
| intervalprocessor | 1 | 256 |
| k8sattributesprocessor | 2 | 1024 |
| logstransformprocessor | 1 | 256 |
| metricsgenerationprocessor | 0.5 | 256 |
| metricstransformprocessor | 0.5 | 512 |
| probabilisticsamplerprocessor | 2 | 256 |
| redactionprocessor | 1 | 1024 |
| remotetapprocessor | 0.5 | 256 |
| resourcedetectionprocessor | 1 | 512 |
| apachesparkreceiver | 0.5 | 256 |
| awscloudwatchmetricsreceiver | 0.5 | 1024 |
| awscloudwatchreceiver | 0.5 | 256 |
| awscontainerinsightreceiver | 1 | 256 |
| awsecscontainermetricsreceiver | 0.5 | 512 |
| awsfirehosereceiver | 0.5 | 256 |
| awsxrayreceiver | 1 | 1024 |
| azureblobreceiver | 1 | 256 |
| azureeventhubreceiver | 2 | 1024 |
| azuremonitorreceiver | 1 | 256 |
| bigipreceiver | 0.5 | 256 |
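A table like this would also compose: a first-cut estimate for a whole pipeline is just the sum of the per-component figures. A minimal sketch in Go, using a few rows from the (illustrative, not measured) table above:

```go
package main

import "fmt"

// componentCost holds hypothetical per-component reference figures
// (CPU cores and memory in MiB), mirroring the table above.
type componentCost struct {
	cpu    float64
	memMiB int
}

// reference is a tiny illustrative subset of the table; none of these
// numbers are real measurements.
var reference = map[string]componentCost{
	"attributesprocessor":    {cpu: 0.5, memMiB: 256},
	"filterprocessor":        {cpu: 0.5, memMiB: 512},
	"k8sattributesprocessor": {cpu: 2, memMiB: 1024},
}

// pipelineEstimate sums the reference costs of the listed components
// to produce a first-cut sizing number for the pipeline.
func pipelineEstimate(components []string) (cpu float64, memMiB int) {
	for _, name := range components {
		cost, ok := reference[name]
		if !ok {
			continue // no reference data for this component; skip it
		}
		cpu += cost.cpu
		memMiB += cost.memMiB
	}
	return cpu, memMiB
}

func main() {
	cpu, mem := pipelineEstimate([]string{
		"k8sattributesprocessor", "filterprocessor", "attributesprocessor",
	})
	fmt.Printf("pipeline baseline: %.1f CPU, %d MiB\n", cpu, mem)
}
```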
