
Remove Meter.RecordBatch functionality, not specified in the specs #2449

Closed
bogdandrutu wants to merge 1 commit into main from rmbatchrecord

Conversation

@bogdandrutu (Member) commented Dec 13, 2021

Updates #2462

Signed-off-by: Bogdan Drutu bogdandrutu@gmail.com


codecov bot commented Dec 13, 2021

Codecov Report

Merging #2449 (d5f3b92) into main (43daab9) will decrease coverage by 0.2%.
The diff coverage is n/a.

Impacted file tree graph

@@           Coverage Diff           @@
##            main   #2449     +/-   ##
=======================================
- Coverage   76.3%   76.1%   -0.3%     
=======================================
  Files        173     173             
  Lines      12010   11939     -71     
=======================================
- Hits        9168    9089     -79     
- Misses      2597    2604      +7     
- Partials     245     246      +1     
| Impacted Files | Coverage Δ | |
|---|---|---|
| internal/metric/global/meter.go | 89.0% <ø> (-0.4%) | ⬇️ |
| internal/metric/registry/registry.go | 86.4% <ø> (-0.5%) | ⬇️ |
| metric/metric.go | 55.0% <ø> (+0.3%) | ⬆️ |
| metric/metric_instrument.go | 87.6% <ø> (-1.6%) | ⬇️ |
| metric/metrictest/meter.go | 92.1% <ø> (-2.3%) | ⬇️ |
| metric/sdkapi/sdkapi.go | 100.0% <ø> (ø) | |
| sdk/metric/sdk.go | 74.5% <ø> (-5.6%) | ⬇️ |
| exporters/jaeger/jaeger.go | 92.6% <0.0%> (-0.9%) | ⬇️ |

... and 1 more

@bogdandrutu bogdandrutu force-pushed the rmbatchrecord branch 2 times, most recently from d10e2c3 to 2c3f448 on December 13, 2021 23:32
@bogdandrutu bogdandrutu added the area:metrics label (Part of OpenTelemetry Metrics) on Dec 14, 2021
Signed-off-by: Bogdan Drutu <bogdandrutu@gmail.com>
@MadVikingGod (Contributor) left a comment

Should we also give BatchObserver the same treatment?

Comment on lines -318 to -335
```go
func benchmarkBatchRecord8Labels(b *testing.B, numInst int) {
	const numLabels = 8
	ctx := context.Background()
	fix := newFixture(b)
	labs := makeLabels(numLabels)
	var meas []sdkapi.Measurement

	for i := 0; i < numInst; i++ {
		inst := fix.meterMust().NewInt64Counter(fmt.Sprintf("int64.%d.sum", i))
		meas = append(meas, inst.Measurement(1))
	}

	b.ResetTimer()

	for i := 0; i < b.N; i++ {
		fix.accumulator.RecordBatch(ctx, labs, meas...)
	}
}
```
Contributor

It looks like RecordBatch was added as a form of optimization. The alternatives seem to be calling Record N times on different synchronous instruments, or observing N async instruments in a BatchObserver.

Could we please get a replacement benchmark to show the performance difference for these cases?
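
A rough sketch of what such a replacement benchmark could look like, reusing the fixture helpers from the deleted code and assuming each Int64Counter exposes an Add(ctx, value, labels...) method as in the pre-1.0 metric API (the function name and exact signatures here are illustrative, not taken from this PR):

```go
// Hypothetical counterpart to benchmarkBatchRecord8Labels: record into N
// instruments one Add call at a time instead of batching the measurements.
func benchmarkRepeatedRecord8Labels(b *testing.B, numInst int) {
	const numLabels = 8
	ctx := context.Background()
	fix := newFixture(b)
	labs := makeLabels(numLabels)
	var insts []metric.Int64Counter

	for i := 0; i < numInst; i++ {
		insts = append(insts, fix.meterMust().NewInt64Counter(fmt.Sprintf("int64.%d.sum", i)))
	}

	b.ResetTimer()

	for i := 0; i < b.N; i++ {
		// One call per instrument; the label slice is reused across calls.
		for _, inst := range insts {
			inst.Add(ctx, 1, labs...)
		}
	}
}
```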

Member Author

As long as it is a major part of the API (a new method/function on Meter plus a new method/function on all the instruments), it needs to be in the specification for us to be compliant, so I am not sure why I need to do this.

Member Author

Also, I will send a separate PR for the BatchObserver.

Contributor

I understand the reason why we might not want to have to support this specific API, but there is a use case here, specifically: "As a user, I would like to record a number of metrics at once." I just want to make sure that the performance of this use case hasn't degraded.

Member

Beyond simply "record a bunch of metrics at once", it's also "record a bunch of metrics that come from an expensive operation without performing that operation once per metric".

I think that, regardless of what is in the spec, we have agreed that SIGs can choose to have more API surface than required, and this may be a situation where we choose to do so. The decision to add this capability was not taken lightly, and I don't think the decision to remove it should be either. I'd like to have this discussed at the Go SIG, and I definitely want input from @jmacd, who helped define the initial need for this capability and its implementation.
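
For the "expensive operation" case, a minimal sketch of how that pattern can look without RecordBatch: the expensive collection still runs once, and only the recording becomes per-instrument calls. Everything here (processStats, readStats, the instrument parameters) is illustrative and assumes the pre-1.0 Add(ctx, value, labels...) signature, not code from this repository:

```go
import (
	"context"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

// processStats is a hypothetical result of one expensive collection pass.
type processStats struct {
	CPUSeconds int64
	AllocBytes int64
}

// recordProcessStats performs the expensive collection once and records every
// derived value, instead of re-running the collection once per metric.
func recordProcessStats(ctx context.Context, readStats func() processStats, cpuTime, allocBytes metric.Int64Counter, labels []attribute.KeyValue) {
	stats := readStats() // expensive operation, executed a single time

	cpuTime.Add(ctx, stats.CPUSeconds, labels...)
	allocBytes.Add(ctx, stats.AllocBytes, labels...)
}
```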

@bogdandrutu (Member Author)

> Should we also give BatchObserver the same treatment?

@MadVikingGod I will in a followup PR. One change per PR to make reviews easier.

@MadVikingGod (Contributor)

> I will in a followup PR. One change per PR to make reviews easier.

👍 for this approach.
It would be helpful, when reviewing these different PRs, to know what the end goal is.

@Aneurysm9 (Member)

> I will in a followup PR. One change per PR to make reviews easier.

> 👍 for this approach. It would be helpful, when reviewing these different PRs, to know what the end goal is.

Agreed. It is difficult for us to evaluate changes presented in isolation and without any discussion of what they are attempting to achieve, why, and how they relate to other pending changes.

@bogdandrutu (Member Author)

@Aneurysm9 I think this change is self-descriptive: it removes functionality that is not defined in the specification. What more context do you need? I fail to understand, since this does not add a "partial" implementation or anything. A fair question is "why just this?" The answer is that this is the first API I saw that was not specified.

@bogdandrutu (Member Author)

@Aneurysm9 you have the issue; does that explain the motivation?

@bogdandrutu (Member Author)

@MadVikingGod can you provide a clear comment on how to unblock this? Since we discussed the follow-up for BatchObserver, is anything else blocking this?

@MadVikingGod (Contributor)

There are two things that I would want.

  1. Evidence that there isn't a performance degradation when using repeated RecordOne (or another option) instead of the current RecordBatch, including some documentation on how people already using this API can keep doing what they are doing (see the sketch after this list). We can break them, as this is not 1.0 yet, but we should still tell them how.
  2. Some reasoning on why this isn't needed any longer. As was pointed out in other PRs, "it's not in the spec" is not sufficient for removal, as we are free to define methods beyond what is specified in the spec.
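
For item 1, a minimal sketch of what that migration documentation could show, assuming the pre-1.0 synchronous instruments with an Add(ctx, value, labels...) method; meter, requestCount, requestBytes, labels, and msgSize are illustrative names, not code from this repository:

```go
// Before this change: one batched call records a measurement per instrument.
meter.RecordBatch(ctx, labels,
	requestCount.Measurement(1),
	requestBytes.Measurement(msgSize),
)

// After this change: record on each instrument individually, reusing the same
// context and label set. The instrument definitions themselves are unchanged.
requestCount.Add(ctx, 1, labels...)
requestBytes.Add(ctx, msgSize, labels...)
```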

@bogdandrutu (Member Author) commented Dec 18, 2021

> Some reasoning on why this isn't needed any longer. As was pointed out in other PRs, "it's not in the spec" is not sufficient for removal, as we are free to define methods beyond what is specified in the spec.

I disagree with this: under OpenTelemetry SIG projects we should not start adding random APIs that may block future progress of the project. I would like to ask the @open-telemetry/technical-committee about this, because I think this is misunderstood. OpenTelemetry does want the SIGs to model the APIs (abstractions, concepts) that the specs define in a language-specific way, but I don't believe that OpenTelemetry allows SIGs to define new abstractions/concepts.

@tigrannajaryan (Member)

> I disagree with this: under OpenTelemetry SIG projects we should not start adding random APIs that may block future progress of the project. I would like to ask the @open-telemetry/technical-committee about this, because I think this is misunderstood. OpenTelemetry does want the SIGs to model the APIs (abstractions, concepts) that the specs define in a language-specific way, but I don't believe that OpenTelemetry allows SIGs to define new abstractions/concepts.

There is a fine line to walk here: we don't want language implementations to end up polluting the API namespace in a way that makes future additions to the spec difficult or impossible to implement in that language, but we also don't want to prevent implementations from making necessary language-specific improvements and optimizations.

This largely depends on what we consider polluting the API and what we consider necessary improvements. Ideally we want clear guidelines around this, although I suspect in many situations it will need to be decided on a case-by-case basis and is going to be fairly subjective.

@jsuereth

Not sure it helps this discussion much, but we're ALSO looking at prototyping batch recording in the Java SIG, as it's a much-asked-for feature. I suspect it'll be the next top Metrics API discussion post SDK stability. I think we should find ways for SIGs to prototype APIs users need prior to specifying them. We're seeing a lot of issues from the lack of early prototypes in the Metrics SDK specification right now, especially around the View API.

@bogdandrutu (Member Author) commented Jan 4, 2022

@jsuereth Having it as a prototype is fine, but it needs to be marked accordingly (comments, package name, etc.). But as you said, Java is looking to do a prototype; if this ends up completely different in Java, that is probably not good, as we aim for consistent abstractions/semantics across languages.

@MrAlias (Contributor) commented Feb 4, 2022

Is this superseded by #2557?

cc @MadVikingGod

@MrAlias (Contributor) commented Mar 18, 2022

Closing due to this being based on the old API.

@MrAlias MrAlias closed this Mar 18, 2022
Labels
area:metrics (Part of OpenTelemetry Metrics)
7 participants