Fix sample_and_watermark_test.go for bad luck, repeated test #106325
Conversation
@@ -95,9 +95,9 @@ func TestSampler(t *testing.T) {
 }

 /* getHistogramCount returns the count of the named histogram */
-func getHistogramCount(regs Registerables, metricName string) (int64, error) {
+func getHistogramCount(registry compbasemetrics.KubeRegistry, metricName string, allowNotFound bool) (int64, error) {
I'd actually change this so that allowNotFound is not a parameter on the function, and return 0, nil as the default fallback behavior. My reasons:
- cleaner function signature
- this is a private function which seems to be used exactly once in this file
- this is quite possibly just actually the correct behavior.
The force-push to 9ec70d16b11 simplified that function without losing that bit of information if the test ever does fail that way.
Force-pushed from 3b2cca3 to 9ec70d1
/retest
 		}
 		return int64(*hist.SampleCount), nil
 	}
-	return 0, fmt.Errorf("not found, considered=%#+v", considered)
+	return 0, errMetricNotFound
Why return an error? If it's not found, then 0 is the correct count.
Because there are two ways to get zero: metric not found, or metric found and contains zero. No need to lose that distinction.
I actually don't think the metric exists until it gets written to...
Yes, that is part of what is expected.
My point here is: let's not needlessly discard a bit of information about why the get method returned zero, since the point of a test is to not assume that everything goes as expected. If that get method ever returns a zero when zero is not what's expected, it can be helpful to have a bit of explanation of why it returned zero.
If a metric can't exist until it's written to, then the error condition is actually not here; it exists around line 119. If int64(*hist.SampleCount) is equal to zero, that is a condition we do not expect, and it should be an error.
I am having trouble parsing "... is actually not here, it exists ...".
This is a behavioral unit test of the sample-and-watermark histograms, including their underlying machinery. While we developers expect that the HistogramVec has no metrics before it is written, the point of a behavioral unit test is to not assume more than is necessary. The current revision of this PR can distinguish between different pathologies that lead to an unexpected zero. That seems better to me than not helping to identify what went wrong, in the case of an unexpected zero.
While we developers expect that the HistogramVec has no metrics before it is written...
This is a reasonable expectation given that this is how underlying Prometheus implementation actually works.
I am saying this is how it should look:
for _, mf := range mfs {
	thisName := mf.GetName()
	if thisName != metricName {
		continue
	}
	metric := mf.GetMetric()[0]
	hist := metric.GetHistogram()
	if hist == nil {
		return 0, errors.New("dto.Metric has nil Histogram")
	}
	if hist.SampleCount == nil {
		return 0, errors.New("dto.Histogram has nil SampleCount")
	}
	count := int64(*hist.SampleCount)
	if count == 0 {
		return 0, errors.New("we should never have a 0 samplecount here")
	}
	return count, nil
}
return 0, nil
I think it is unnecessarily specific for this client of the Prometheus Go library to insist that a HistogramVec whose label slice is empty start out in a state where the suggested code in the previous comment executes the return 0, nil statement. Remember that calling NewHistogram produces a Histogram with a sample count of zero, so such a thing is perfectly fine, semantically. A HistogramVec whose label slice is empty can only ever have one Histogram in it. If the HistogramVec implementation were to choose to create the only possible Histogram in this case eagerly, who cares? Maybe somebody with other Prometheus use cases in mind, but I do not think that clients of sample-and-watermark histograms would care.
/test pull-kubernetes-kubemark-e2e-gce-scale
/triage accepted
/retest
/test pull-kubernetes-kubemark-e2e-gce-scale
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: logicalhan, MikeSpreitzer. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@MikeSpreitzer: The following test failed, say
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Heh... the test was triggered exactly when kubemark got broken by CSI migration for ~2h yesterday. /test pull-kubernetes-kubemark-e2e-gce-scale
Maybe this is not the best vehicle for testing CI; I made #106413 for that purpose.
I think that force-pushing the commit (e.g. rebasing on master) should do the job.
Force-pushed from 9ec70d1 to 06e1716
The force-push to 06e1716 is a rebase onto master, for the purpose of canceling the request for the pull-kubernetes-kubemark-e2e-gce-scale job.
/lgtm
What type of PR is this?
/kind bug
/kind failing-test
What this PR does / why we need it:
This PR fixes the unit test for sample-and-watermark histograms in two ways.
First, it makes the test correctly handle the cases where the first step of the fake clock does not cross the sampling threshold.
Second, it makes the test work correctly if it is repeated in the same process, even when multiple invocations run concurrently in the same process.
This is part of the campaign to make unit tests safe for repetition (#104940).
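The PR's actual repetition-safety mechanism isn't shown in this excerpt, but a common way to make such a test safe for repeated and concurrent runs is to give each invocation a unique metric name, so runs never collide in a shared global registry. A hypothetical stdlib-only sketch of that idea (the function name and counter are illustrative, not from the PR):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// testInstanceCounter hands out a distinct suffix per test invocation.
// atomic.AddInt64 makes it safe for concurrent invocations in one process.
var testInstanceCounter int64

// uniqueMetricName derives a fresh metric name from a base name, so a
// repeated test registers a new metric instead of colliding with the
// metric left behind by an earlier run.
func uniqueMetricName(base string) string {
	n := atomic.AddInt64(&testInstanceCounter, 1)
	return fmt.Sprintf("%s_%d", base, n)
}

func main() {
	// Two "invocations" of the same test get two distinct metric names.
	fmt.Println(uniqueMetricName("sample_and_watermark"))
	fmt.Println(uniqueMetricName("sample_and_watermark"))
}
```

With per-invocation names, assertions like "this histogram has exactly N samples" stay valid no matter how many times the test has already run in the process.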
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
The second half of this is a simpler alternative to #105886.
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: