custom dashboard discovery - only look for discoverOn metrics #4367
Conversation
(Force-pushed 916d850 → 63d90b3)
Here is one implementation - it uses a set of "OR" conditions in the Prometheus query. I am documenting it here because I might change this implementation to something different, and I want a record of this approach in case I want to revert back at some point in the future (perhaps we might find this is more efficient):

```go
// you can use "count" here instead of "sum" for possibly even more goodness
queryString := fmt.Sprintf("sum(%v%v) by (__name__)", metricNames[0], labelQueryString)
for i := 1; i < len(metricNames); i++ {
	queryString = fmt.Sprintf("%v OR sum(%v%v) by (__name__)", queryString, metricNames[i], labelQueryString)
}

results, warnings, err := in.api.Query(in.ctx, queryString, time.Now())
if len(warnings) > 0 {
	log.Warningf("GetMetricsForLabels. Prometheus Warnings: [%s]", strings.Join(warnings, ","))
}
if err != nil {
	return nil, errors.NewServiceUnavailable(err.Error())
}

// We should only get one time series for each metric family name. However, just in
// case we get duplicates, store the metric names in a map and then convert to a
// slice, which removes the duplicates.
namesMap := make(map[string]bool)
for _, item := range results.(model.Vector) {
	namesMap[string(item.Metric["__name__"])] = true
}
names := make([]string, 0, len(namesMap))
for n := range namesMap {
	names = append(names, n)
}
return names, nil
```
(Force-pushed 75ea80e → 42ba2dc)
- …row per metric family name that exists
- … dynamic generation of OR conditions).
- …ooking for, and loop over the results) - this will allow us to create a smaller map (most likely the results will be much larger than the list of metrics we are looking for)
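The last point above can be sketched roughly as follows. This is a standalone illustration, not the PR's code: the helper name and the plain-string result type are mine (the real code iterates a Prometheus `model.Vector`), but it shows the idea of keying the map by the small list of metrics we are looking for rather than by the much larger result set:

```go
package main

import "fmt"

// filterToWantedMetrics keeps only the metric family names we asked for,
// de-duplicating as it goes. The lookup map is built from the (small) list
// of wanted metrics, not from the (large) query results.
func filterToWantedMetrics(resultNames []string, wanted []string) []string {
	metricsWeAreLookingFor := make(map[string]bool, len(wanted))
	for _, m := range wanted {
		metricsWeAreLookingFor[m] = true
	}
	seen := make(map[string]bool)
	names := make([]string, 0, len(wanted))
	for _, n := range resultNames {
		if metricsWeAreLookingFor[n] && !seen[n] {
			seen[n] = true
			names = append(names, n)
		}
	}
	return names
}

func main() {
	results := []string{"jvm_threads_live", "http_requests_total", "jvm_threads_live"}
	wanted := []string{"jvm_threads_live", "vertx_http_server_connections"}
	fmt.Println(filterToWantedMetrics(results, wanted)) // only the wanted, de-duplicated names survive
}
```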
(Force-pushed 42ba2dc → e0f3d7a)
… we released this out in the wild and we need to know if this is a bottleneck

This is ready enough for people to review.
LGTM with a few non-blocking comments
```go
for i := 0; i < len(metricNames); i++ {
	metricsWeAreLookingFor[metricNames[i]] = true
}
```
Fine, but maybe nicer as:

```go
for _, m := range metricNames {
	metricsWeAreLookingFor[m] = true
}
```
I built and published a test image based on this PR - if someone wants to test it, use this image:
I see comments on the code, but I don't see screenshots testing the feature.
There are no screenshots to show because nothing will look different from how things look today. This just performs a different query that I hope makes things faster; the UI will look identical to how it looks today.
Has this work been tested against https://github.com/kiali/demos/tree/master/runtimes-demo?
Built and deployed the server with this PR (commit 0f39e20, HEAD -> dashboard-discovery-3704, jmazzitelli/dashboard-discovery-3704) and installed the runtimes-demo. Here are all the applications and workloads - you can see all the dashboards are discovered correctly:
fixes: #3704

This works around the problem where api.Series is called during dashboard discovery, which can return a huge amount of data. We want to look only for specific metric names - the metrics listed as "discoverOn" metrics in the dashboards. Those are the only ones we care about.
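A minimal sketch of the query-building idea behind this change, restricted to a known list of "discoverOn" metric names. The function name and standalone shape here are mine, not the PR's; it mirrors the OR-of-aggregations form shown earlier in the thread:

```go
package main

import (
	"fmt"
	"strings"
)

// buildDiscoverOnQuery builds a PromQL query that touches only the named
// "discoverOn" metrics. Each metric family is aggregated separately and the
// pieces are joined with OR, so a single query returns at most one series
// per family. labelQuery is a label matcher suffix, e.g. `{namespace="test"}`.
func buildDiscoverOnQuery(metricNames []string, labelQuery string) string {
	if len(metricNames) == 0 {
		return ""
	}
	parts := make([]string, 0, len(metricNames))
	for _, m := range metricNames {
		parts = append(parts, fmt.Sprintf("sum(%s%s) by (__name__)", m, labelQuery))
	}
	return strings.Join(parts, " OR ")
}

func main() {
	q := buildDiscoverOnQuery(
		[]string{"jvm_threads_live", "process_cpu_usage"},
		`{namespace="test"}`,
	)
	fmt.Println(q)
	// prints: sum(jvm_threads_live{namespace="test"}) by (__name__) OR sum(process_cpu_usage{namespace="test"}) by (__name__)
}
```

Because the query names each metric family explicitly, Prometheus never has to enumerate every series the way an unrestricted api.Series call can.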