[GSoC] KEP for Project 6: Push-based Metrics Collection for Katib #2328
base: master
Conversation
Signed-off-by: Electronic-Waste <2690692950@qq.com>
[APPROVALNOTIFIER] This PR is **NOT APPROVED**

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files.

Approvers can indicate their approval by writing `/approve` in a comment.
/area gsoc
Thank you for this @Electronic-Waste!
I left a few comments.
/assign @kubeflow/wg-training-leads
In the procedure of tuning hyperparameters, Metrics Collector, which is implemented as a sidecar container attached to each training container in the [current design](https://github.com/kubeflow/katib/blob/master/docs/proposals/metrics-collector.md), will collect training logs from Trials once the training is complete. Then, the Metrics Collector will parse training logs to get appropriate metrics like accuracy or loss and pass the evaluation results to the HyperParameter tuning algorithm.
However, current implementation of Metrics Collector is pull-based, raising some [design problems](https://github.com/kubeflow/training-operator/issues/722#issuecomment-405669269) such as determining the frequency we scrape the metrics, performance issues like the overhead caused by too many sidecar containers, and restrictions on developing environments which must support sidecar containers. Thus, we should implement a new API for Katib Python SDK to offer users a push-based way to store metrics directly into the Kaitb DB and resolve those issues raised by pull-based metrics collection.
Suggested change: `Kaitb DB` → `Katib DB`
Yeah, this is a spelling mistake...
![](../images/push-based-metrics-collection.png)
Fig.1 Architecture of the new design
## Goal
It would be nice to add Non-Goals for this project.
For example:
Implement authentication model for Katib DB to push metrics.
Okay, I'll add non-goals to this KEP.
## Goal
1. **A new parameter in Python SDK function `tune`**: allow users to specify the method of collecting metrics (push-based/pull-based).
2. **A code injection function in mutating webhook**: recognize the metrics output lines and replace them with push-based metrics collection code.
Do we need to perform any mutation if it is push-based?
### New Parameter in Python SDK Function `tune`
We decided to add a `metrics_collection_mechanism` parameter to the `tune` function in the Python SDK.
What do you think about the parameter `metrics_collector_config`, to be consistent with Experiment APIs and support other metrics collector configurations in the `tune` API in the future?
Initially, we can just support `kind`:

```python
tune(
    metrics_collector_config = {
        "kind": "None"
    }
)
```
I agree with you. It can be extensible if we use dict as parameter type.
I think we should also discuss if we should rename the Metrics Collector from `None` to `Push` or any other name, since it is not correct to call this metrics collector `None`. cc @gaocegege
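To make the shape of this parameter concrete, here is a minimal sketch of how `tune` could interpret a `metrics_collector_config` dict. The function body and the `"Push"` alias are assumptions for illustration only, not the final SDK API; `StdOut` is used as the fallback because it is Katib's current default collector kind.

```python
# Illustrative sketch only: how tune() could interpret a
# metrics_collector_config dict. The "Push" alias and the return shape
# are assumptions from this review thread, not a released API.
def tune(name, metrics_collector_config=None, **kwargs):
    # Fall back to Katib's current default collector kind.
    config = metrics_collector_config or {"kind": "StdOut"}
    kind = config["kind"]
    # A push-based kind means no sidecar container is injected; the
    # training code reports metrics to the Katib DB itself.
    inject_sidecar = kind not in ("None", "Push")
    return {"collector_kind": kind, "inject_sidecar": inject_sidecar}

# Usage: disable the pull-based sidecar for this experiment.
spec = tune("demo-exp", metrics_collector_config={"kind": "None"})
```

Using a dict here keeps the parameter extensible: new collector configurations can be added later without changing the `tune` signature.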
## Implementation
It would be nice to explain what Katib controllers changes will be required. E.g. Trial controller should verify that metrics were reported to the Katib DB.
Do you mean that the Trial controller should receive a response like an HTTP StatusCode from the Katib DB and process it in the further steps?
I think we should just verify that the Trial controller will update the Trial observation correctly in case of the `None` Metrics Collector.
Okay, I get your idea. Do you mean that Trial controller will report an error if we do not change it to be compatible with push-based metrics collection?
Currently, the metrics collector inserts an `unavailable` value for each metric into the DB if that metric can't be found: https://github.com/kubeflow/katib/blob/master/pkg/metricscollector/v1beta1/file-metricscollector/file-metricscollector.go#L181-L193.
And since `isObservationAvailable` returns false, we mark the Trial with the Metrics Unavailable status. We need to see how it will work for the push-based Metrics Collector.
### Code Injection in Webhook
We decided to implement a code replacing function in the Experiment Mutating Webhook. When `spec.metricsCollectionSpec.collector.kind` is set to `NoneCollector`, the code replacing function will recognize the metrics output lines (e.g. print, log.Info, etc.) and replace them with push-based metrics collection code, which will be discussed in the next section. It's a better decision compared with offering users a `katib_client.push`-like interface, because users can't use a YAML file to define that operation.
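As a rough illustration of the line-replacement idea, a webhook-side rewriter could match simple metric print statements and swap them for a push call. The regex, the handled print format, and the `report_metrics` name are all hypothetical; real user code is far more varied than this single pattern.

```python
import re

# Illustrative only: rewrite a simple metric print line into a push call.
# The pattern, the handled f-string format, and the report_metrics name
# are hypothetical assumptions, not the actual webhook implementation.
METRIC_LINE = re.compile(r'print\(f"([\w-]+)=\{([\w.]+)\}"\)')

def rewrite_line(line: str) -> str:
    # Lines that do not look like a metric print pass through unchanged.
    return METRIC_LINE.sub(r'report_metrics({"\1": \2})', line)

rewritten = rewrite_line('print(f"accuracy={acc}")')
```

The narrowness of such patterns is the main risk of this approach: any metric output that deviates from the recognized forms would silently escape collection.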
I am not sure if we should replace user code here. I would suggest we implement it using `katib_client.report_metrics({"lr": 0.03})` or `katib_client.log_metrics({"lr": 0.01})`. That will allow users to explicitly report metrics to the Katib DB during training.
Also, we should always pass the `KATIB_TRIAL_NAME` env variable to the Katib Trials, so the training function knows where to report metrics in the Katib DB.
That's what I want to discuss with you in the upcoming weekly meeting.
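A minimal sketch of what such an explicit reporting helper could look like, assuming the `report_metrics` name and the `KATIB_TRIAL_NAME` environment variable from the comment above; the actual push to the Katib DB is stubbed out.

```python
import os

# Hypothetical helper: explicit, push-based metric reporting from training
# code. report_metrics and its signature are assumptions from this review
# thread, not a released Katib SDK API; the DB push itself is stubbed.
def report_metrics(metrics, trial_name=None):
    trial_name = trial_name or os.environ.get("KATIB_TRIAL_NAME")
    if trial_name is None:
        raise RuntimeError(
            "KATIB_TRIAL_NAME must be set so metrics reach the right Trial")
    # A real implementation would call the Katib DB Manager here.
    return {"trial": trial_name, "metrics": dict(metrics)}

# Usage inside a Trial: the controller/webhook would set the env variable.
os.environ["KATIB_TRIAL_NAME"] = "demo-trial"
record = report_metrics({"loss": 0.08})
```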
### Push-based Metrics Collection Code
The push-based metrics collection code is a function making a gRPC call to the persistent API to store training metrics. It will be injected into the container args in the Experiment Mutating Webhook and then be called inside the Trial Worker Pod to push metrics to the Katib DB.
It would be nice to explain the gRPC call that we need to make to report metrics to Katib DB.
Okay, I will explain it in detail in the next few commits.
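As a starting point for that explanation, the request could mirror the existing observation-log messages in the v1beta1 DB Manager API (`ReportObservationLog` with an `ObservationLog` of timestamped metric name/value pairs). The dict below only paraphrases those proto messages; the exact field names should be verified against `pkg/apis/manager/v1beta1/api.proto`.

```python
import datetime

# Sketch of the payload a push-based client could send to the Katib DB
# Manager's ReportObservationLog gRPC method. The message shape paraphrases
# the v1beta1 protos and should be checked against api.proto before use.
def build_report_request(trial_name, metrics):
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    return {
        "trial_name": trial_name,
        "observation_log": {
            "metric_logs": [
                # Katib stores metric values as strings.
                {"time_stamp": now, "metric": {"name": k, "value": str(v)}}
                for k, v in metrics.items()
            ]
        },
    }

req = build_report_request("demo-trial", {"accuracy": 0.91})
```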
What this PR does / why we need it:
This PR converts the GSoC Proposal into a KEP, which describes the details of implementing "Project 6: Push-based Metrics Collection for Katib". There are some related issues discussing this:
Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged): Fixes #
Checklist: