
🐛 Start the Cache if the Manager has already started #1681

Merged

Conversation


@jsanda jsanda commented Oct 4, 2021

Fixes #1673.

This is needed to better support multi-cluster controllers where you want to dynamically add Clusters after the Manager has already started and have their caches started and synced.
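To make the use case concrete, here is a minimal sketch of how a caller might add a second cluster to a Manager that is already running, assuming the cluster and manager packages from controller-runtime; addRemoteCluster and remoteCfg are illustrative names, not part of this PR.

```go
package example

import (
	"k8s.io/client-go/rest"

	"sigs.k8s.io/controller-runtime/pkg/cluster"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

// addRemoteCluster builds a cluster.Cluster for a remote API server and hands
// it to a Manager that is already running. cluster.Cluster exposes GetCache,
// so with this change Add will also start the new cluster's cache and wait
// for it to sync instead of leaving it unstarted.
func addRemoteCluster(mgr manager.Manager, remoteCfg *rest.Config) error {
	cl, err := cluster.New(remoteCfg)
	if err != nil {
		return err
	}
	return mgr.Add(cl)
}
```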

@k8s-ci-robot added the do-not-merge/work-in-progress (Indicates that a PR should not merge because it is a work in progress) and cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA) labels on Oct 4, 2021
@k8s-ci-robot

Welcome @jsanda!

It looks like this is your first PR to kubernetes-sigs/controller-runtime 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/controller-runtime has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😄

@k8s-ci-robot

Hi @jsanda. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the needs-ok-to-test (Indicates a PR that requires an org member to verify it is safe to test) and size/XS (Denotes a PR that changes 0-9 lines, ignoring generated files) labels on Oct 4, 2021
@jsanda (Contributor, Author) commented Oct 4, 2021

@alvaroaleman if my changes look reasonable I can add a new test in manager_test.go.

@@ -211,6 +211,12 @@ func (cm *controllerManager) Add(r Runnable) error {
		cm.nonLeaderElectionRunnables = append(cm.nonLeaderElectionRunnables, r)
	} else if hasCache, ok := r.(hasCache); ok {
		cm.caches = append(cm.caches, hasCache)
		if cm.started {
			cm.startRunnable(hasCache)
			if !hasCache.GetCache().WaitForCacheSync(cm.internalCtx) {
Member commented:
This will never time out, right? Does that match the standard path, where we start the caches before the cm is started?

/ok-to-test

@jsanda (Contributor, Author) replied:
It could definitely time out. The remote cluster might not be accessible. Should the cache be started and synced with a different context? If the sync times out, I would then cancel the context. I suppose it also makes more sense to append the cache after the sync completes successfully.

I believe this matches the standard path. I can't simply call waitForCache because it returns immediately if cm.started is true. waitForCache first iterates through the caches and calls cm.startRunnable for each one, then iterates through the caches again and calls cache.GetCache().WaitForCacheSync(ctx), where ctx is cm.internalCtx.
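For readers following along, here is a small self-contained sketch of that two-pass order, with hasCache re-declared locally (the manager's version is unexported) and a start callback standing in for cm.startRunnable; startAndSyncCaches is an illustrative name, not controller-runtime API.

```go
package example

import (
	"context"
	"fmt"

	"sigs.k8s.io/controller-runtime/pkg/cache"
)

// hasCache mirrors the unexported interface the manager checks for; it is
// re-declared here purely for illustration.
type hasCache interface {
	GetCache() cache.Cache
}

// startAndSyncCaches sketches the two-pass order described above: start every
// cache first (the manager does this via cm.startRunnable), then wait for each
// one to sync against a shared context (cm.internalCtx in the manager).
func startAndSyncCaches(ctx context.Context, caches []hasCache, start func(c hasCache)) error {
	for _, c := range caches {
		start(c) // pass 1: kick off each cache-backed runnable
	}
	for _, c := range caches {
		// pass 2: block until the cache's informers report synced
		if !c.GetCache().WaitForCacheSync(ctx) {
			return fmt.Errorf("could not sync cache")
		}
	}
	return nil
}
```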

@jsanda (Contributor, Author) added:
After looking through the code some more, and the WaitForCacheSync function in shared_informer.go in client-go in particular, I am less certain that it will time out.
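If a bounded wait were ever wanted here, one option jsanda alludes to earlier in the thread is to derive a context with a deadline just for the sync. syncWithTimeout below is a hypothetical helper sketching that idea, not anything this PR adds; the timeout value and error message are illustrative.

```go
package example

import (
	"context"
	"fmt"
	"time"

	"sigs.k8s.io/controller-runtime/pkg/cache"
)

// syncWithTimeout waits for a cache to sync under a derived context so an
// unreachable remote cluster cannot block the caller forever.
func syncWithTimeout(parent context.Context, c cache.Cache, d time.Duration) error {
	ctx, cancel := context.WithTimeout(parent, d)
	defer cancel()
	if !c.WaitForCacheSync(ctx) {
		return fmt.Errorf("cache did not sync within %s", d)
	}
	return nil
}
```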

Member replied:
What I meant was whether the normal route of starting caches uses a context that times out, but that doesn't seem to be the case, so this is fine.

@k8s-ci-robot added the ok-to-test (Indicates a non-member PR verified by an org member that is safe to test) label and removed the needs-ok-to-test (Indicates a PR that requires an org member to verify it is safe to test) label on Oct 4, 2021
@alvaroaleman (Member) left a comment:
Looks good, but please also add a testcase for "Adding a cluster after manager was started results in a working cache"


@@ -211,6 +211,12 @@ func (cm *controllerManager) Add(r Runnable) error {
		cm.nonLeaderElectionRunnables = append(cm.nonLeaderElectionRunnables, r)
	} else if hasCache, ok := r.(hasCache); ok {
		cm.caches = append(cm.caches, hasCache)
		if cm.started {
Member commented:
This variable may only be accessed after acquiring cm.mu. Also, warning: #1689 changes some of the lock handling, so that might end up conflicting. Nevermind, we acquire that lock at the beginning of Add, so this is fine.
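For context on the locking point, a pared-down sketch of the pattern under discussion follows; the types below only mirror the relevant fields and are not the manager's real definitions.

```go
package example

import (
	"context"
	"sync"
)

// Runnable mirrors manager.Runnable for illustration only.
type Runnable interface {
	Start(ctx context.Context) error
}

// controllerManager is a stand-in that keeps just the fields the comment
// above refers to: cm.started must only be read while holding cm.mu.
type controllerManager struct {
	mu      sync.Mutex
	started bool
	caches  []Runnable
}

// Add acquires cm.mu at the top, so reading cm.started further down is safe
// without any extra locking, which is the reviewer's conclusion above.
func (cm *controllerManager) Add(r Runnable) error {
	cm.mu.Lock()
	defer cm.mu.Unlock()
	cm.caches = append(cm.caches, r)
	if cm.started {
		// Start the runnable right away; the real manager does this through
		// cm.startRunnable with its internal context.
		go func() { _ = r.Start(context.Background()) }()
	}
	return nil
}
```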

Add a test that adds a cluster to the manager after the manager has
already started. Verify that the cluster is started and its cache
is synced.

Added the startClusterAfterManager struct, which is basically just a
hook to verify that the cluster is started.
@jsanda jsanda marked this pull request as ready for review October 8, 2021 21:35
@k8s-ci-robot added the size/M (Denotes a PR that changes 30-99 lines, ignoring generated files) label and removed the do-not-merge/work-in-progress (Indicates that a PR should not merge because it is a work in progress) and size/XS (Denotes a PR that changes 0-9 lines, ignoring generated files) labels on Oct 8, 2021
@jsanda (Contributor, Author) commented Oct 8, 2021

@alvaroaleman I added a test. I added a fake cluster impl as a hook to verify the cluster is started. It feels a bit clunky. Open to suggestions if you have a better/simpler approach.

return c.informer.Start(ctx)
}

func (c *startClusterAfterManager) GetCache() cache.Cache {
Member commented:
You don't need to implement anything other than GetCache and Start for this test
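Following that suggestion, a stub needs only those two methods. A minimal sketch is below; the informer field's type is an assumption, since the excerpt above does not show the struct definition.

```go
package example

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/cache"
)

// startClusterAfterManager is a minimal test double: Start delegates to the
// wrapped cache and GetCache hands it back, which is all the manager needs to
// treat it as a cluster whose cache must be started and synced.
type startClusterAfterManager struct {
	informer cache.Cache // assumed type; the excerpt only shows it has Start and backs GetCache
}

func (c *startClusterAfterManager) Start(ctx context.Context) error {
	return c.informer.Start(ctx)
}

func (c *startClusterAfterManager) GetCache() cache.Cache {
	return c.informer
}
```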

Only the GetCache and Start methods are needed for the new test that adds a
cluster to the manager after the manager has already started.
@alvaroaleman added the tide/merge-method-squash (Denotes a PR that should be squashed by tide when it merges) label on Oct 8, 2021
@k8s-ci-robot added the lgtm ("Looks good to me", indicates that a PR is ready to be merged) label on Oct 8, 2021
@k8s-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: alvaroaleman, jsanda

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved (Indicates a PR has been approved by an approver from all required OWNERS files) label on Oct 8, 2021
@k8s-ci-robot k8s-ci-robot merged commit 3e870eb into kubernetes-sigs:master Oct 8, 2021
@k8s-ci-robot k8s-ci-robot added this to the v0.10.x milestone Oct 8, 2021
@alvaroaleman

thanks!

Successfully merging this pull request may close these issues:

Manager.Add does not start Cache after Manager has already started
3 participants