SharedIndexInformer is watching resources in all namespaces even though it is intended to watch a particular namespace #2331

Closed
skpandey91 opened this issue Jul 4, 2020 · 18 comments

@skpandey91

skpandey91 commented Jul 4, 2020

Below is the code I use to create a SharedIndexInformer. I want it to watch only a particular namespace, but it is watching all namespaces.
Please suggest a fix.

public SharedIndexInformer<T> idmSharedIndexInformerFor() {
        CustomResourceDefinitionContext CrdContext = new CustomResourceDefinitionContext.Builder()
                .withVersion(this.version)
                .withScope(this.scope)
                .withGroup(this.group)
                .withPlural(this.plural)
                .build();
        return sharedInformerFactory.sharedIndexInformerForCustomResource(
                CrdContext, apiTypeClass, apiListTypeClass,
                new OperationContext().withNamespace("default"), 60 * 1000L);
    }

I start it like this:
sharedInformerFactory.startAllRegisteredInformers();

@rohanKanojia
Member

@skpandey91: Which version of the client are you using?

@skpandey91
Author

skpandey91 commented Jul 6, 2020

@skpandey91: Which version of the client are you using?

I am using version 4.10.2. Because the informer tries to list and watch CRs in all namespaces, creating a ClusterRoleBinding becomes mandatory, but we work with a RoleBinding only. Please suggest a solution; we are blocked.
A plain watcher worked in a single namespace with only a RoleBinding. We are moving from the plain watcher to an informer because the watcher returns "too old resource version" errors when the number of resources is large.

@rohanKanojia
Member

This is quite strange since we have a test for this case 😕. Would it be possible for you to share a reproducible test case?

void testWithOperationContextArgumentForCustomResource() throws InterruptedException {
  String startResourceVersion = "1000", endResourceVersion = "1001";
  PodSetList podSetList = new PodSetList();
  podSetList.setMetadata(new ListMetaBuilder().withResourceVersion(startResourceVersion).build());
  server.expect().withPath("/apis/demo.k8s.io/v1alpha1/namespaces/ns1/podsets")
    .andReturn(200, podSetList).once();
  server.expect().withPath("/apis/demo.k8s.io/v1alpha1/namespaces/ns1/podsets?resourceVersion=" + startResourceVersion + "&watch=true")
    .andUpgradeToWebSocket()
    .open()
    .waitFor(WATCH_EVENT_EMIT_TIME)
    .andEmit(new WatchEvent(getPodSet("podset1", endResourceVersion), "ADDED"))
    .waitFor(OUTDATED_WATCH_EVENT_EMIT_TIME)
    .andEmit(outdatedEvent).done().always();
  KubernetesClient client = server.getClient();
  CustomResourceDefinitionContext crdContext = new CustomResourceDefinitionContext.Builder()
    .withVersion("v1alpha1")
    .withScope("Namespaced")
    .withGroup("demo.k8s.io")
    .withPlural("podsets")
    .build();
  SharedInformerFactory sharedInformerFactory = client.informers();
  SharedIndexInformer<PodSet> podSetSharedIndexInformer = sharedInformerFactory.sharedIndexInformerForCustomResource(
    crdContext, PodSet.class, PodSetList.class, new OperationContext().withNamespace("ns1"), 60 * WATCH_EVENT_EMIT_TIME);
  CountDownLatch foundExistingPodSet = new CountDownLatch(1);
  podSetSharedIndexInformer.addEventHandler(
    new ResourceEventHandler<PodSet>() {
      @Override
      public void onAdd(PodSet podSet) {
        if (podSet.getMetadata().getName().equalsIgnoreCase("podset1")) {
          foundExistingPodSet.countDown();
        }
      }

      @Override
      public void onUpdate(PodSet oldPodSet, PodSet newPodSet) { }

      @Override
      public void onDelete(PodSet podSet, boolean deletedFinalStateUnknown) { }
    });
  sharedInformerFactory.startAllRegisteredInformers();

@skpandey91
Author

skpandey91 commented Jul 6, 2020

This is quite strange since we have a test for this case 😕. Would it be possible for you to share a reproducible test case?

Here is the test I am using; it reproduces the issue:

@Test
	public void testInfoIDMRoles() throws InterruptedException, IOException {
		KubernetesClient client = getClient();
		SharedInformerFactory sharedInformerFactory = client.informers();
		CustomResourceDefinitionContext CrdContext = new CustomResourceDefinitionContext.Builder()
				.withVersion("v1")
				.withScope("Namespaced")
				.withGroup("idm.fndsec.amdocs.com")
				.withPlural("PodSetes")
				.build();
		SharedIndexInformer<PodSet> podInformer = sharedInformerFactory.sharedIndexInformerForCustomResource(
				CrdContext, PodSet.class, PodSet.PodSetList.class,
				new CustomResourceOperationContext().withNamespace("default"), 60 * 1000L);
		podInformer.addEventHandler(
				new ResourceEventHandler<PodSet>() {
					@Override
					public void onAdd(PodSet pod) {
						logger.info("Pod " + pod.getMetadata().getName() + " got added");
					}

					@Override
					public void onUpdate(PodSet oldPod, PodSet newPod) {
						logger.info("Pod " + oldPod.getMetadata().getName() + " got updated");
					}

					@Override
					public void onDelete(PodSet pod, boolean deletedFinalStateUnknown) {
						logger.info("Pod " + pod.getMetadata().getName() + " got deleted");
					}
				}
		);
		
		sharedInformerFactory.startAllRegisteredInformers();
}

	static KubernetesClient getClient() {

		Config k8SConfig = new ConfigBuilder().withMasterUrl("****")
			.withTrustCerts(true)
			.withNamespace("default")
			.withClientCertData(
				"****")
			.withClientKeyData(
				"****")
			.build();

		return new DefaultKubernetesClient(k8SConfig).inNamespace("default");
	}

@bframke

bframke commented Jul 20, 2020

Hello, we have just updated to version 1.6.0 of Quarkus, which uses Fabric8 4.10.2, and we can see that it looks at all namespaces and not the one it is intended to look at.

@skpandey91
Author

Hello, we have just updated to version 1.6.0 of Quarkus, which uses Fabric8 4.10.2, and we can see that it looks at all namespaces and not the one it is intended to look at.

So is it a defect or a feature?

@bframke

bframke commented Jul 20, 2020

I would call it a defect, because we are defining a namespace that should be used by the Informer and it is ignored.

@rohanKanojia
Member

Hi, I'm not able to reproduce this issue. I have three pods in my default namespace:

~/work/repos/jkube/quickstarts/maven/karaf-camel-log : $ kubectl get pods -ndefault
NAME                            READY   STATUS      RESTARTS   AGE
docker-registry-1-65cbp         1/1     Running     2          3d
persistent-volume-setup-5bsnl   0/1     Completed   0          3d
router-1-2xs2f                  1/1     Running     2          3d

When I run a simple namespaced Pod informer for the default namespace:

        try (KubernetesClient client = new DefaultKubernetesClient()) {
            SharedInformerFactory sharedInformerFactory = client.informers();
            SharedIndexInformer<Pod> podInformer = sharedInformerFactory.sharedIndexInformerFor(
                    Pod.class,
                    PodList.class,
                    new OperationContext().withNamespace("default"),
                    30 * 1000L);
            logger.info("Informer factory initialized.");

            podInformer.addEventHandler(
                    new ResourceEventHandler<Pod>() {
                        @Override
                        public void onAdd(Pod pod) {
                            logger.info("Pod " + pod.getMetadata().getName() + " got added");
                        }

                        @Override
                        public void onUpdate(Pod oldPod, Pod newPod) {
                            logger.info("Pod " + oldPod.getMetadata().getName() + " got updated");
                        }

                        @Override
                        public void onDelete(Pod pod, boolean deletedFinalStateUnknown) {
                            logger.info("Pod " + pod.getMetadata().getName() + " got deleted");
                        }
                    });

            logger.info("Starting all registered informers");
            sharedInformerFactory.startAllRegisteredInformers();

            // Wait for 15 minutes
            Thread.sleep(15 * 60 * 1000L);
            sharedInformerFactory.stopAllRegisteredInformers();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            e.printStackTrace();
        }

I get this as output:

/usr/java/jdk-14.0.1/bin/java -javaagent:/opt/ideaIC-2019.3.3/idea-IC-193.6494.35/lib/idea_rt.jar=37767:/opt/ideaIC-2019.3.3/idea-IC-193.6494.35/bin -Dfile.encoding=UTF-8 -classpath /home/rohaan/work/repos/kubernetes-client-demo/target/classes:/home/rohaan/.m2/repository/io/fabric8/kubernetes-client/4.10-SNAPSHOT/kubernetes-client-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/io/fabric8/kubernetes-model-core/4.10-SNAPSHOT/kubernetes-model-core-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/io/fabric8/kubernetes-model-common/4.10-SNAPSHOT/kubernetes-model-common-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/com/fasterxml/jackson/module/jackson-module-jaxb-annotations/2.10.3/jackson-module-jaxb-annotations-2.10.3.jar:/home/rohaan/.m2/repository/jakarta/xml/bind/jakarta.xml.bind-api/2.3.2/jakarta.xml.bind-api-2.3.2.jar:/home/rohaan/.m2/repository/jakarta/activation/jakarta.activation-api/1.2.1/jakarta.activation-api-1.2.1.jar:/home/rohaan/.m2/repository/javax/annotation/javax.annotation-api/1.3.2/javax.annotation-api-1.3.2.jar:/home/rohaan/.m2/repository/javax/xml/bind/jaxb-api/2.3.0/jaxb-api-2.3.0.jar:/home/rohaan/.m2/repository/io/fabric8/kubernetes-model-rbac/4.10-SNAPSHOT/kubernetes-model-rbac-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/io/fabric8/kubernetes-model-admissionregistration/4.10-SNAPSHOT/kubernetes-model-admissionregistration-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/io/fabric8/kubernetes-model-apps/4.10-SNAPSHOT/kubernetes-model-apps-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/io/fabric8/kubernetes-model-autoscaling/4.10-SNAPSHOT/kubernetes-model-autoscaling-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/io/fabric8/kubernetes-model-apiextensions/4.10-SNAPSHOT/kubernetes-model-apiextensions-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/io/fabric8/kubernetes-model-batch/4.10-SNAPSHOT/kubernetes-model-batch-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/io/fabric8/kubernetes-model-certificates/4.10-SNAPSHOT/kubernetes-model-certificates-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/io/fabric8/kubernetes-model-coordination/4.10-SNAPSHOT/kubernetes-model-coordination-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/io/fabric8/kubernetes-model-discovery/4.10-SNAPSHOT/kubernetes-model-discovery-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/io/fabric8/kubernetes-model-events/4.10-SNAPSHOT/kubernetes-model-events-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/io/fabric8/kubernetes-model-extensions/4.10-SNAPSHOT/kubernetes-model-extensions-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/io/fabric8/kubernetes-model-networking/4.10-SNAPSHOT/kubernetes-model-networking-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/io/fabric8/kubernetes-model-metrics/4.10-SNAPSHOT/kubernetes-model-metrics-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/io/fabric8/kubernetes-model-policy/4.10-SNAPSHOT/kubernetes-model-policy-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/io/fabric8/kubernetes-model-scheduling/4.10-SNAPSHOT/kubernetes-model-scheduling-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/io/fabric8/kubernetes-model-settings/4.10-SNAPSHOT/kubernetes-model-settings-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/io/fabric8/kubernetes-model-storageclass/4.10-SNAPSHOT/kubernetes-model-storageclass-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/io/fabric8/openshift-model/4.10-SNAPSHOT/openshift-model-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/com/squareup/okhttp3/okhttp/3.12.12/okhttp-3.12.12.jar:/home/rohaan/.m2/repository/com/squareup/okio/okio/1.15.0/okio-1.15.0.jar:/home/rohaan/.m2/repository/com/squareup
/okhttp3/logging-interceptor/3.12.12/logging-interceptor-3.12.12.jar:/home/rohaan/.m2/repository/com/fasterxml/jackson/dataformat/jackson-dataformat-yaml/2.10.3/jackson-dataformat-yaml-2.10.3.jar:/home/rohaan/.m2/repository/org/yaml/snakeyaml/1.24/snakeyaml-1.24.jar:/home/rohaan/.m2/repository/com/fasterxml/jackson/datatype/jackson-datatype-jsr310/2.10.3/jackson-datatype-jsr310-2.10.3.jar:/home/rohaan/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.10.3/jackson-annotations-2.10.3.jar:/home/rohaan/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.3/jackson-databind-2.10.3.jar:/home/rohaan/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.10.3/jackson-core-2.10.3.jar:/home/rohaan/.m2/repository/io/fabric8/zjsonpatch/0.3.0/zjsonpatch-0.3.0.jar:/home/rohaan/.m2/repository/com/github/mifmif/generex/1.0.2/generex-1.0.2.jar:/home/rohaan/.m2/repository/dk/brics/automaton/automaton/1.11-8/automaton-1.11-8.jar:/home/rohaan/.m2/repository/io/fabric8/openshift-client/4.10-SNAPSHOT/openshift-client-4.10-SNAPSHOT.jar:/home/rohaan/.m2/repository/org/json/json/20190722/json-20190722.jar:/home/rohaan/.m2/repository/org/slf4j/slf4j-simple/1.7.28/slf4j-simple-1.7.28.jar:/home/rohaan/.m2/repository/org/slf4j/slf4j-api/1.7.5/slf4j-api-1.7.5.jar io.fabric8.NamespacedInformerDemo
Jul 20, 2020 8:19:40 PM io.fabric8.NamespacedInformerDemo main
INFO: Informer factory initialized.
Jul 20, 2020 8:19:40 PM io.fabric8.NamespacedInformerDemo main
INFO: Starting all registered informers
[informer-controller-Pod] INFO io.fabric8.kubernetes.client.informers.cache.Controller - informer#Controller: ready to run resync and reflector runnable
[informer-controller-Pod] INFO io.fabric8.kubernetes.client.informers.cache.Reflector - Started ReflectorRunnable watch for class io.fabric8.kubernetes.api.model.Pod
Jul 20, 2020 8:19:40 PM io.fabric8.NamespacedInformerDemo$1 onAdd
INFO: Pod docker-registry-1-65cbp got added
Jul 20, 2020 8:19:40 PM io.fabric8.NamespacedInformerDemo$1 onUpdate
INFO: Pod docker-registry-1-65cbp got updated
Jul 20, 2020 8:19:40 PM io.fabric8.NamespacedInformerDemo$1 onAdd
INFO: Pod persistent-volume-setup-5bsnl got added
Jul 20, 2020 8:19:40 PM io.fabric8.NamespacedInformerDemo$1 onUpdate
INFO: Pod persistent-volume-setup-5bsnl got updated
Jul 20, 2020 8:19:40 PM io.fabric8.NamespacedInformerDemo$1 onAdd
INFO: Pod router-1-2xs2f got added
Jul 20, 2020 8:19:40 PM io.fabric8.NamespacedInformerDemo$1 onUpdate
INFO: Pod router-1-2xs2f got updated

@rohanKanojia
Member

What's more strange is that we have a test case already covering this scenario:

void testWithOperationContextArgumentForCustomResource() throws InterruptedException {
  String startResourceVersion = "1000", endResourceVersion = "1001";
  PodSetList podSetList = new PodSetList();
  podSetList.setMetadata(new ListMetaBuilder().withResourceVersion(startResourceVersion).build());
  server.expect().withPath("/apis/demo.k8s.io/v1alpha1/namespaces/ns1/podsets")
    .andReturn(200, podSetList).once();
  server.expect().withPath("/apis/demo.k8s.io/v1alpha1/namespaces/ns1/podsets?resourceVersion=" + startResourceVersion + "&watch=true")
    .andUpgradeToWebSocket()
    .open()
    .waitFor(WATCH_EVENT_EMIT_TIME)
    .andEmit(new WatchEvent(getPodSet("podset1", endResourceVersion), "ADDED"))
    .waitFor(OUTDATED_WATCH_EVENT_EMIT_TIME)
    .andEmit(outdatedEvent).done().always();
  KubernetesClient client = server.getClient();
  CustomResourceDefinitionContext crdContext = new CustomResourceDefinitionContext.Builder()
    .withVersion("v1alpha1")
    .withScope("Namespaced")
    .withGroup("demo.k8s.io")
    .withPlural("podsets")
    .build();
  SharedInformerFactory sharedInformerFactory = client.informers();
  SharedIndexInformer<PodSet> podSetSharedIndexInformer = sharedInformerFactory.sharedIndexInformerForCustomResource(
    crdContext, PodSet.class, PodSetList.class, new OperationContext().withNamespace("ns1"), 60 * WATCH_EVENT_EMIT_TIME);
  CountDownLatch foundExistingPodSet = new CountDownLatch(1);
  podSetSharedIndexInformer.addEventHandler(
    new ResourceEventHandler<PodSet>() {
      @Override
      public void onAdd(PodSet podSet) {
        if (podSet.getMetadata().getName().equalsIgnoreCase("podset1")) {
          foundExistingPodSet.countDown();
        }
      }

      @Override
      public void onUpdate(PodSet oldPodSet, PodSet newPodSet) { }

      @Override
      public void onDelete(PodSet podSet, boolean deletedFinalStateUnknown) { }
    });
  sharedInformerFactory.startAllRegisteredInformers();
  foundExistingPodSet.await(LATCH_AWAIT_PERIOD_IN_SECONDS, TimeUnit.SECONDS);
  waitUntilResourceVersionSynced();
  assertEquals(0, foundExistingPodSet.getCount());
  assertEquals(endResourceVersion, podSetSharedIndexInformer.lastSyncResourceVersion());
  sharedInformerFactory.stopAllRegisteredInformers();
}

Could anyone please share a reproducer project which we can try out and test?

@bframke

bframke commented Jul 20, 2020

Then just deploy the pod you watch into another namespace, or across more than two namespaces.

@rohanKanojia
Member

I do have pods running in other namespaces. This is my output with the --all-namespaces argument:

~/work/repos/jkube/quickstarts/maven/docker-file-simple : $ oc get pods --all-namespaces
NAMESPACE                       NAME                                                      READY     STATUS              RESTARTS   AGE
default                         docker-registry-1-65cbp                                   1/1       Running             2          3d
default                         persistent-volume-setup-5bsnl                             0/1       Completed           0          3d
default                         router-1-2xs2f                                            1/1       Running             2          3d
kube-dns                        kube-dns-4d9nt                                            1/1       Running             2          3d
kube-proxy                      kube-proxy-cvtrh                                          1/1       Running             2          3d
kube-system                     kube-controller-manager-localhost                         1/1       Running             2          3d
kube-system                     kube-scheduler-localhost                                  1/1       Running             2          3d
kube-system                     master-api-localhost                                      1/1       Running             2          3d
kube-system                     master-etcd-localhost                                     1/1       Running             2          3d
myproject                       docker-file-simple-s2i-1-build                            0/1       Error               0          27s
myproject                       karaf-camel-log-7d84c79b4c-62cmw                          0/1       CrashLoopBackOff    4          2m
myproject                       karaf-camel-log-s2i-2-build                               0/1       Completed           0          5m
myproject                       xml-config-controller-b889999c4-hwvnn                     0/1       ContainerCreating   0          5m
myproject                       xml-config-controller-b889999c4-kqllk                     0/1       ContainerCreating   0          5m
openshift-apiserver             openshift-apiserver-6nd4w                                 1/1       Running             2          3d
openshift-controller-manager    openshift-controller-manager-69rbg                        1/1       Running             2          3d
openshift-core-operators        openshift-service-cert-signer-operator-6d477f986b-nqcn6   1/1       Running             2          3d
openshift-core-operators        openshift-web-console-operator-57986c9c4f-sjvjm           1/1       Running             2          3d
openshift-service-cert-signer   apiservice-cabundle-injector-8ffbbb6dc-9p2dp              1/1       Running             2          3d
openshift-service-cert-signer   service-serving-cert-signer-668c45d5f-dgrm2               1/1       Running             2          3d
openshift-web-console           webconsole-74646f4d98-qs85t                               1/1       Running             4          3d
~/work/repos/jkube/quickstarts/maven/docker-file-simple : $ 

@bframke

bframke commented Jul 20, 2020

Okay, so I am using a Custom Resource, but also a ClusterRole for the ServiceAccount. The Kubernetes client itself is set to a namespace. Maybe you should try it out with a Custom Resource; maybe only that case has a problem here.

@rohanKanojia
Member

Okay, I'm able to reproduce this. Let me try to fix it when I get time this week.

rohanKanojia self-assigned this Jul 23, 2020
@rohanKanojia
Member

rohanKanojia commented Jul 23, 2020

Okay, now I can see why the informer was watching all namespaces instead of the specified one. In the 4.10.x versions we introduced a new interface called Namespaced:

Every resource in the Kubernetes model that is namespace-scoped implements this interface; otherwise it is considered a cluster-scoped resource.

Our test was passing because our PodSet class implements this interface:

public class PodSet extends CustomResource implements Namespaced {

I modified my CronTab POJO to implement this interface, and after that the informers started working as expected. You can check my test here.

Could you please check whether making your Custom Resource classes implement this Namespaced interface solves your issue?
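
For reference, here is a minimal sketch of that change, modelled on the PodSet example above (CronTab matches the POJO mentioned earlier, while CronTabSpec is an assumed spec class; the essential part is the implements Namespaced clause):

// Sketch of a namespace-scoped custom resource; CronTabSpec is an assumed spec POJO.
// With the 4.10.x client, a CustomResource that does not implement
// io.fabric8.kubernetes.api.model.Namespaced is treated as cluster-scoped,
// and the informer then ignores the namespace set via OperationContext.
import io.fabric8.kubernetes.api.model.Namespaced;
import io.fabric8.kubernetes.client.CustomResource;

public class CronTab extends CustomResource implements Namespaced {

    private CronTabSpec spec;

    public CronTabSpec getSpec() {
        return spec;
    }

    public void setSpec(CronTabSpec spec) {
        this.spec = spec;
    }
}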

rohanKanojia removed the bug label Jul 23, 2020
@bframke

bframke commented Jul 23, 2020

Okay, so just adding implements Namespaced to the CR helps in this case? Just asking if I understood it correctly.

Edit: reading helps :D Trying to test it now

@rohanKanojia
Member

@bframke: Yes, I think so. I've tested it and it seemed to work for me.

@rohanKanojia
Member

Please let me know if it works for you. I'll update the docs so that it's clearly visible.

@bframke

bframke commented Jul 23, 2020

Okay, it works fine. I just added it; I had custom resources in other namespaces and they were not pulled in by the index informer anymore.
