What steps did you take and what happened:
We have (or had) an issue on an EKS cluster where pods got stuck in the PodInitializing status when deploying. In the logs of the csi-secrets-store driver on the same Kubernetes node, I found that the driver complained about a missing SecretProviderClass object, even though the object had been created successfully.
At first I tried restarting the CSI driver on the node, but it then failed with a memory violation error coming from one of its containers, k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x15a7746]

The only workaround found so far is to replace the node.

What did you expect to happen:
At the very least, restarting the driver pod should succeed.

Anything else you would like to add:
This has happened twice in the last 5 days, both times on nodes that had been running for more than 161 days. It may be related to kubernetes-sigs/controller-runtime#1891.

Which provider are you using:
AWS Secrets Manager
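For context, a SecretProviderClass for the AWS provider looks roughly like the sketch below (the name, namespace, and secret ARN are placeholders, not values from this cluster); this is the kind of object the driver reports as missing:

```yaml
# Hypothetical example manifest; all names and the ARN are placeholders.
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: my-app-secrets        # placeholder
  namespace: default          # must match the namespace of the mounting pod
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "my-app/prod/credentials"   # placeholder Secrets Manager name
        objectType: "secretsmanager"
```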
Environment:
Secrets Store CSI Driver version (image tag): k8s.gcr.io/csi-secrets-store/driver:v0.2.0
Kubernetes version (kubectl version): v1.21.14-eks-fb459a0
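The "memory violation" in the registrar container is Go's generic nil pointer dereference panic. A minimal, hypothetical illustration of that failure mode (this is not the registrar's actual code; the `registrar` type below is invented):

```go
package main

import "fmt"

type registrar struct{ socketPath string } // hypothetical stand-in type

// derefNil triggers the same class of failure seen in the log above:
// reading a field through a nil pointer. The panic is recovered and
// its message returned so it can be inspected.
func derefNil() (msg string) {
	defer func() {
		if r := recover(); r != nil {
			msg = fmt.Sprint(r)
		}
	}()
	var reg *registrar // never initialized, e.g. after a failed setup step
	_ = reg.socketPath // nil dereference -> runtime panic (SIGSEGV)
	return "no panic"
}

func main() {
	fmt.Println(derefNil())
}
```

Running this prints the same "runtime error: invalid memory address or nil pointer dereference" message that appears in the container log, which suggests the registrar hit an uninitialized value rather than an external failure.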
I would recommend upgrading to the latest supported version and reopening the issue if you still encounter the error.
/close