(aws_eks): (EKS cluster_logging property makes cluster unmanageable) #20779
Comments
This is bad. I'm marking this as a bug. For anyone interested in picking this up, here's my hypothesis: the update-cluster-config API call allows you to pass parameters for VPC configuration and logging, but not both at the same time (even though this is not spelled out in the docs). And that's exactly what the custom resource tries to do when you update endpoint access while a logging configuration is already in place. We should probably split this condition in two: one for updating logging and one for updating access.
I'll do some research on this.
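The split described above can be sketched in Python. This is a hypothetical helper, not the actual custom-resource code; it only illustrates the idea of emitting one UpdateClusterConfig payload per changed setting instead of combining `resourcesVpcConfig` and `logging` in a single call (which EKS rejects):

```python
def build_update_requests(name, vpc_config=None, logging=None):
    """Split a combined config change into separate UpdateClusterConfig
    payloads, since EKS rejects updates that change VPC config and
    logging at the same time."""
    requests = []
    if vpc_config is not None:
        requests.append({'name': name, 'resourcesVpcConfig': vpc_config})
    if logging is not None:
        requests.append({'name': name, 'logging': logging})
    return requests

# One request per changed setting, never both in the same call:
reqs = build_update_requests(
    'my-cluster',
    vpc_config={'publicAccessCidrs': ['1.2.3.4/32']},
    logging={'clusterLogging': [{'types': ['audit'], 'enabled': True}]},
)
```

Each payload in `reqs` could then be passed to `boto3`'s `eks.update_cluster_config` as a separate call, waiting for one update to finish before issuing the next.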
Hey @watany-dev, @TheRealAmazonKendra
With #22957 merged, I think this issue should be fixed. Can you verify again?
This issue has not received a response in a while. If you want to keep this issue open, please leave a comment below and auto-close will be canceled. |
@pahud
Step 3 produces an error similar to the one below. CDK version: 2.66.1
Reopening this issue as I can reproduce it with the code below:

```ts
import {
  App, Stack, StackProps,
  aws_eks as eks,
  aws_ec2 as ec2,
} from 'aws-cdk-lib';
import { KubectlV24Layer as KubectlLayer } from '@aws-cdk/lambda-layer-kubectl-v24';
import { Construct } from 'constructs';

export class EksTsStack extends Stack {
  constructor(scope: Construct, id: string, props: StackProps = {}) {
    super(scope, id, props);

    const vpc = ec2.Vpc.fromLookup(this, 'Vpc', { isDefault: true });

    const cluster = new eks.Cluster(this, 'Cluster', {
      vpc,
      version: eks.KubernetesVersion.V1_24,
      kubectlLayer: new KubectlLayer(this, 'KubectlLayer'),
      // clusterLogging: [
      //   eks.ClusterLoggingTypes.AUDIT,
      // ],
    });
  }
}
```
I think the bug could be the following: when the logging setting is changed from the string 'true' to undefined, we probably should handle that case in aws-cdk/packages/@aws-cdk/aws-eks/lib/cluster-resource-handler/cluster.ts (lines 295 to 297 in 9a07ab0).
I am leaving this as a p2 bug. Any PR submission would be highly appreciated!
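The suspected fix can be illustrated with a small change-detection sketch (hypothetical; the real handler is TypeScript in cluster.ts). The idea is to normalize "no logging" values before comparing, so that an absent value and an empty config are treated the same, while a real value such as the `'true'` placeholder going away is still detected as a change:

```python
def logging_changed(old, new):
    """Return True if the logging configuration actually changed.

    None, empty string, and empty dict are all normalized to 'no
    logging requested' so spurious updates are not issued."""
    def normalize(cfg):
        return cfg if cfg else None
    return normalize(old) != normalize(new)
```

Under this sketch, going from `'true'` to undefined is a real change (a logging update must be issued to turn logging off), while `None` versus `{}` is not.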
I have created PR #24688 for this.
Describe the bug
An EKS cluster built with the cluster_logging property set cannot be modified afterwards. In my tests, I can't add or delete CIDRs to/from the cluster's endpoint_access property.
Expected Behavior
The cluster_logging property should not prevent other cluster settings from being modified.
Current Behavior
Setting cluster_logging makes the cluster unmanageable.
Reproduction Steps
Create EKS cluster using CDK code:
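The original snippet was not preserved in this thread; a hypothetical Python CDK stack along these lines sets up the failing scenario (stack name, CIDR, and Kubernetes version are illustrative, not from the report):

```python
from aws_cdk import Stack, aws_ec2 as ec2, aws_eks as eks
from constructs import Construct

class EksStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        vpc = ec2.Vpc.from_lookup(self, "Vpc", is_default=True)
        eks.Cluster(
            self, "Cluster",
            vpc=vpc,
            version=eks.KubernetesVersion.V1_21,
            # Enabling control-plane logging is what later blocks updates:
            cluster_logging=[eks.ClusterLoggingTypes.AUDIT],
            # Restrict endpoint access to one CIDR; adding another
            # CIDR to this list later is what triggers the error:
            endpoint_access=eks.EndpointAccess.PUBLIC_AND_PRIVATE.only_from(
                "1.2.3.4/32"
            ),
        )
```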
Now try to add another CIDR to endpoint_access list:
CDK deploy command reports an error:
Moreover, now when you try to remove cluster_logging property, you'll get below error:
The combination of the above bugs effectively makes your cluster unmanageable: you can't make changes to the cluster, nor can you remove the logging setting. I did not find a way out of this loop.
Possible Solution
The only workaround is not to use cluster_logging in the first place. But once you have, there is no way out.
Additional Information/Context
No response
CDK CLI Version
2.28.0 (build ba233f0)
Framework Version
No response
Node.js Version
v16.15.0
OS
MacOS 12.4
Language
Python
Language Version
Python 3.9.12
Other information
No response