Karpenter: Add the EC2 Karpenter agent #803

Merged: 1 commit merged into bottlerocket-os:develop from the karpenter branch on Apr 17, 2023

Conversation

@ecpullen (Contributor) commented on Apr 6, 2023:

Issue number:

N/A

Description of changes:

Adds support for launching nodes with karpenter.

Agent operations (creation):

  • Creates the AWS config and sets up the environment
  • Creates a Karpenter node role and instance profile if they don't already exist (this allows the agent to work without IAM::CreateRole permissions)
  • Creates a policy for the Karpenter controller if it doesn't exist
  • Associates an IAM OIDC provider with the cluster
  • Creates an IAM service account role named KarpenterControllerRole-<CLUSTER-NAME> based on the created Karpenter controller policy
  • Adds tags to the cluster's subnets and security groups so Karpenter knows where to launch instances
  • Adds the Karpenter role to the aws-auth ConfigMap of the cluster
  • Creates a tainted node group for only Karpenter to run on
  • Installs Karpenter
  • Creates a provisioner for launching Karpenter nodes
  • Creates a deployment to scale Karpenter nodes
  • Marks the tainted node group's nodes NoSchedule to prevent sonobuoy from using them (a rough sketch of how the agent drives these steps follows this list)
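
A minimal sketch of how the creation flow above can be driven, assuming the agent shells out to eksctl and helm (the Future Work item about an eksctl macro points at these calls). The cluster name, role and policy names, and chart version mirror the test logs below; the exact flags, the placeholder account id, and passing values with --set instead of a generated template file are illustrative assumptions rather than the agent's actual code.

use std::process::Command;

// Run a CLI and fail loudly if it exits non-zero.
fn run(program: &str, args: &[&str]) -> Result<(), Box<dyn std::error::Error>> {
    let status = Command::new(program).args(args).status()?;
    if !status.success() {
        return Err(format!("{program} {args:?} exited with {status}").into());
    }
    Ok(())
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Cluster name taken from the test logs below.
    let cluster = "k-test-3";

    // Associate the IAM OIDC provider so IRSA works for the Karpenter controller.
    run("eksctl", &["utils", "associate-iam-oidc-provider", "--cluster", cluster, "--approve"])?;

    // Create the controller's service account role from the pre-created policy.
    run("eksctl", &[
        "create", "iamserviceaccount",
        "--cluster", cluster,
        "--namespace", "karpenter",
        "--name", "karpenter",
        "--role-name", "KarpenterControllerRole-k-test-3",
        // Placeholder account id; the real ARN is looked up at runtime.
        "--attach-policy-arn", "arn:aws:iam::<ACCOUNT_ID>:policy/KarpenterControllerPolicy",
        "--role-only",
        "--approve",
    ])?;

    // Install the Karpenter chart, pointing it at the cluster and instance profile.
    run("helm", &[
        "upgrade", "--install", "karpenter",
        "oci://public.ecr.aws/karpenter/karpenter",
        "--namespace", "karpenter", "--create-namespace",
        "--version", "v0.27.1",
        "--set", "settings.aws.clusterName=k-test-3",
        "--set", "settings.aws.defaultInstanceProfile=KarpenterInstanceProfile",
        "--wait",
    ])?;
    Ok(())
}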

Agent operations (destruction):

  • Creates the AWS config and sets up the environment
  • Removes the taint from the tainted node group so the nodes can be scheduled
  • Scales down the Karpenter nodes
  • Uninstalls Karpenter
  • Deletes the tainted node group (a matching teardown sketch follows this list)
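
A matching sketch of the teardown, again assuming the agent wraps kubectl, helm, and eksctl. The taint, cluster, and nodegroup names come from the logs below; the deployment name used to scale Karpenter nodes ("karpenter-scaler") is a made-up placeholder, and error handling is elided.

use std::process::Command;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Remove the NoSchedule taint so pods can land on the tainted nodegroup again.
    Command::new("kubectl")
        .args(["taint", "nodes", "--all", "sonobuoy=ignore:NoSchedule-"])
        .status()?;

    // Scale the deployment that forced Karpenter to launch nodes down to zero.
    Command::new("kubectl")
        .args(["scale", "deployment", "karpenter-scaler", "--replicas=0"]) // deployment name is a placeholder
        .status()?;

    // Uninstall the Karpenter chart.
    Command::new("helm")
        .args(["uninstall", "karpenter", "--namespace", "karpenter"])
        .status()?;

    // Delete the tainted managed nodegroup and wait for its stack to clean up.
    Command::new("eksctl")
        .args(["delete", "nodegroup", "--cluster", "k-test-3", "--name", "tainted-nodegroup", "--wait"])
        .status()?;
    Ok(())
}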

Future Work:

  • Support custom userdata
  • Support custom block sizes
  • Improve resource cleanup (some resources are created and not cleaned up, but this does not affect the cluster)
  • Create a macro to simplify eksctl calls
  • Use kube-rs instead of kubectl for creating the provisioner (a rough sketch follows this list)
  • Add helm to testsys-tools
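
For the kube-rs item above, a rough sketch of what replacing the kubectl call could look like: applying the Karpenter Provisioner as a DynamicObject with server-side apply. The object names (default, my-provider) and the karpenter.sh/v1alpha5 API match the logs below; the kube-rs calls and the minimal spec shown are assumptions about how this future work could look, not code in this PR.

use kube::{
    api::{Api, DynamicObject, Patch, PatchParams},
    core::{ApiResource, GroupVersionKind},
    Client,
};
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::try_default().await?;

    // Provisioners are cluster-scoped custom resources in karpenter.sh/v1alpha5.
    let gvk = GroupVersionKind::gvk("karpenter.sh", "v1alpha5", "Provisioner");
    let ar = ApiResource::from_gvk(&gvk);
    let provisioners: Api<DynamicObject> = Api::all_with(client, &ar);

    // Minimal provisioner pointing at the AWSNodeTemplate ("my-provider") the agent also creates.
    let provisioner: DynamicObject = serde_json::from_value(json!({
        "apiVersion": "karpenter.sh/v1alpha5",
        "kind": "Provisioner",
        "metadata": { "name": "default" },
        "spec": {
            "providerRef": { "name": "my-provider" },
            "ttlSecondsAfterEmpty": 30
        }
    }))?;

    // Server-side apply keeps repeated runs idempotent (the kubectl path logs "unchanged" below).
    provisioners
        .patch(
            "default",
            &PatchParams::apply("ec2-karpenter-resource-agent"),
            &Patch::Apply(&provisioner),
        )
        .await?;

    Ok(())
}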

Testing done:

Patched the version of TestSys used in the Bottlerocket monorepo and ran cargo make test using the new Karpenter agent as the Bottlerocket provider.

NAME                       TYPE        STATE        PASSED   SKIPPED   FAILED 
 k-test-3                   Resource    completed                              
 k-test-3-instances-lvqu    Resource    running                                
 k-test-3-quick             Test        pass         1        6972      0     
Logs
[2023-04-12T16:26:34Z INFO  resource_agent::agent] Initializing Agent
[2023-04-12T16:26:34Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Getting AWS secret
[2023-04-12T16:26:34Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Creating AWS config
[2023-04-12T16:26:34Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Writing cluster's kubeconfig to /local/cluster.kubeconfig
2023-04-12 16:26:34 [!]  failed to determine authenticator version, leaving API version as default v1alpha1: failed to parse versions: unable to parse first version "unversioned": Invalid character(s) found in major number "unversioned"
2023-04-12 16:26:34 [✔]  saved kubeconfig as "/local/cluster.kubeconfig"
[2023-04-12T16:26:34Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Getting the AWS account id
[2023-04-12T16:26:34Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Using account '334716814390'
[2023-04-12T16:26:34Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Checking for KarpenterInstanceNodeRole
[2023-04-12T16:26:34Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] KarpenterInstanceProfile already exists
[2023-04-12T16:26:34Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Checking for KarpenterControllerPolicy
[2023-04-12T16:26:35Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] KarpenterControllerPolicy already exists
[2023-04-12T16:26:35Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Adding associate-iam-oidc-provider to k-test-3
2023-04-12 16:26:35 [ℹ]  IAM Open ID Connect provider is already associated with cluster "k-test-3" in "us-west-2"
[2023-04-12T16:26:35Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Creating iamserviceaccount for k-test-3
2023-04-12 16:26:36 [ℹ]  1 existing iamserviceaccount(s) (karpenter/karpenter) will be excluded
2023-04-12 16:26:36 [ℹ]  1 iamserviceaccount (karpenter/karpenter) was excluded (based on the include/exclude rules)
2023-04-12 16:26:36 [!]  serviceaccounts in Kubernetes will not be created or modified, since the option --role-only is used
2023-04-12 16:26:36 [ℹ]  no tasks
[2023-04-12T16:26:36Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Adding Karpenter tags to subnets: [
        "subnet-020f0ec3b9012ed86",
        "subnet-0931f1fc97ea86d3a",
        "subnet-0cc67467d6b268299",
    ]
[2023-04-12T16:26:36Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Adding Karpenter tags to security group: [
        "sg-06544ccea7c4f3264",
    ]
[2023-04-12T16:26:36Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Creating K8s Client from cluster kubeconfig
[2023-04-12T16:26:36Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Creating iamidentitymapping for KarpenterInstanceNodeRole
2023-04-12 16:26:36 [ℹ]  checking arn arn:aws:iam::{account_id}:role/KarpenterInstanceNodeRole against entries in the auth ConfigMap
2023-04-12 16:26:36 [!]  found existing mappings with same arn "arn:aws:iam::{account_id}:role/KarpenterInstanceNodeRole" (which will be shadowed by your new mapping)
2023-04-12 16:26:36 [ℹ]  adding identity "arn:aws:iam::{account_id}:role/KarpenterInstanceNodeRole" to auth ConfigMap
[2023-04-12T16:26:36Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Creating tainted managed nodegroup
2023-04-12 16:26:36 [ℹ]  will use version 1.24 for new nodegroup(s) based on control plane version
2023-04-12 16:26:37 [ℹ]  nodegroup "tainted-nodegroup" will use "" [AmazonLinux2/1.24]
2023-04-12 16:26:38 [ℹ]  1 existing nodegroup(s) (ng-01ee2b5d) will be excluded
2023-04-12 16:26:38 [ℹ]  1 nodegroup (tainted-nodegroup) was included (based on the include/exclude rules)
2023-04-12 16:26:38 [ℹ]  will create a CloudFormation stack for each of 1 managed nodegroups in cluster "k-test-3"
2023-04-12 16:26:38 [ℹ]  
2 sequential tasks: { fix cluster compatibility, 1 task: { 1 task: { create managed nodegroup "tainted-nodegroup" } } 
}
2023-04-12 16:26:38 [ℹ]  checking cluster stack for missing resources
2023-04-12 16:26:38 [ℹ]  cluster stack has all required resources
2023-04-12 16:26:38 [ℹ]  building managed nodegroup stack "eksctl-k-test-3-nodegroup-tainted-nodegroup"
2023-04-12 16:26:39 [ℹ]  deploying stack "eksctl-k-test-3-nodegroup-tainted-nodegroup"
2023-04-12 16:26:39 [ℹ]  waiting for CloudFormation stack "eksctl-k-test-3-nodegroup-tainted-nodegroup"
2023-04-12 16:27:09 [ℹ]  waiting for CloudFormation stack "eksctl-k-test-3-nodegroup-tainted-nodegroup"
2023-04-12 16:28:03 [ℹ]  waiting for CloudFormation stack "eksctl-k-test-3-nodegroup-tainted-nodegroup"
2023-04-12 16:29:37 [ℹ]  waiting for CloudFormation stack "eksctl-k-test-3-nodegroup-tainted-nodegroup"
2023-04-12 16:29:37 [ℹ]  no tasks
2023-04-12 16:29:37 [✔]  created 0 nodegroup(s) in cluster "k-test-3"
2023-04-12 16:29:37 [ℹ]  nodegroup "tainted-nodegroup" has 2 node(s)
2023-04-12 16:29:37 [ℹ]  node "ip-192-168-48-209.us-west-2.compute.internal" is ready
2023-04-12 16:29:37 [ℹ]  node "ip-192-168-74-244.us-west-2.compute.internal" is ready
2023-04-12 16:29:37 [ℹ]  waiting for at least 2 node(s) to become ready in "tainted-nodegroup"
2023-04-12 16:29:37 [ℹ]  nodegroup "tainted-nodegroup" has 2 node(s)
2023-04-12 16:29:37 [ℹ]  node "ip-192-168-48-209.us-west-2.compute.internal" is ready
2023-04-12 16:29:37 [ℹ]  node "ip-192-168-74-244.us-west-2.compute.internal" is ready
2023-04-12 16:29:37 [✔]  created 1 managed nodegroup(s) in cluster "k-test-3"
2023-04-12 16:29:37 [ℹ]  checking security group configuration for all nodegroups
2023-04-12 16:29:37 [ℹ]  all nodegroups have up-to-date cloudformation templates
[2023-04-12T16:29:37Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Applying node taint and scaling nodegroup
[2023-04-12T16:29:38Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Creating helm template file
history.go:56: [debug] getting history for release karpenter
install.go:178: [debug] Original chart version: "v0.27.1"
Release "karpenter" does not exist. Installing it now.
install.go:195: [debug] CHART PATH: /root/.cache/helm/repository/karpenter-v0.27.1.tgz

client.go:128: [debug] creating 1 resource(s)
install.go:151: [debug] CRD awsnodetemplates.karpenter.k8s.aws is already present. Skipping.
client.go:128: [debug] creating 1 resource(s)
install.go:151: [debug] CRD provisioners.karpenter.sh is already present. Skipping.
client.go:128: [debug] creating 1 resource(s)
client.go:128: [debug] creating 20 resource(s)
wait.go:48: [debug] beginning wait for 20 resources with timeout of 5m0s
ready.go:277: [debug] Deployment is not ready: karpenter/karpenter. 0 out of 1 expected pods are ready
ready.go:277: [debug] Deployment is not ready: karpenter/karpenter. 0 out of 1 expected pods are ready
ready.go:277: [debug] Deployment is not ready: karpenter/karpenter. 0 out of 1 expected pods are ready
NAME: karpenter
LAST DEPLOYED: Wed Apr 12 16:29:39 2023
NAMESPACE: karpenter
STATUS: deployed
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::334716814390:role/KarpenterControllerRole-k-test-3
settings:
  aws:
    clusterEndpoint: https://A7384B60BE8385F406CB54C5A3A84402.gr7.us-west-2.eks.amazonaws.com
    clusterName: k-test-3
    defaultInstanceProfile: KarpenterInstanceProfile

COMPUTED VALUES:
additionalAnnotations: {}
additionalClusterRoleRules: []
additionalLabels: {}
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: karpenter.sh/provisioner-name
          operator: DoesNotExist
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app.kubernetes.io/instance: karpenter
          app.kubernetes.io/name: karpenter
      topologyKey: kubernetes.io/hostname
controller:
  env: []
  envFrom: []
  errorOutputPaths:
  - stderr
  extraVolumeMounts: []
  healthProbe:
    port: 8081
  image:
    digest: sha256:aa4c1b1dca9d928e4a04f63680ae14cc912bc1b02146e0d3d36c8cd873bbc1e9
    repository: public.ecr.aws/karpenter/controller
    tag: v0.27.1
  logEncoding: ""
  logLevel: ""
  metrics:
    port: 8080
  outputPaths:
  - stdout
  resources: {}
  securityContext: {}
  sidecarContainer: []
  sidecarVolumeMounts: []
dnsConfig: {}
dnsPolicy: Default
extraObjects: []
extraVolumes: []
fullnameOverride: ""
hostNetwork: false
imagePullPolicy: IfNotPresent
imagePullSecrets: []
logEncoding: console
logLevel: debug
nameOverride: ""
nodeSelector:
  kubernetes.io/os: linux
podAnnotations: {}
podDisruptionBudget:
  maxUnavailable: 1
  name: karpenter
podLabels: {}
podSecurityContext:
  fsGroup: 1000
priorityClassName: system-cluster-critical
replicas: 2
revisionHistoryLimit: 10
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::334716814390:role/KarpenterControllerRole-k-test-3
  create: true
  name: ""
serviceMonitor:
  additionalLabels: {}
  enabled: false
  endpointConfig: {}
settings:
  aws:
    clusterEndpoint: https://A7384B60BE8385F406CB54C5A3A84402.gr7.us-west-2.eks.amazonaws.com
    clusterName: k-test-3
    defaultInstanceProfile: KarpenterInstanceProfile
    enableENILimitedPodDensity: true
    enablePodENI: false
    interruptionQueueName: ""
    isolatedVPC: false
    nodeNameConvention: ip-name
    tags: null
    vmMemoryOverheadPercent: 0.075
  batchIdleDuration: 1s
  batchMaxDuration: 10s
  featureGates:
    driftEnabled: false
strategy:
  rollingUpdate:
    maxUnavailable: 1
terminationGracePeriodSeconds: null
tolerations:
- key: CriticalAddonsOnly
  operator: Exists
topologySpreadConstraints:
- labelSelector:
    matchLabels:
      app.kubernetes.io/instance: karpenter
      app.kubernetes.io/name: karpenter
  maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: ScheduleAnyway
webhook:
  logLevel: error
  port: 8443

HOOKS:
MANIFEST:
---
# Source: karpenter/templates/poddisruptionbudget.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: karpenter
  namespace: karpenter
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: karpenter
      app.kubernetes.io/instance: karpenter
---
# Source: karpenter/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: karpenter
  namespace: karpenter
  labels:
    helm.sh/chart: karpenter-v0.27.1
    app.kubernetes.io/name: karpenter
    app.kubernetes.io/instance: karpenter
    app.kubernetes.io/version: "0.27.1"
    app.kubernetes.io/managed-by: Helm
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::334716814390:role/KarpenterControllerRole-k-test-3
---
# Source: karpenter/templates/secret-webhook-cert.yaml
apiVersion: v1
kind: Secret
metadata:
  name: karpenter-cert
  namespace: karpenter
  labels:
    helm.sh/chart: karpenter-v0.27.1
    app.kubernetes.io/name: karpenter
    app.kubernetes.io/instance: karpenter
    app.kubernetes.io/version: "0.27.1"
    app.kubernetes.io/managed-by: Helm
data: {} # Injected by karpenter-webhook
---
# Source: karpenter/templates/configmap-logging.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-logging
  namespace: karpenter
  labels:
    helm.sh/chart: karpenter-v0.27.1
    app.kubernetes.io/name: karpenter
    app.kubernetes.io/instance: karpenter
    app.kubernetes.io/version: "0.27.1"
    app.kubernetes.io/managed-by: Helm
data:
  # https://github.com/uber-go/zap/blob/aa3e73ec0896f8b066ddf668597a02f89628ee50/config.go
  zap-logger-config: |
    {
      "level": "debug",
      "development": false,
      "disableStacktrace": true,
      "disableCaller": true,
      "sampling": {
        "initial": 100,
        "thereafter": 100
      },
      "outputPaths": ["stdout"],
      "errorOutputPaths": ["stderr"],
      "encoding": "console",
      "encoderConfig": {
        "timeKey": "time",
        "levelKey": "level",
        "nameKey": "logger",
        "callerKey": "caller",
        "messageKey": "message",
        "stacktraceKey": "stacktrace",
        "levelEncoder": "capital",
        "timeEncoder": "iso8601"
      }
    }
  loglevel.webhook: "error"
---
# Source: karpenter/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: karpenter-global-settings
  namespace: karpenter
  labels:
    helm.sh/chart: karpenter-v0.27.1
    app.kubernetes.io/name: karpenter
    app.kubernetes.io/instance: karpenter
    app.kubernetes.io/version: "0.27.1"
    app.kubernetes.io/managed-by: Helm
data:  
    "aws.clusterEndpoint": "https://A7384B60BE8385F406CB54C5A3A84402.gr7.us-west-2.eks.amazonaws.com"
    "aws.clusterName": "k-test-3"
    "aws.defaultInstanceProfile": "KarpenterInstanceProfile"
    "aws.enableENILimitedPodDensity": "true"
    "aws.enablePodENI": "false"
    "aws.interruptionQueueName": ""
    "aws.isolatedVPC": "false"
    "aws.nodeNameConvention": "ip-name"
    "aws.vmMemoryOverheadPercent": "0.075"
    "batchIdleDuration": "1s"
    "batchMaxDuration": "10s"
    "featureGates.driftEnabled": "false"
---
# Source: karpenter/templates/aggregate-clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: karpenter-admin
  labels:
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    helm.sh/chart: karpenter-v0.27.1
    app.kubernetes.io/name: karpenter
    app.kubernetes.io/instance: karpenter
    app.kubernetes.io/version: "0.27.1"
    app.kubernetes.io/managed-by: Helm
rules:
  - apiGroups: ["karpenter.sh"]
    resources: ["provisioners", "provisioners/status"]
    verbs: ["get", "list", "watch", "create", "delete", "patch"]
  - apiGroups: ["karpenter.k8s.aws"]
    resources: ["awsnodetemplates"]
    verbs: ["get", "list", "watch", "create", "delete", "patch"]
---
# Source: karpenter/templates/clusterrole-core.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: karpenter-core
  labels:
    helm.sh/chart: karpenter-v0.27.1
    app.kubernetes.io/name: karpenter
    app.kubernetes.io/instance: karpenter
    app.kubernetes.io/version: "0.27.1"
    app.kubernetes.io/managed-by: Helm
rules:
  # Read
  - apiGroups: ["karpenter.sh"]
    resources: ["provisioners", "provisioners/status", "machines", "machines/status"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods", "nodes", "persistentvolumes", "persistentvolumeclaims", "replicationcontrollers", "namespaces"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses", "csinodes"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["apps"]
    resources: ["daemonsets", "deployments", "replicasets", "statefulsets"]
    verbs: ["list", "watch"]
  - apiGroups: ["admissionregistration.k8s.io"]
    resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
    verbs: ["get", "watch", "list"]
  - apiGroups: [ "policy" ]
    resources: [ "poddisruptionbudgets" ]
    verbs: [ "get", "list", "watch" ]
  # Write
  - apiGroups: ["karpenter.sh"]
    resources: ["provisioners/status", "machines", "machines/status"]
    verbs: ["create", "delete", "patch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["create", "patch", "delete"]
  - apiGroups: [""]
    resources: ["pods/eviction"]
    verbs: ["create"]
  - apiGroups: ["admissionregistration.k8s.io"]
    resources: ["validatingwebhookconfigurations"]
    verbs: ["update"]
    resourceNames: ["validation.webhook.karpenter.sh", "validation.webhook.config.karpenter.sh"]
  - apiGroups: ["admissionregistration.k8s.io"]
    resources: ["mutatingwebhookconfigurations"]
    verbs: ["update"]
    resourceNames: ["defaulting.webhook.karpenter.sh"]
---
# Source: karpenter/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: karpenter
  labels:
    helm.sh/chart: karpenter-v0.27.1
    app.kubernetes.io/name: karpenter
    app.kubernetes.io/instance: karpenter
    app.kubernetes.io/version: "0.27.1"
    app.kubernetes.io/managed-by: Helm
rules:
  # Read
  - apiGroups: ["karpenter.k8s.aws"]
    resources: ["awsnodetemplates"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["admissionregistration.k8s.io"]
    resources: ["validatingwebhookconfigurations"]
    verbs: ["update"]
    resourceNames: ["validation.webhook.karpenter.k8s.aws"]
  - apiGroups: ["admissionregistration.k8s.io"]
    resources: ["mutatingwebhookconfigurations"]
    verbs: ["update"]
    resourceNames: ["defaulting.webhook.karpenter.k8s.aws"]
  # Write
  - apiGroups: ["karpenter.k8s.aws"]
    resources: ["awsnodetemplates/status"]
    verbs: ["patch", "update"]
---
# Source: karpenter/templates/clusterrole-core.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: karpenter-core
  labels:
    helm.sh/chart: karpenter-v0.27.1
    app.kubernetes.io/name: karpenter
    app.kubernetes.io/instance: karpenter
    app.kubernetes.io/version: "0.27.1"
    app.kubernetes.io/managed-by: Helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: karpenter-core
subjects:
  - kind: ServiceAccount
    name: karpenter
    namespace: karpenter
---
# Source: karpenter/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: karpenter
  labels:
    helm.sh/chart: karpenter-v0.27.1
    app.kubernetes.io/name: karpenter
    app.kubernetes.io/instance: karpenter
    app.kubernetes.io/version: "0.27.1"
    app.kubernetes.io/managed-by: Helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: karpenter
subjects:
  - kind: ServiceAccount
    name: karpenter
    namespace: karpenter
---
# Source: karpenter/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: karpenter
  namespace: karpenter
  labels:
    helm.sh/chart: karpenter-v0.27.1
    app.kubernetes.io/name: karpenter
    app.kubernetes.io/instance: karpenter
    app.kubernetes.io/version: "0.27.1"
    app.kubernetes.io/managed-by: Helm
rules:
  # Read
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "watch"]
  - apiGroups: [""]
    resources: ["configmaps", "namespaces", "secrets"]
    verbs: ["get", "list", "watch"]
  # Write
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["update"]
    resourceNames: ["karpenter-cert"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["update", "patch", "delete"]
    resourceNames:
      - karpenter-global-settings
      - config-logging
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["patch", "update"]
    resourceNames:
      - "karpenter-leader-election"
      - "webhook.configmapwebhook.00-of-01"
      - "webhook.defaultingwebhook.00-of-01"
      - "webhook.validationwebhook.00-of-01"
      - "webhook.webhookcertificates.00-of-01"
  # Cannot specify resourceNames on create
  # https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create"]
---
# Source: karpenter/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: karpenter-dns
  namespace: kube-system
  labels:
    helm.sh/chart: karpenter-v0.27.1
    app.kubernetes.io/name: karpenter
    app.kubernetes.io/instance: karpenter
    app.kubernetes.io/version: "0.27.1"
    app.kubernetes.io/managed-by: Helm
rules:
  # Read
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns"]
    verbs: ["get"]
---
# Source: karpenter/templates/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: karpenter
  namespace: karpenter
  labels:
    helm.sh/chart: karpenter-v0.27.1
    app.kubernetes.io/name: karpenter
    app.kubernetes.io/instance: karpenter
    app.kubernetes.io/version: "0.27.1"
    app.kubernetes.io/managed-by: Helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: karpenter
subjects:
  - kind: ServiceAccount
    name: karpenter
    namespace: karpenter
---
# Source: karpenter/templates/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: karpenter-dns
  namespace: kube-system
  labels:
    helm.sh/chart: karpenter-v0.27.1
    app.kubernetes.io/name: karpenter
    app.kubernetes.io/instance: karpenter
    app.kubernetes.io/version: "0.27.1"
    app.kubernetes.io/managed-by: Helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: karpenter-dns
subjects:
  - kind: ServiceAccount
    name: karpenter
    namespace: karpenter
---
# Source: karpenter/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: karpenter
  namespace: karpenter
  labels:
    helm.sh/chart: karpenter-v0.27.1
    app.kubernetes.io/name: karpenter
    app.kubernetes.io/instance: karpenter
    app.kubernetes.io/version: "0.27.1"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - name: http-metrics
      port: 8080
      targetPort: http-metrics
      protocol: TCP
    - name: https-webhook
      port: 443
      targetPort: https-webhook
      protocol: TCP
  selector:
    app.kubernetes.io/name: karpenter
    app.kubernetes.io/instance: karpenter
---
# Source: karpenter/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: karpenter
  namespace: karpenter
  labels:
    helm.sh/chart: karpenter-v0.27.1
    app.kubernetes.io/name: karpenter
    app.kubernetes.io/instance: karpenter
    app.kubernetes.io/version: "0.27.1"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 2
  revisionHistoryLimit: 10
  strategy:
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: karpenter
      app.kubernetes.io/instance: karpenter
  template:
    metadata:
      labels:
        app.kubernetes.io/name: karpenter
        app.kubernetes.io/instance: karpenter
      annotations:
        checksum/settings: 9072d07c9b9d9850d38abcab0ed20c3bf610a940f024d5d5575f1ce92486f4cd
    spec:
      serviceAccountName: karpenter
      securityContext:
        fsGroup: 1000
      priorityClassName: "system-cluster-critical"
      dnsPolicy: Default
      containers:
        - name: controller
          image: public.ecr.aws/karpenter/controller:v0.27.1@sha256:aa4c1b1dca9d928e4a04f63680ae14cc912bc1b02146e0d3d36c8cd873bbc1e9
          imagePullPolicy: IfNotPresent
          env:
            - name: KUBERNETES_MIN_VERSION
              value: "1.19.0-0"
            - name: KARPENTER_SERVICE
              value: karpenter
            - name: WEBHOOK_PORT
              value: "8443"
            - name: METRICS_PORT
              value: "8080"
            - name: HEALTH_PROBE_PORT
              value: "8081"
            - name: SYSTEM_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MEMORY_LIMIT
              valueFrom:
                resourceFieldRef:
                  containerName: controller
                  divisor: "0"
                  resource: limits.memory
          ports:
            - name: http-metrics
              containerPort: 8080
              protocol: TCP
            - name: http
              containerPort: 8081
              protocol: TCP
            - name: https-webhook
              containerPort: 8443
              protocol: TCP
          livenessProbe:
            initialDelaySeconds: 30
            timeoutSeconds: 30
            httpGet:
              path: /healthz
              port: http
          readinessProbe:
            timeoutSeconds: 30
            httpGet:
              path: /readyz
              port: http
      nodeSelector:
        kubernetes.io/os: linux
      # The template below patches the .Values.affinity to add a default label selector where not specificed
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: karpenter.sh/provisioner-name
                operator: DoesNotExist
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app.kubernetes.io/instance: karpenter
                app.kubernetes.io/name: karpenter
            topologyKey: kubernetes.io/hostname
      # The template below patches the .Values.topologySpreadConstraints to add a default label selector where not specificed
      topologySpreadConstraints:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/instance: karpenter
              app.kubernetes.io/name: karpenter
          maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
      tolerations:
        - key: CriticalAddonsOnly
          operator: Exists
---
# Source: karpenter/templates/webhooks.yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: defaulting.webhook.karpenter.k8s.aws
  labels:
    helm.sh/chart: karpenter-v0.27.1
    app.kubernetes.io/name: karpenter
    app.kubernetes.io/instance: karpenter
    app.kubernetes.io/version: "0.27.1"
    app.kubernetes.io/managed-by: Helm
webhooks:
  - name: defaulting.webhook.karpenter.k8s.aws
    admissionReviewVersions: ["v1"]
    clientConfig:
      service:
        name: karpenter
        namespace: karpenter
    failurePolicy: Fail
    sideEffects: None
    rules:
      - apiGroups:
          - karpenter.k8s.aws
        apiVersions:
          - v1alpha1
        operations:
          - CREATE
          - UPDATE
        resources:
          - awsnodetemplates
          - awsnodetemplates/status
        scope: '*'
      - apiGroups:
          - karpenter.sh
        apiVersions:
          - v1alpha5
        resources:
          - provisioners
          - provisioners/status
        operations:
          - CREATE
          - UPDATE
---
# Source: karpenter/templates/webhooks-core.yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: validation.webhook.karpenter.sh
  labels:
    helm.sh/chart: karpenter-v0.27.1
    app.kubernetes.io/name: karpenter
    app.kubernetes.io/instance: karpenter
    app.kubernetes.io/version: "0.27.1"
    app.kubernetes.io/managed-by: Helm
webhooks:
  - name: validation.webhook.karpenter.sh
    admissionReviewVersions: ["v1"]
    clientConfig:
      service:
        name: karpenter
        namespace: karpenter
    failurePolicy: Fail
    sideEffects: None
    rules:
      - apiGroups:
          - karpenter.sh
        apiVersions:
          - v1alpha5
        resources:
          - provisioners
          - provisioners/status
        operations:
          - CREATE
          - UPDATE
---
# Source: karpenter/templates/webhooks-core.yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: validation.webhook.config.karpenter.sh
  labels:
    helm.sh/chart: karpenter-v0.27.1
    app.kubernetes.io/name: karpenter
    app.kubernetes.io/instance: karpenter
    app.kubernetes.io/version: "0.27.1"
    app.kubernetes.io/managed-by: Helm
webhooks:
  - name: validation.webhook.config.karpenter.sh
    admissionReviewVersions: ["v1"]
    clientConfig:
      service:
        name: karpenter
        namespace: karpenter
    failurePolicy: Fail
    sideEffects: None
    objectSelector:
      matchLabels:
        app.kubernetes.io/part-of: karpenter
---
# Source: karpenter/templates/webhooks.yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: validation.webhook.karpenter.k8s.aws
  labels:
    helm.sh/chart: karpenter-v0.27.1
    app.kubernetes.io/name: karpenter
    app.kubernetes.io/instance: karpenter
    app.kubernetes.io/version: "0.27.1"
    app.kubernetes.io/managed-by: Helm
webhooks:
  - name: validation.webhook.karpenter.k8s.aws
    admissionReviewVersions: ["v1"]
    clientConfig:
      service:
        name: karpenter
        namespace: karpenter
    failurePolicy: Fail
    sideEffects: None
    rules:
      - apiGroups:
          - karpenter.k8s.aws
        apiVersions:
          - v1alpha1
        operations:
          - CREATE
          - UPDATE
        resources:
          - awsnodetemplates
          - awsnodetemplates/status
        scope: '*'
      - apiGroups:
          - karpenter.sh
        apiVersions:
          - v1alpha5
        resources:
          - provisioners
          - provisioners/status
        operations:
          - CREATE
          - UPDATE

[2023-04-12T16:29:45Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Karpenter has been installed to the cluster. Creating EC2 provisioner
provisioner.karpenter.sh/default unchanged
awsnodetemplate.karpenter.k8s.aws/my-provider unchanged
[2023-04-12T16:29:46Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Creating deployment to scale karpenter nodes
[2023-04-12T16:29:46Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Waiting for new nodes to be created
[2023-04-12T16:29:46Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Checking for more than 2 nodes in the cluster
[2023-04-12T16:29:46Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Found '2' nodes
[2023-04-12T16:29:51Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Found '2' nodes
[2023-04-12T16:29:56Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Found '2' nodes
[2023-04-12T16:30:01Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Found '2' nodes
[2023-04-12T16:30:06Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Expected node count has been reached
[2023-04-12T16:30:06Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Waiting for tainted nodegroup to become active
[2023-04-12T16:30:06Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] The nodegroup 'tainted-nodegroup' is currently 'Updating'. Sleeping 5s
[2023-04-12T16:30:11Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] The nodegroup 'tainted-nodegroup' is currently 'Updating'. Sleeping 5s
[2023-04-12T16:30:16Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] The nodegroup 'tainted-nodegroup' is currently 'Updating'. Sleeping 5s
[2023-04-12T16:30:21Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] The nodegroup 'tainted-nodegroup' is currently 'Updating'. Sleeping 5s
[2023-04-12T16:30:27Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] The nodegroup 'tainted-nodegroup' is currently 'Updating'. Sleeping 5s
[2023-04-12T16:30:32Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] The nodegroup 'tainted-nodegroup' is currently 'Updating'. Sleeping 5s
[2023-04-12T16:30:37Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] The nodegroup 'tainted-nodegroup' is currently 'Updating'. Sleeping 5s
[2023-04-12T16:30:42Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] The nodegroup 'tainted-nodegroup' is currently 'Updating'. Sleeping 5s
[2023-04-12T16:30:47Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] The nodegroup 'tainted-nodegroup' is currently 'Updating'. Sleeping 5s
[2023-04-12T16:30:52Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] The nodegroup 'tainted-nodegroup' is currently 'Updating'. Sleeping 5s
[2023-04-12T16:30:57Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] The nodegroup 'tainted-nodegroup' is currently 'Updating'. Sleeping 5s
[2023-04-12T16:31:03Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] The nodegroup 'tainted-nodegroup' is currently 'Updating'. Sleeping 5s
[2023-04-12T16:31:08Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] The nodegroup 'tainted-nodegroup' is currently 'Updating'. Sleeping 5s
[2023-04-12T16:31:13Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] The nodegroup 'tainted-nodegroup' is currently 'Updating'. Sleeping 5s
[2023-04-12T16:31:18Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] The nodegroup 'tainted-nodegroup' is now active
[2023-04-12T16:31:18Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Making tainted nodegroup unschedulable
[2023-04-12T16:31:18Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Waiting for nodes to be tainted
[2023-04-12T16:31:18Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Checking for tainted nodes in the cluster
[2023-04-12T16:31:18Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] '0' of '2' nodes have the sonobuoy=ignore:NoSchedule taint. Sleeping 5s
[2023-04-12T16:31:23Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Checking for tainted nodes in the cluster
[2023-04-12T16:31:23Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] '0' of '2' nodes have the sonobuoy=ignore:NoSchedule taint. Sleeping 5s
[2023-04-12T16:31:28Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Checking for tainted nodes in the cluster
[2023-04-12T16:31:28Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] '0' of '2' nodes have the sonobuoy=ignore:NoSchedule taint. Sleeping 5s
[2023-04-12T16:31:33Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Checking for tainted nodes in the cluster
[2023-04-12T16:31:33Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] '0' of '2' nodes have the sonobuoy=ignore:NoSchedule taint. Sleeping 5s
[2023-04-12T16:31:38Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Checking for tainted nodes in the cluster
[2023-04-12T16:31:38Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] '0' of '2' nodes have the sonobuoy=ignore:NoSchedule taint. Sleeping 5s
[2023-04-12T16:31:43Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Checking for tainted nodes in the cluster
[2023-04-12T16:31:43Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] '0' of '2' nodes have the sonobuoy=ignore:NoSchedule taint. Sleeping 5s
[2023-04-12T16:31:48Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Checking for tainted nodes in the cluster
[2023-04-12T16:31:48Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] '0' of '2' nodes have the sonobuoy=ignore:NoSchedule taint. Sleeping 5s
[2023-04-12T16:31:53Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Checking for tainted nodes in the cluster
[2023-04-12T16:31:53Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] '0' of '2' nodes have the sonobuoy=ignore:NoSchedule taint. Sleeping 5s
[2023-04-12T16:31:58Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Checking for tainted nodes in the cluster
[2023-04-12T16:31:58Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] '0' of '2' nodes have the sonobuoy=ignore:NoSchedule taint. Sleeping 5s
[2023-04-12T16:32:03Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Checking for tainted nodes in the cluster
[2023-04-12T16:32:03Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] '0' of '2' nodes have the sonobuoy=ignore:NoSchedule taint. Sleeping 5s
[2023-04-12T16:32:08Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Checking for tainted nodes in the cluster
[2023-04-12T16:32:08Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] '0' of '2' nodes have the sonobuoy=ignore:NoSchedule taint. Sleeping 5s
[2023-04-12T16:32:13Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Checking for tainted nodes in the cluster
[2023-04-12T16:32:14Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] '0' of '2' nodes have the sonobuoy=ignore:NoSchedule taint. Sleeping 5s
[2023-04-12T16:32:19Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Checking for tainted nodes in the cluster
[2023-04-12T16:32:19Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] '0' of '2' nodes have the sonobuoy=ignore:NoSchedule taint. Sleeping 5s
[2023-04-12T16:32:24Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Checking for tainted nodes in the cluster
[2023-04-12T16:32:24Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] '0' of '2' nodes have the sonobuoy=ignore:NoSchedule taint. Sleeping 5s
[2023-04-12T16:32:29Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] Checking for tainted nodes in the cluster
[2023-04-12T16:32:29Z INFO  ec2_karpenter_resource_agent::ec2_karpenter_provider] All nodes have the new taint
[2023-04-12T16:32:29Z INFO  resource_agent::agent] Resource action succeeded.
[2023-04-12T16:32:29Z INFO  resource_agent::agent] 'keep_running' is true.
Terms of contribution:

By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

@ecpullen ecpullen requested review from webern and jpmcb April 6, 2023 17:52
@ecpullen ecpullen force-pushed the karpenter branch 2 times, most recently from 95d65a4 to 0651a00 Compare April 6, 2023 18:13
pub struct Ec2KarpenterDestroyer {}

#[async_trait::async_trait]
impl Destroy for Ec2KarpenterDestroyer {
Member:
Does this clean up everything? Or does it leave some things behind?

ecpullen (Contributor, Author):
It cleans up all of the conflicting things. You can run this agent with the regular EC2 agent and everything works. In the future work section (which will become new issues) I've included improving cleanup.

@@ -77,6 +77,7 @@ where
.args(k8s_image_arg)
.args(e2e_repo_arg)
.args(sonobuoy_image_arg)
.arg("--plugin-env=e2e.E2E_EXTRA_ARGS=--non-blocking-taints=sonobuoy")
Member:
Interesting. What does this do?

ecpullen (Contributor, Author):
For normal sonobuoy testing, nothing. For Karpenter testing it allows nodes in the cluster to have the NoSchedule taint effect. The taint prevents sonobuoy pods from being scheduled on the tainted AL node group's nodes.

Dockerfile (outdated):
RUN yum install -y git make tar \
&& curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 \
&& chmod +x get_helm.sh && ./get_helm.sh --version v3.8.2
# Copy eksctl
Member:
We should check a sha, or create an issue where we need to check shas for all of the things we download from various places in this dockerfile.

ecpullen (Contributor, Author):
I am working on fixing that.

@ecpullen ecpullen force-pushed the karpenter branch 2 times, most recently from f6a0ac2 to 13b4a27 Compare April 10, 2023 19:43
@ecpullen ecpullen force-pushed the karpenter branch 3 times, most recently from ad825c6 to c118c1c Compare April 12, 2023 16:18
@stmcginnis (Contributor) left a comment:
Looks good to me!

Dockerfile (outdated):
# Builds the EC2 karpenter resource agent image
FROM public.ecr.aws/amazonlinux/amazonlinux:2 as ec2-karpenter-resource-agent

RUN yum install -y git make tar
Contributor:
If you do an update, you can keep the image size a little smaller if you add && yum -y clean all && rm -fr /var/cache to the end here.

Member:
Suggested change
RUN yum install -y git make tar
RUN yum install -y git-core make tar

You can also save some space with the git core sub-package.

@ecpullen ecpullen force-pushed the karpenter branch 4 times, most recently from e081caf to 2e0f246 Compare April 17, 2023 19:31
@ecpullen (Contributor, Author):

2e0f246 -> Add block mapping configuration and instance type configuration. Fix issue where arm amis couldn't be launched.
4a98b35 -> Rebase to develop

@ecpullen ecpullen merged commit 4e24d48 into bottlerocket-os:develop Apr 17, 2023
4 checks passed
@ecpullen ecpullen deleted the karpenter branch April 17, 2023 20:03