
Support Kubernetes 1.25 - migrate off deprecated items #34479

Closed
howardjohn opened this issue Aug 2, 2021 · 15 comments
Labels: area/environments, lifecycle/staleproof (immune from becoming stale and/or automatically closed)

@howardjohn (Member)

This version introduces a number of deprecations. We should attempt to migrate off these early if possible, to avoid version-conflict pains.

This issue tracks that effort; more investigation is needed before we have a concrete plan.

@howardjohn (Member Author)

#34240 is critical

@howardjohn (Member Author)

#32056

@istio-policy-bot istio-policy-bot added the lifecycle/stale Indicates a PR or issue hasn't been manipulated by an Istio team member for a while label Dec 23, 2021
@istio-policy-bot istio-policy-bot added the lifecycle/automatically-closed Indicates a PR or issue that has been closed automatically. label Jan 7, 2022
@howardjohn howardjohn added lifecycle/staleproof Indicates a PR or issue has been deemed to be immune from becoming stale and/or automatically closed and removed lifecycle/stale Indicates a PR or issue hasn't been manipulated by an Istio team member for a while lifecycle/automatically-closed Indicates a PR or issue that has been closed automatically. labels Jan 25, 2022
@howardjohn (Member Author)

Autoscaling v2: autoscaling/v2beta1 HorizontalPodAutoscaler is deprecated in v1.22+ and unavailable in v1.25+; autoscaling/v2 was added in 1.23 (kubernetes/kubernetes#102534). So we need dual-version support for sure.

Policy v1: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+ and unavailable in v1.25+; policy/v1 was added in 1.21 (kubernetes/kubernetes#99290). So we likely need dual-version support here as well.
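The field rename that makes dual-version HPA support painful can be seen by comparing the per-metric spec in the two API versions (a minimal sketch; the metric values are illustrative):

```yaml
# autoscaling/v2beta1: a flat per-metric field
metrics:
- type: Resource
  resource:
    name: cpu
    targetAverageUtilization: 80
---
# autoscaling/v2 (and v2beta2): the same setting moves under a "target" object
metrics:
- type: Resource
  resource:
    name: cpu
    target:
      type: Utilization
      averageUtilization: 80
```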

@howardjohn howardjohn reopened this Jan 25, 2022
@howardjohn (Member Author)

Once fixed, revert #36996 (review)

@pupseba commented Jul 29, 2022

Hi,

Using version 1.14.2 of istioctl, running "istioctl manifest generate ..." generates the HPA resource for istio-pilot with "apiVersion: v2beta1" but with the spec fields of "v2beta2":

error validating data: ValidationError(HorizontalPodAutoscaler.spec.metrics[0].resource): unknown field "target" in io.k8s.api.autoscaling.v2beta1.ResourceMetricSource

For ingress/egress, the HPA is also generated with "v2beta1", but it has the correct specs.

@howardjohn (Member Author)

> Using version 1.14.2 of istioctl, running "istioctl manifest generate ..." generates the HPA resource for istio-pilot with "apiVersion: v2beta1" but with the spec fields of "v2beta2":
>
> error validating data: ValidationError(HorizontalPodAutoscaler.spec.metrics[0].resource): unknown field "target" in io.k8s.api.autoscaling.v2beta1.ResourceMetricSource
>
> For ingress/egress, the HPA is also generated with "v2beta1", but it has the correct specs.

Can you give more details?

---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  labels:
    app: istio-ingressgateway
    istio: ingressgateway
    release: istio
    istio.io/rev: default
    install.operator.istio.io/owning-resource: unknown
    operator.istio.io/component: "IngressGateways"
spec:
  maxReplicas: 5
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: istio-ingressgateway
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: istiod
  namespace: istio-system
  labels:
    app: istiod
    release: istio
    istio.io/rev: default
    install.operator.istio.io/owning-resource: unknown
    operator.istio.io/component: "Pilot"
spec:
  maxReplicas: 5
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: istiod
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80

is the result I get.

@pupseba commented Jul 29, 2022

It looks like this for us:

---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  labels:
    app: istiod
    install.operator.istio.io/owning-resource: unknown
    istio.io/rev: 1-14-2
    operator.istio.io/component: Pilot
    release: istio
  name: istiod-1-14-2
  namespace: istio-system
spec:
  maxReplicas: 5
  metrics:
  - resource:
      name: cpu
      target:
        averageUtilization: 80
        type: Utilization
    type: Resource
  minReplicas: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: istiod-1-14-2

We get the same result whether we use istioctl 1.14.1 or 1.14.2 to generate the manifest. We generate the manifests with: istioctl manifest generate -f istio-istioperatorinstall.yaml > istio-all-in-one.yaml

Maybe it is related to our version of Kubernetes? I suspect you were expecting more detail than this; please let me know what I can provide to help you get the info you need.

@howardjohn (Member Author) commented Jul 29, 2022 via email

@pupseba commented Aug 1, 2022

We tried the generation in other clusters with higher versions (1.22.9-gke.1300 is as high as we were able to go) with the same results :/ Below is the not-so-short manifest.

IstioOperator
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  annotations:
  name: istiocontrolplane-1-14-2
  namespace: istio-system
spec:
  hub: docker.io/istio
  tag: 1.14.2
  profile: default
  revision: 1-14-2
  components:
    base:
      enabled: true
    cni:
      enabled: false
    egressGateways:
    - enabled: true
      k8s:
        hpaSpec:
          maxReplicas: 5
          metrics:
          - resource:
              name: cpu
              targetAverageUtilization: 80
            type: Resource
          minReplicas: 3
          scaleTargetRef:
            apiVersion: apps/v1
            kind: Deployment
            name: istio-egressgateway
        resources:
          requests:
            cpu: 200m
            memory: 40Mi
          limits:
            cpu: 2000m
            memory: 2000Mi
      name: istio-egressgateway
    ingressGateways:
    - enabled: true
      k8s:
        resources:
          limits:
            cpu: 2000m
            memory: 4096Mi
          requests:
            cpu: 200m
            memory: 128Mi
        service:
          type: ClusterIP
          ports:
          - name: status-port
            port: 15021
            targetPort: 15021
          - name: http2
            port: 80
            targetPort: 8080
          - name: https2
            port: 443
            targetPort: 8443
          - name: https
            port: 444
            targetPort: 8444
          - name: tls
            port: 15443
            targetPort: 15443
      label:
        app: istio-ingressgateway-kubeflow
        istio: ingressgateway-kubeflow
      name: istio-ingressgateway-kubeflow
    - enabled: true
      k8s:
        resources:
          limits:
            cpu: 2000m
            memory: 4096Mi
          requests:
            cpu: 200m
            memory: 128Mi
        service:
          type: ClusterIP
          ports:
          - name: status-port
            port: 15021
            targetPort: 15021
          - name: http2
            port: 80
            targetPort: 8080
          - name: https2
            port: 443
            targetPort: 8443
          - name: https
            port: 444
            targetPort: 8444
          - name: tls
            port: 15443
            targetPort: 15443
      label:
        app: istio-ingressgateway-knative
        istio: ingressgateway-knative
      name: istio-ingressgateway-knative
    - enabled: true
      k8s:
        hpaSpec:
          maxReplicas: 5
          metrics:
          - resource:
              name: cpu
              targetAverageUtilization: 80
            type: Resource
          minReplicas: 3
          scaleTargetRef:
            apiVersion: apps/v1
            kind: Deployment
            name: istio-ingressgateway
        resources:
          limits:
            cpu: 2000m
            memory: 4096Mi
          requests:
            cpu: 200m
            memory: 128Mi
        service:
          ports:
          - name: status-port
            port: 15021
            targetPort: 15021
          - name: http2
            port: 80
            targetPort: 8080
          - name: https2
            port: 443
            targetPort: 8443
          - name: https
            port: 444
            targetPort: 8444
          - name: tls
            port: 15443
            targetPort: 15443
          type: NodePort
      name: istio-ingressgateway
    pilot:
      enabled: true
      k8s:
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 1
          periodSeconds: 3
          timeoutSeconds: 5
        strategy:
          rollingUpdate:
            maxSurge: 100%
            maxUnavailable: 25%
  meshConfig:
    defaultConfig:
      holdApplicationUntilProxyStarts: true
      tracing:
        zipkin:
          address: tracing-collector.istio-system:9411
    accessLogEncoding: JSON
    accessLogFile: /dev/stdout
    accessLogFormat: |
      {
        "authority": "%REQ(:AUTHORITY)%",
        "bytes_received": "%BYTES_RECEIVED%",
        "bytes_sent": "%BYTES_SENT%",
        "connection_termination_details": "%CONNECTION_TERMINATION_DETAILS%",
        "downstream_local_address": "%DOWNSTREAM_LOCAL_ADDRESS%",
        "downstream_remote_address": "%DOWNSTREAM_REMOTE_ADDRESS%",
        "duration": "%DURATION%",
        "method": "%REQ(:METHOD)%",
        "path": "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%",
        "protocol": "%PROTOCOL%",
        "request_id": "%REQ(X-REQUEST-ID)%",
        "requested_server_name": "%REQUESTED_SERVER_NAME%",
        "response_code": "%RESPONSE_CODE%",
        "response_code_details": "%RESPONSE_CODE_DETAILS%",
        "response_flags": "%RESPONSE_FLAGS%",
        "route_name": "%ROUTE_NAME%",
        "start_time": "%START_TIME%",
        "upstream_cluster": "%UPSTREAM_CLUSTER%",
        "upstream_host": "%UPSTREAM_HOST%",
        "upstream_local_address": "%UPSTREAM_LOCAL_ADDRESS%",
        "upstream_service_time": "%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%",
        "upstream_transport_failure_reason": "%UPSTREAM_TRANSPORT_FAILURE_REASON%",
        "user_agent": "%REQ(USER-AGENT)%",
        "x_forwarded_for": "%REQ(X-FORWARDED-FOR)%",
        "traceID": "%REQ(x-b3-traceid)%"
      }
    enableAutoMtls: true
    enableEnvoyAccessLogService: false
    enablePrometheusMerge: true
    enableTracing: true
    outboundTrafficPolicy:
      mode: ALLOW_ANY
    extensionProviders: 
    - name: "oauth2-proxy"
      envoyExtAuthzHttp:
        service: "oauth2-proxy.istio-system.svc.cluster.local"
        port: "80" # The default port used by oauth2-proxy.
        includeHeadersInCheck: # headers sent to the oauth2-proxy in the check request.
            # https://github.com/oauth2-proxy/oauth2-proxy/issues/350#issuecomment-576949334
            - "cookie"
            - "x-forwarded-access-token"
            - "x-forwarded-user"
            - "x-forwarded-email"
            - "authorization"
            - "x-forwarded-proto"
            - "proxy-authorization"
            - "user-agent"
            - "x-forwarded-host"
            - "from"
            - "x-forwarded-for"
            - "accept"
        headersToUpstreamOnAllow: ["authorization", "path", "x-auth-request-user", "x-auth-request-email", "x-auth-request-access-token", "x-auth-request-user-groups"] # headers sent to backend application when request is allowed.
        headersToDownstreamOnDeny: ["content-type", "set-cookie"] # headers sent back to the client when request is denied.
  values:
    gateways:
      istio-egressgateway:
        autoscaleEnabled: true
      istio-ingressgateway:
        autoscaleEnabled: true
    global:
      jwtPolicy: third-party-jwt
      # sds:
      #   token:
      #     aud: https://kubernetes.default.svc.cluster.local,istio-ca
      imagePullPolicy: IfNotPresent
      logAsJson: true
      proxy:
        resources:
          requests:
            cpu: 10m
            memory: 40Mi
      # values.global.tracer.zipkin.address is deprecated; use meshConfig.defaultConfig.tracing.zipkin.address instead
      # tracer:
      #   zipkin:
      #     address: tracing-collector.istio-system:9411
    pilot:
      autoscaleEnabled: true
      autoscaleMax: 5
      autoscaleMin: 3    
      configMap: true
      configNamespace: istio-system
      cpu:
        targetAverageUtilization: 80
      enableProtocolSniffingForInbound: true
      enableProtocolSniffingForOutbound: true
      env: {}
      image: pilot
      keepaliveMaxServerConnectionAge: 30m
      nodeSelector: {}
      tolerations: []
      traceSampling: 100
    sidecarInjectorWebhook:
      enableNamespacesByDefault: false
      neverInjectSelector:
      - matchExpressions:
        - key: application
          operator: In
          values:
          - spilo
          - spilo-logical-backup
      objectSelector:
        autoInject: true
        enabled: false
      rewriteAppHTTPProbe: true
    telemetry:
      enabled: true
      v2:
        enabled: true
        metadataExchange:
          wasmEnabled: false
        prometheus:
          enabled: true
          wasmEnabled: false
        stackdriver:
          configOverride: {}
          enabled: false
          logging: false
          monitoring: false
          topology: false

@howardjohn (Member Author)

That's a giant config! That being said, I can reproduce. Thanks, will investigate a fix.

I believe we have some detection based on the use of legacy fields in the config, not on the cluster version, which is why your config specifically triggers it (for some TBD reason).

@howardjohn (Member Author)

More minimal repro:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  annotations:
  name: istiocontrolplane-1-14-2
  namespace: istio-system
spec:
  components:
    egressGateways:
    - enabled: true
      k8s:
        hpaSpec:
          maxReplicas: 5
          metrics:
          - resource:
              name: cpu
              targetAverageUtilization: 80
      name: istio-egressgateway
  values:
    pilot:
      cpu:
        targetAverageUtilization: 80

Something about the targetAverageUtilization mapping gets things confused.
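One way to sidestep the legacy-field detection (an untested assumption based on the repro above, not a confirmed fix) is to write the hpaSpec in the v2/v2beta2 shape from the start, so targetAverageUtilization never appears:

```yaml
hpaSpec:
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:                  # v2/v2beta2 form of targetAverageUtilization
        type: Utilization
        averageUtilization: 80
```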

@richardwxn richardwxn self-assigned this Aug 8, 2022
@richardwxn (Contributor)

I will take a look at it.

@richardwxn (Contributor)

Looks like the issue is that targetAverageUtilization gets mapped to the corresponding fields in v2beta2, but on top of the old v2beta1 template. I will fix it.

@jitapichab

Hello, is there any workaround for this bug?

Thanks so much :) I'm currently trying to install Istio 1.17.6 on Kubernetes 1.26 and have the same problem :(

@emerson-h

@jitapichab We were able to work around the issue by removing the metrics section from the hpaSpec in our config.
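For reference, that workaround amounts to dropping the metrics block from the hpaSpec; a minimal sketch based on the repro config earlier in the thread (names and values are illustrative):

```yaml
hpaSpec:
  maxReplicas: 5
  minReplicas: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: istio-egressgateway
  # no "metrics" section: this avoids the bad v2beta1/v2beta2 field mapping,
  # and Kubernetes applies its default CPU utilization target instead
```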

@keithmattix keithmattix reopened this Nov 21, 2023
@linsun linsun changed the title Support Kubernetes 1.25 Support Kubernetes 1.25 - migrate off deprecated items Jan 29, 2024