Support Kubernetes 1.25 - migrate off deprecated items #34479
Comments
#34240 is critical
Autoscaling v2 - Policy v1
Once fixed revert #36996 (review)
Hi. Using version 1.14.2 of istioctl, running "istioctl manifest generate..." generates the HPA resource for istio-pilot with "apiVersion: autoscaling/v2beta1" but with the spec fields of "v2beta2".
For ingress/egress, the HPA is also generated with "v2beta1", but there the spec is correct.
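For context, a sketch (not the actual generated output) of the two resource-metric shapes involved: autoscaling/v2beta1 puts targetAverageUtilization directly under resource, while autoscaling/v2beta2 (and the stable autoscaling/v2) nest it under a target block.

# autoscaling/v2beta1 style (removed in Kubernetes 1.25)
metrics:
- type: Resource
  resource:
    name: cpu
    targetAverageUtilization: 80

# autoscaling/v2beta2 / autoscaling/v2 style
metrics:
- type: Resource
  resource:
    name: cpu
    target:
      type: Utilization
      averageUtilization: 80

An HPA declared as v2beta1 but carrying the second shape, as reported above, mixes the two schemas.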
Can you give more details?
This is the result for me.
It looks like this for us:
Same result whether we use the 1.14.1 or 1.14.2 version of istioctl to generate the manifest. We generate the manifests with: istioctl manifest generate -f istio-istioperatorinstall.yaml > istio-all-in-one.yaml. Maybe it is related to our version of Kubernetes? Somehow I feel the level of detail you were expecting was a bit higher. Please let me know what I could provide to give you the info you need.
What's in istio-istioperatorinstall.yaml? I think 'generate' doesn't use the cluster version.
On Fri, Jul 29, 2022 at 8:40 AM Sebastián Greco wrote:
It looks like this for us:
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
labels:
app: istiod
install.operator.istio.io/owning-resource: unknown
istio.io/rev: 1-14-2
operator.istio.io/component: Pilot
release: istio
name: istiod-1-14-2
namespace: istio-system
spec:
maxReplicas: 5
metrics:
- resource:
name: cpu
target:
averageUtilization: 80
type: Utilization
type: Resource
minReplicas: 3
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: istiod-1-14-2
Same result either we use 1.14.1 or 1.14.2 version of istioctl to generate
the manifest. We generate the manifests with: istioctl manifest generate
-f istio-istioperatorinstall.yaml > istio-all-in-one.yaml
Maybe it is related to our version of kubernetes? Somehow I feel the level
of details you were expecting was a bit bigger. Please let me know what
things I could provide to help you have the info you need.
We tried testing the generate in other clusters with higher versions (up to 1.22.9-gke.1300 is as high as we were able to go) with the same results :/ Below is the not-so-short IstioOperator manifest.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
annotations:
name: istiocontrolplane-1-14-2
namespace: istio-system
spec:
hub: docker.io/istio
tag: 1.14.2
profile: default
revision: 1-14-2
components:
base:
enabled: true
cni:
enabled: false
egressGateways:
- enabled: true
k8s:
hpaSpec:
maxReplicas: 5
metrics:
- resource:
name: cpu
targetAverageUtilization: 80
type: Resource
minReplicas: 3
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: istio-egressgateway
resources:
requests:
cpu: 200m
memory: 40Mi
limits:
cpu: 2000m
memory: 2000Mi
name: istio-egressgateway
ingressGateways:
- enabled: true
k8s:
resources:
limits:
cpu: 2000m
memory: 4096Mi
requests:
cpu: 200m
memory: 128Mi
service:
type: ClusterIP
ports:
- name: status-port
port: 15021
targetPort: 15021
- name: http2
port: 80
targetPort: 8080
- name: https2
port: 443
targetPort: 8443
- name: https
port: 444
targetPort: 8444
- name: tls
port: 15443
targetPort: 15443
label:
app: istio-ingressgateway-kubeflow
istio: ingressgateway-kubeflow
name: istio-ingressgateway-kubeflow
- enabled: true
k8s:
resources:
limits:
cpu: 2000m
memory: 4096Mi
requests:
cpu: 200m
memory: 128Mi
service:
type: ClusterIP
ports:
- name: status-port
port: 15021
targetPort: 15021
- name: http2
port: 80
targetPort: 8080
- name: https2
port: 443
targetPort: 8443
- name: https
port: 444
targetPort: 8444
- name: tls
port: 15443
targetPort: 15443
label:
app: istio-ingressgateway-knative
istio: ingressgateway-knative
name: istio-ingressgateway-knative
- enabled: true
k8s:
hpaSpec:
maxReplicas: 5
metrics:
- resource:
name: cpu
targetAverageUtilization: 80
type: Resource
minReplicas: 3
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: istio-ingressgateway
resources:
limits:
cpu: 2000m
memory: 4096Mi
requests:
cpu: 200m
memory: 128Mi
service:
ports:
- name: status-port
port: 15021
targetPort: 15021
- name: http2
port: 80
targetPort: 8080
- name: https2
port: 443
targetPort: 8443
- name: https
port: 444
targetPort: 8444
- name: tls
port: 15443
targetPort: 15443
type: NodePort
name: istio-ingressgateway
pilot:
enabled: true
k8s:
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
readinessProbe:
httpGet:
path: /ready
port: 8080
initialDelaySeconds: 1
periodSeconds: 3
timeoutSeconds: 5
strategy:
rollingUpdate:
maxSurge: 100%
maxUnavailable: 25%
meshConfig:
defaultConfig:
holdApplicationUntilProxyStarts: true
tracing:
zipkin:
address: tracing-collector.istio-system:9411
accessLogEncoding: JSON
accessLogFile: /dev/stdout
accessLogFormat: |
{
"authority": "%REQ(:AUTHORITY)%",
"bytes_received": "%BYTES_RECEIVED%",
"bytes_sent": "%BYTES_SENT%",
"connection_termination_details": "%CONNECTION_TERMINATION_DETAILS%",
"downstream_local_address": "%DOWNSTREAM_LOCAL_ADDRESS%",
"downstream_remote_address": "%DOWNSTREAM_REMOTE_ADDRESS%",
"duration": "%DURATION%",
"method": "%REQ(:METHOD)%",
"path": "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%",
"protocol": "%PROTOCOL%",
"request_id": "%REQ(X-REQUEST-ID)%",
"requested_server_name": "%REQUESTED_SERVER_NAME%",
"response_code": "%RESPONSE_CODE%",
"response_code_details": "%RESPONSE_CODE_DETAILS%",
"response_flags": "%RESPONSE_FLAGS%",
"route_name": "%ROUTE_NAME%",
"start_time": "%START_TIME%",
"upstream_cluster": "%UPSTREAM_CLUSTER%",
"upstream_host": "%UPSTREAM_HOST%",
"upstream_local_address": "%UPSTREAM_LOCAL_ADDRESS%",
"upstream_service_time": "%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%",
"upstream_transport_failure_reason": "%UPSTREAM_TRANSPORT_FAILURE_REASON%",
"user_agent": "%REQ(USER-AGENT)%",
"x_forwarded_for": "%REQ(X-FORWARDED-FOR)%",
"traceID": "%REQ(x-b3-traceid)%"
}
enableAutoMtls: true
enableEnvoyAccessLogService: false
enablePrometheusMerge: true
enableTracing: true
outboundTrafficPolicy:
mode: ALLOW_ANY
extensionProviders:
- name: "oauth2-proxy"
envoyExtAuthzHttp:
service: "oauth2-proxy.istio-system.svc.cluster.local"
port: "80" # The default port used by oauth2-proxy.
includeHeadersInCheck: # headers sent to the oauth2-proxy in the check request.
# https://github.com/oauth2-proxy/oauth2-proxy/issues/350#issuecomment-576949334
- "cookie"
- "x-forwarded-access-token"
- "x-forwarded-user"
- "x-forwarded-email"
- "authorization"
- "x-forwarded-proto"
- "proxy-authorization"
- "user-agent"
- "x-forwarded-host"
- "from"
- "x-forwarded-for"
- "accept"
headersToUpstreamOnAllow: ["authorization", "path", "x-auth-request-user", "x-auth-request-email", "x-auth-request-access-token", "x-auth-request-user-groups"] # headers sent to backend application when request is allowed.
headersToDownstreamOnDeny: ["content-type", "set-cookie"] # headers sent back to the client when request is denied.
values:
gateways:
istio-egressgateway:
autoscaleEnabled: true
istio-ingressgateway:
autoscaleEnabled: true
global:
jwtPolicy: third-party-jwt
# sds:
# token:
# aud: https://kubernetes.default.svc.cluster.local,istio-ca
imagePullPolicy: IfNotPresent
logAsJson: true
proxy:
resources:
requests:
cpu: 10m
memory: 40Mi
# values.global.tracer.zipkin.address is deprecated; use meshConfig.defaultConfig.tracing.zipkin.address instead
# tracer:
# zipkin:
# address: tracing-collector.istio-system:9411
pilot:
autoscaleEnabled: true
autoscaleMax: 5
autoscaleMin: 3
configMap: true
configNamespace: istio-system
cpu:
targetAverageUtilization: 80
enableProtocolSniffingForInbound: true
enableProtocolSniffingForOutbound: true
env: {}
image: pilot
keepaliveMaxServerConnectionAge: 30m
nodeSelector: {}
tolerations: []
traceSampling: 100
sidecarInjectorWebhook:
enableNamespacesByDefault: false
neverInjectSelector:
- matchExpressions:
- key: application
operator: In
values:
- spilo
- spilo-logical-backup
objectSelector:
autoInject: true
enabled: false
rewriteAppHTTPProbe: true
telemetry:
enabled: true
v2:
enabled: true
metadataExchange:
wasmEnabled: false
prometheus:
enabled: true
wasmEnabled: false
stackdriver:
configOverride: {}
enabled: false
logging: false
monitoring: false
topology: false
That's a giant config! That being said, I can reproduce. Thanks, will investigate a fix. I believe we have some detection of legacy fields in the config, not based on the cluster version, which is why your config specifically causes it (for some TBD reason).
More minimal repro:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
annotations:
name: istiocontrolplane-1-14-2
namespace: istio-system
spec:
components:
egressGateways:
- enabled: true
k8s:
hpaSpec:
maxReplicas: 5
metrics:
- resource:
name: cpu
targetAverageUtilization: 80
name: istio-egressgateway
values:
pilot:
cpu:
targetAverageUtilization: 80
Something about the targetAverageUtilization mapping gets things confused.
I'd like to take a look at it.
Looks like the issue is that targetAverageUtilization gets mapped to the corresponding fields in v2beta2, but rendered on top of the old v2beta1 template. I'll fix it.
Hello, is there any workaround for this bug? Thanks so much :) I'm currently trying to install Istio 1.17.6 on Kubernetes 1.26 and have the same problem :(
@jitapichab We were able to work around the issue by removing the metrics section from the hpaSpec in our config.
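For illustration, a sketch of that workaround applied to the egress-gateway entry from the config above (presumably the chart's default CPU metric is used once the overlay no longer carries the deprecated field):

egressGateways:
- enabled: true
  name: istio-egressgateway
  k8s:
    hpaSpec:
      maxReplicas: 5
      minReplicas: 3
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: istio-egressgateway
      # metrics: block removed; it contained the deprecated
      # targetAverageUtilization field that triggered the bad rendering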
This version introduces a number of deprecations. We should attempt to migrate off these deprecated APIs early if possible, to avoid version-conflict pains.
This issue is tracking that effort; we need more investigation before we have a concrete plan.
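For the autoscaling piece specifically, autoscaling/v2beta1 is removed in Kubernetes 1.25, so the generated HPAs would need the stable autoscaling/v2 shape, roughly like this (names and values illustrative, not the actual chart output):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: istiod
  namespace: istio-system
spec:
  minReplicas: 1
  maxReplicas: 5
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: istiod
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80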