
Workload > Edit Settings > Container: the pre-filled image address defaults to the Docker Hub address #4129

Open
fffguo opened this issue May 18, 2023 · 2 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments


fffguo commented May 18, 2023

General remarks

This form is to report bugs. For general usage questions, refer to our Slack channel: KubeSphere-users.

Describe the bug
Workload > Edit Settings > Container: the image address that is pre-filled by default is the Docker Hub address. In reality I am using a private registry address, so every time I change even a small setting I have to manually replace the Docker Hub address again.
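A minimal illustration of the mismatch; the exact value the form pre-fills is assumed here (a Docker Hub style reference without the registry host) and is not confirmed in this report:

```yaml
# Image reference actually stored in the Deployment (private Nexus registry):
image: 'nexus.cynray.com:8082/cynray/cynray-micro-tool:1.0.0-SNAPSHOT'

# What the edit-settings form presumably pre-fills instead (assumed Docker Hub form),
# and what has to be corrected by hand before every save:
# image: 'cynray/cynray-micro-tool:1.0.0-SNAPSHOT'
```

The full Deployment manifest taken from the cluster: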

```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: cynray-micro-tool
  namespace: cynray-dev
  labels:
    app: cynray-micro-tool
    app.kubernetes.io/name: cynray
    app.kubernetes.io/version: v1
    argocd.argoproj.io/instance: cynray-micro-tool-dev
    version: v1
  annotations:
    deployment.kubernetes.io/revision: '8'
    gitea: 'http://git.cynray.com/cynray-backend/cynray-micro-tool.git'
    kubesphere.io/alias-name: 工具中心
    kubesphere.io/creator: argocd
    kubesphere.io/description: 后端-开发环境-工具中心
    servicemesh.kubesphere.io/enabled: 'true'
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cynray-micro-tool
      app.kubernetes.io/name: cynray
      app.kubernetes.io/version: v1
      version: v1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: cynray-micro-tool
        app.kubernetes.io/name: cynray
        app.kubernetes.io/version: v1
        version: v1
      annotations:
        cni.projectcalico.org/ipv4pools: '["default-ipv4-ippool"]'
        gitea: 'http://git.cynray.com/cynray-backend/cynray-micro-tool.git'
        kubesphere.io/alias-name: 工具中心
        kubesphere.io/creator: argocd
        kubesphere.io/description: 后端-开发环境-工具中心
        kubesphere.io/restartedAt: '2023-05-18T08:28:43.860Z'
        sidecar.istio.io/inject: 'true'
    spec:
      volumes:
        - name: host-time
          hostPath:
            path: /etc/localtime
            type: ''
        - name: config
          configMap:
            name: cynray-micro-tool-4tb79fcb8b
            items:
              - key: application.yml
                path: application.yml
            defaultMode: 420
      containers:
        - name: cynray-micro-tool
          image: 'nexus.cynray.com:8082/cynray/cynray-micro-tool:1.0.0-SNAPSHOT'
          ports:
            - name: tcp-20880
              containerPort: 20880
              protocol: TCP
            - name: tcp-9090
              containerPort: 9090
              protocol: TCP
            - name: tcp-9000
              containerPort: 9000
              protocol: TCP
            - name: tcp-8080
              containerPort: 8080
              protocol: TCP
          env:
            - name: JVM
              value: '-Xms64m -Xmx128m'
            - name: SPRING_LOCATION
              value: /home/config/application.yml
          resources: {}
          volumeMounts:
            - name: config
              readOnly: true
              mountPath: /home/config/application.yml
              subPath: application.yml
          livenessProbe:
            exec:
              command:
                - /bin/sh
                - '-c'
                - |-
                  curl localhost:8080/health/ping
                  grpcurl -plaintext localhost:20880 grpc.health.v1.Health/Check
            initialDelaySeconds: 60
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            exec:
              command:
                - /bin/sh
                - '-c'
                - |-
                  curl localhost:8080/health/ping
                  grpcurl -plaintext localhost:20880 grpc.health.v1.Health/Check
            initialDelaySeconds: 30
            timeoutSeconds: 1
            periodSeconds: 5
            successThreshold: 1
            failureThreshold: 30
          startupProbe:
            exec:
              command:
                - /bin/sh
                - '-c'
                - |-
                  nc -zv localhost 8080
                  nc -zv localhost 9090
                  nc -zv localhost 9000
                  nc -zv localhost 20880
            initialDelaySeconds: 30
            timeoutSeconds: 1
            periodSeconds: 5
            successThreshold: 1
            failureThreshold: 30
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      imagePullSecrets:
        - name: nexus-docker
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
```
[screenshot attached in the original issue]

Versions used (KubeSphere/Kubernetes)
KubeSphere:
Kubernetes: (if the KubeSphere installer was used, you can skip this)

Environment
How many nodes and their hardware configuration:

For example:
3 masters: 8cpu/8g
3 nodes: 8cpu/16g

(any other information that helps us debug is welcome)

To Reproduce
Steps to reproduce the behavior:

  1. Go to '...'
  2. Click on '....'
  3. Scroll down to '....'
  4. See error

Expected behavior
The edit form should pre-fill the image address exactly as it is stored in the workload, i.e. the private registry reference, instead of a Docker Hub address.

@fffguo added the kind/bug label on May 18, 2023
@VioZhang (Member)

Which version? This issue was fixed in 3.3.


fffguo commented Jul 28, 2023

> Which version? This issue was fixed in 3.3.

KubeSphere version: v3.3.2
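Until upgrading to a release that contains the fix, one possible workaround (a sketch under assumptions, not something suggested in this thread) is to apply small configuration changes as a strategic merge patch against the Deployment, so the console form never gets a chance to rewrite the image field. Containers and env entries are merged by name, so only the listed value changes:

```yaml
# patch.yaml -- hypothetical strategic merge patch; the image reference stays untouched.
# Apply with: kubectl patch deployment cynray-micro-tool -n cynray-dev --patch-file patch.yaml
spec:
  template:
    spec:
      containers:
        - name: cynray-micro-tool        # matched by name; fields not listed here are left as-is
          env:
            - name: JVM                  # existing env var, matched by name
              value: '-Xms128m -Xmx256m' # example of a small configuration change
```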
