
kubectl completion, config current-context and version commands are too slow #1546

Open
yorik opened this issue Jan 18, 2024 · 20 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. sig/cli Categorizes an issue or PR as relevant to SIG CLI.

Comments

@yorik

yorik commented Jan 18, 2024

What happened?

The commands kubectl completion zsh, kubectl config current-context, kubectl version, and many others are too slow because they try to connect to the server.
This is very annoying behaviour that slows down opening a new terminal (~0.5s with VPN, and ~5s without VPN, when the connection isn't possible).

I don't see any reason for these commands to connect to any server; they should just produce their output locally.
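As an illustration of why this could be local: the current context is just a plain top-level field in the kubeconfig file, so in principle it can be read with no network access at all. A minimal sketch, using a hypothetical sample config rather than the real kubectl code path:

```shell
# Write a hypothetical sample kubeconfig (not a real cluster config).
cat > /tmp/sample-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
current-context: my-gke-cluster
clusters: []
users: []
EOF

# The current context is a single top-level YAML field; reading it needs
# nothing but the local file:
awk '/^current-context:/ {print $2}' /tmp/sample-kubeconfig
# → my-gke-cluster
```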

What did you expect to happen?

All the commands should run in ~0.05s, as they do when no connection is attempted:

$ time kubectl completion zsh > /dev/null                                                                                                                                                                                                          
kubectl completion zsh > /dev/null  0.05s user 0.01s system 126% cpu 0.044 total

How can we reproduce it (as minimally and precisely as possible)?

Without VPN:

$ time kubectl version                                                                                                                                                                                                                             
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.9-dispatcher", GitCommit:"8b508a33aafcd3ba51641b6b2ef203adbdd9de1e", GitTreeState:"clean", BuildDate:"2023-12-21T23:22:51Z", GoVersion:"go1.20.12", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
Unable to connect to the server: dial tcp 10.xxx.xxx.xxx:443: i/o timeout
kubectl version  0.05s user 0.03s system 0% cpu 35.058 total

$ time kubectl completion zsh > /dev/null                                                                                                                                                                                                          
kubectl completion zsh > /dev/null  0.05s user 0.05s system 1% cpu 5.075 total

$ time kubectl config current-context                                                                                                                                                                                                              
XXXXXXXXX
kubectl config current-context  0.07s user 0.03s system 1% cpu 5.078 total

With VPN connected (still annoyingly slow):

$ time kubectl version                                                                                                                                                                                                                             
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.9-dispatcher", GitCommit:"8b508a33aafcd3ba51641b6b2ef203adbdd9de1e", GitTreeState:"clean", BuildDate:"2023-12-21T23:22:51Z", GoVersion:"go1.20.12", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.8-gke.1067000", GitCommit:"3936242da351c64ea89912f00faa0b28fb7eab76", GitTreeState:"clean", BuildDate:"2023-12-06T20:41:34Z", GoVersion:"go1.20.11 X:boringcrypto", Compiler:"gc", Platform:"linux/amd64"}
kubectl version  0.07s user 0.03s system 9% cpu 1.087 total

$ time kubectl completion zsh > /dev/null                                                                                                                                                                                                          
kubectl completion zsh > /dev/null  0.04s user 0.03s system 16% cpu 0.414 total

$ time kubectl config current-context                                                                                                                                                                                                              
XXXXXXXXX
kubectl config current-context  0.06s user 0.03s system 20% cpu 0.444 total

Anything else we need to know?

There was a previous issue, kubernetes/kubernetes#82883, which was closed without resolution. Also, please do not mark this as a feature request: it's a bug, because kubectl can and should run these commands in ~0.05s, but they take 0.5s or even 5s.

Kubernetes version

$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.9-dispatcher", GitCommit:"8b508a33aafcd3ba51641b6b2ef203adbdd9de1e", GitTreeState:"clean", BuildDate:"2023-12-21T23:22:51Z", GoVersion:"go1.20.12", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.8-gke.1067000", GitCommit:"3936242da351c64ea89912f00faa0b28fb7eab76", GitTreeState:"clean", BuildDate:"2023-12-06T20:41:34Z", GoVersion:"go1.20.11 X:boringcrypto", Compiler:"gc", Platform:"linux/amd64"}

Cloud provider

GCP

OS version

# On Linux:
$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux trixie/sid"
NAME="Debian GNU/Linux"
VERSION_CODENAME=trixie
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

$ uname -a
Linux xxxxx 6.6.9-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.6.9-1 (2024-01-01) x86_64 GNU/Linux

Install tools

$ apt policy kubectl                                                                                                                                             
kubectl:
  Installed: 1:460.0.0-0
  Candidate: 1:460.0.0-0
  Version table:
 *** 1:460.0.0-0 500
        500 https://packages.cloud.google.com/apt cloud-sdk/main amd64 Packages
        100 /var/lib/dpkg/status

Container runtime (CRI) and version (if applicable)

Related plugins (CNI, CSI, ...) and versions (if applicable)

@yorik yorik added the kind/bug Categorizes issue or PR as related to a bug. label Jan 18, 2024
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jan 18, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Jan 18, 2024
@yorik
Author

yorik commented Jan 18, 2024

/sig cli

@k8s-ci-robot k8s-ci-robot added sig/cli Categorizes an issue or PR as relevant to SIG CLI. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jan 18, 2024
@HirazawaUi
Contributor

/transfer kubectl

@k8s-ci-robot k8s-ci-robot transferred this issue from kubernetes/kubernetes Jan 19, 2024
@ardaguclu
Member

ardaguclu commented Jan 19, 2024

I'm not sure kubectl version has any completion functionality. For example, the kubectl annotate command defines its completion here: https://github.com/kubernetes/kubernetes/blob/eb1ae05cf040346bdb197490ef74ed929fdf60b7/staging/src/k8s.io/kubectl/pkg/cmd/annotate/annotate.go#L158. On the other hand, there isn't any for https://github.com/kubernetes/kubernetes/blob/eb1ae05cf040346bdb197490ef74ed929fdf60b7/staging/src/k8s.io/kubectl/pkg/cmd/version/version.go#L75-L91.

Have you run these commands with -v=9 to see why kubectl is communicating with the API server?

@yorik
Author

yorik commented Jan 19, 2024

Unfortunately, only kubectl version respects -v=9. Here is the output of all the commands:

$ time kubectl version -v=9                                                                                                                                                                                                                        
I0119 14:18:55.658263 3484916 loader.go:373] Config loaded from file:  /home/yorik/.kube/config
I0119 14:18:55.658919 3484916 round_trippers.go:466] curl -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.27.9 (linux/amd64) kubernetes/8b508a3" 'https://10.xxxxxxxx/version?timeout=32s'
I0119 14:19:25.660146 3484916 round_trippers.go:508] HTTP Trace: Dial to tcp:10.xxxxxxxx:443 failed: dial tcp 10.xxxxxxxx:443: i/o timeout
I0119 14:19:25.660682 3484916 round_trippers.go:553] GET https://10.xxxxxxxx/version?timeout=32s  in 30001 milliseconds
I0119 14:19:25.660736 3484916 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 30001 ms TLSHandshake 0 ms Duration 30001 ms
I0119 14:19:25.660766 3484916 round_trippers.go:577] Response Headers:
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.9-dispatcher", GitCommit:"8b508a33aafcd3ba51641b6b2ef203adbdd9de1e", GitTreeState:"clean", BuildDate:"2023-12-21T23:22:51Z", GoVersion:"go1.20.12", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
I0119 14:19:25.661042 3484916 helpers.go:264] Connection error: Get https://10.xxxxxxxx/version?timeout=32s: dial tcp 10.xxxxxxxx:443: i/o timeout
Unable to connect to the server: dial tcp 10.xxxxxxxx:443: i/o timeout
kubectl version -v=9  0.49s user 0.09s system 1% cpu 35.075 total


$ time kubectl completion zsh -v=9
[removed completion code]
kubectl completion zsh -v=9  0.05s user 0.03s system 1% cpu 5.068 total

$ time kubectl config current-context  -v=9                                                                                                                                                                                                        
I0119 14:20:30.727520 3486678 loader.go:373] Config loaded from file:  /home/yorik/.kube/config
XXXXXXXXXXXXXXX
kubectl config current-context -v=9  0.05s user 0.02s system 1% cpu 5.053 total

@ardaguclu
Member

You can use the --client flag with kubectl version to skip retrieving the server's version.

@ardaguclu
Member

I don't think kubectl completion has any relation to sending requests to the API server.

@ardaguclu
Member

I think there is some confusion. kubernetes/kubernetes#82883 is about the completion functionality working slowly: completion relies on retrieving data from the server, so if the connection is slow, completion does not perform well.

On the other hand, the commands in this issue are not related to completion;

  • for kubectl version, --client is suggested
  • kubectl completion just generates a completion script and has no relation to the server
  • for kubectl config current-context, I haven't checked whether it sends a request to the server, but even if it requires some data from the server, it would be expected to run slowly if the connection is slow.

I don't see any bug or issue here, and I'd prefer closing this as not a bug. Thanks for spending time and effort on this.

@yorik
Author

yorik commented Jan 19, 2024

--client doesn't help, it's still very slow:

$ time kubectl version --client -v=9
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.9-dispatcher", GitCommit:"8b508a33aafcd3ba51641b6b2ef203adbdd9de1e", GitTreeState:"clean", BuildDate:"2023-12-21T23:22:51Z", GoVersion:"go1.20.12", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
kubectl version --client -v=9  0.11s user 0.02s system 2% cpu 5.109 total

kubectl completion does try to connect to the current context's API server:

$ strace -e trace=open,openat,close,read,write,connect,accept -ff kubectl completion zsh
openat(AT_FDCWD, "/sys/kernel/mm/transparent_hugepage/hpage_pmd_size", O_RDONLY) = 3
read(3, "2097152\n", 20)                = 8
close(3)                                = 0
strace: Process 3519957 attached
strace: Process 3519958 attached
strace: Process 3519959 attached
[pid 3519956] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
strace: Process 3519960 attached
[pid 3519956] openat(AT_FDCWD, "/usr/lib/google-cloud-sdk/bin/kubectl", O_RDONLY|O_CLOEXEC) = 3
[pid 3519956] close(3)                  = 0
strace: Process 3519961 attached
strace: Process 3519962 attached
[pid 3519956] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519962] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519956] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519956] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519956] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
strace: Process 3519963 attached
strace: Process 3519964 attached
[pid 3519956] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519956] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519956] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
strace: Process 3519965 attached
[pid 3519961] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
strace: Process 3519966 attached
[pid 3519960] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519964] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519963] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519959] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519956] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519961] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519956] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519966] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519956] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519956] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519956] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519956] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519964] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
strace: Process 3519967 attached
[pid 3519956] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519961] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519956] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519956] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519956] openat(AT_FDCWD, "/home/yorik/.kube/config", O_RDONLY|O_CLOEXEC) = 3
[pid 3519956] read(3, "apiVersion: v1\nclusters:\n- clust"..., 36080) = 36079
[pid 3519956] read(3, "", 1)            = 0
[pid 3519956] close(3)                  = 0
[pid 3519956] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519956] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519966] write(7, "\0", 1strace: Process 3519968 attached
)         = 1
[pid 3519963] read(6, "\0", 16)         = 1
[pid 3519966] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519963] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519956] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519960] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519966] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519966] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
[pid 3519966] --- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=3519956, si_uid=1000} ---
strace: Process 3519969 attached
[pid 3519966] close(10)                 = 0
[pid 3519966] read(9, "", 8)            = 0
[pid 3519966] close(9)                  = 0
[pid 3519966] close(8)                  = 0
[pid 3519956] read(3, 0xc0005ce400, 512) = -1 EAGAIN (Resource temporarily unavailable)
[pid 3519969] openat(AT_FDCWD, "/sys/kernel/mm/transparent_hugepage/hpage_pmd_size", O_RDONLY) = 3
[pid 3519969] read(3, "2097152\n", 20)  = 8
[pid 3519969] close(3)                  = 0
strace: Process 3519970 attached
strace: Process 3519971 attached
strace: Process 3519972 attached
strace: Process 3519973 attached
[pid 3519969] openat(AT_FDCWD, "/home/yorik/.kube/gke_gcloud_auth_plugin_cache", O_RDONLY|O_CLOEXEC) = 3
[pid 3519969] read(3, "{\n    \"current_context\": \"XXXXXX"..., 512) = 360
[pid 3519969] read(3, "", 152)          = 0
[pid 3519969] close(3)                  = 0
[pid 3519969] openat(AT_FDCWD, "/home/yorik/.kube/config", O_RDONLY|O_CLOEXEC) = 3
[pid 3519969] read(3, "apiVersion: v1\nclusters:\n- clust"..., 36080) = 36079
[pid 3519969] read(3, "", 1)            = 0
[pid 3519969] close(3)                  = 0
[pid 3519969] write(1, "{\n    \"kind\": \"ExecCredential\",\n"..., 464) = 464
[pid 3519961] read(3, "{\n    \"kind\": \"ExecCredential\",\n"..., 512) = 464
[pid 3519972] +++ exited with 0 +++
[pid 3519971] +++ exited with 0 +++
[pid 3519970] +++ exited with 0 +++
[pid 3519961] read(3, 0xc000bdc1d0, 560) = -1 EAGAIN (Resource temporarily unavailable)
[pid 3519973] +++ exited with 0 +++
[pid 3519969] +++ exited with 0 +++
[pid 3519966] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=3519969, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
[pid 3519961] read(3, "", 560)          = 0
[pid 3519961] close(3)                  = 0
[pid 3519961] connect(3, {sa_family=AF_INET, sin_port=htons(443), sin_addr=inet_addr("10.xxxxxxxx")}, 16) = -1 EINPROGRESS (Operation now in progress)
[pid 3519956] close(3)                  = 0
[pid 3519966] write(7, "\0", 1)         = 1
[pid 3519956] read(6, "\0", 16)         = 1
[pid 3519966] write(1, "#compdef kubectl\ncompdef _kubect"..., 42#compdef kubectl
compdef _kubectl kubectl
) = 42

@OmriSteiner

OmriSteiner commented Jan 19, 2024

@yorik
I was experiencing this exact issue with the kubectl version provided by the gcloud CLI, and it was something relatively new.
In fact, with that version, when the current context was an AWS EKS cluster, even running kubectl --help would try to connect using AWS SSO.

I compiled kubectl from source, on master, and the issue did not reproduce.
I think this is something that has already been solved.

Can you try compiling from source or downgrading?
You can also try running something like:

KUBECONFIG=/dev/null kubectl version

P.S. I agree that this has nothing to do with kubernetes/kubernetes#82883
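For what it's worth, the KUBECONFIG=/dev/null trick is a plain per-command environment override; the mechanism itself can be sketched without kubectl installed (the echo here stands in for any program reading the variable):

```shell
# Per-command environment override: only the child process sees KUBECONFIG
# as /dev/null, so a kubectl invoked this way would load an empty config
# and never trigger a credentials exec plugin.
KUBECONFIG=/dev/null sh -c 'echo "child sees KUBECONFIG=$KUBECONFIG"'
# → child sees KUBECONFIG=/dev/null
```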

@yorik
Author

yorik commented Jan 19, 2024

To be honest, I don't care about the speed of kubectl version, but there is no reason for kubectl completion and kubectl config current-context to make a server request. I can't say my connection to the server is slow (ping ~100ms), but it's still annoying.
Also, even kubectl foo tries to connect to the API server before returning a usage error, which should never happen.

I've also found this in strace:

[pid 3529894] newfstatat(AT_FDCWD, "/usr/sbin/gke-gcloud-auth-plugin", 0xc0000aa928, 0) = -1 ENOENT (No such file or directory)
[pid 3529894] newfstatat(AT_FDCWD, "/usr/local/bin/gke-gcloud-auth-plugin", 0xc0000aaac8, 0) = -1 ENOENT (No such file or directory)
[pid 3529894] newfstatat(AT_FDCWD, "/usr/bin/gke-gcloud-auth-plugin", {st_mode=S_IFREG|0755, st_size=11243480, ...}, 0) = 0
[pid 3529894] faccessat2(AT_FDCWD, "/usr/bin/gke-gcloud-auth-plugin", X_OK, AT_EACCESS) = 0

So maybe it's /usr/bin/gke-gcloud-auth-plugin's fault.

@ardaguclu
Member

I saw gke_gcloud_auth_plugin_cache in your logs. You may be using a credentials exec plugin in your kubeconfig, and that plugin might be responsible for sending a request to the server.

@ardaguclu
Member

So maybe it's /usr/bin/gke-gcloud-auth-plugin's fault.

I wouldn't consider it a fault, but perhaps this plugin sends requests for authentication, etc.

@yorik
Author

yorik commented Jan 19, 2024

Running without the config helps:

$ time KUBECONFIG=/dev/null kubectl version                                                                                                                                                                                                        
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.9-dispatcher", GitCommit:"8b508a33aafcd3ba51641b6b2ef203adbdd9de1e", GitTreeState:"clean", BuildDate:"2023-12-21T23:22:51Z", GoVersion:"go1.20.12", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
The connection to the server localhost:8080 was refused - did you specify the right host or port?
KUBECONFIG=/dev/null kubectl version  0.03s user 0.02s system 128% cpu 0.042 total

I have something like this in my config:

users:
- name: gke_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  user: 
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args: null
      command: gke-gcloud-auth-plugin
      env: null
      installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
        https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
      interactiveMode: IfAvailable
      provideClusterInfo: true
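An entry like this can be spotted with a quick grep; a sketch below uses a hypothetical sample file rather than the real ~/.kube/config:

```shell
# Hypothetical sample kubeconfig fragment with an exec credential plugin.
cat > /tmp/kubeconfig-exec-check <<'EOF'
users:
- name: gke_example
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
EOF

# Does this kubeconfig rely on an exec credential plugin?
if grep -q 'command: gke-gcloud-auth-plugin' /tmp/kubeconfig-exec-check; then
  echo "exec credential plugin configured"
fi
# → exec credential plugin configured
```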

But my main point is that for commands which can be processed locally, kubectl should not open any connections.
I've run gke-gcloud-auth-plugin separately; it doesn't connect to the API server itself, but it makes kubectl connect.

I can "fix" kubectl completion zsh with KUBECONFIG=/dev/null, but kubectl config current-context (which is called from kubectx) is still annoyingly slow, for no reason.

Just in case:

$ gke-gcloud-auth-plugin --version
Kubernetes v1.28.2-alpha+58ec6ae34b7dcd9699b37986ccb12b3bbac88f00

@yorik
Author

yorik commented Jan 19, 2024

I've tried downgrading to kubectl package version 1:459.0.0-0, but it ships the same binary version:

Client Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.9-dispatcher", GitCommit:"8b508a33aafcd3ba51641b6b2ef203adbdd9de1e", GitTreeState:"clean", BuildDate:"2023-12-21T23:22:51Z", GoVersion:"go1.20.12", Compiler:"gc", Platform:"linux/amd64"}

I'll try to build kubectl from the repo and test it during weekend.

@ardaguclu
Member

If kubectl completion, kubectl version --client, or any other command that is designed not to send requests to the API server (e.g. one with a --local flag) does send a request when the kubeconfig has a credentials exec plugin, I think that is a bug.

@yorik
Author

yorik commented Jan 19, 2024

I've tested with kubectl built from https://github.com/kubernetes/kubernetes at 2d4100335e4c4ccc28f96fac78153f378212da4c and I wasn't able to reproduce the problem.

Also, I wasn't able to find git commit 8b508a33aafcd3ba51641b6b2ef203adbdd9de1e (the one reported by the kubectl installed from https://packages.cloud.google.com/apt) in the repo, so maybe some Google patches were added.

@ByteFlyCoding

ByteFlyCoding commented Jan 30, 2024

I found that it's normal when I use kubectl v1.23.5, but it becomes very slow when I switch to kubectl v1.29.1.
#1552 (comment)

xxx@xxx-MBP ~ % kubectl version
Client Version: v1.29.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.1
xxx@xxx-MBP ~ % time kubectl version
Client Version: v1.29.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.1
kubectl version  0.04s user 0.02s system 1% cpu 5.083 total
xxx@xxx-MBP ~ % time kubectl get pod
No resources found in default namespace.
kubectl get pod  0.06s user 0.02s system 1% cpu 5.092 total
xxx@xxx-MBP ~ % time kubectl get node
NAME             STATUS   ROLES           AGE    VERSION
docker-desktop   Ready    control-plane   167m   v1.29.1
kubectl get node  0.06s user 0.02s system 1% cpu 5.087 total
xxx@xxx-MBP ~ %
xxx@xxx-MBP ~ % cd go/bin
xxx@xxx-MBP bin % time ./kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"clean", BuildDate:"2022-03-16T15:58:47Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{Major:"1", Minor:"29", GitVersion:"v1.29.1", GitCommit:"bc401b91f2782410b3fb3f9acf43a995c4de90d2", GitTreeState:"clean", BuildDate:"2024-01-17T15:41:12Z", GoVersion:"go1.21.6", Compiler:"gc", Platform:"linux/arm64"}
WARNING: version difference between client (1.23) and server (1.29) exceeds the supported minor version skew of +/-1
./kubectl version  0.04s user 0.02s system 73% cpu 0.081 total
xxx@xxx-MBP bin % time ./kubectl get pod
No resources found in default namespace.
./kubectl get pod  0.07s user 0.06s system 34% cpu 0.353 total
xxx@xxx-MBP bin % time ./kubectl get node
NAME             STATUS   ROLES           AGE    VERSION
docker-desktop   Ready    control-plane   170m   v1.29.1
./kubectl get node  0.05s user 0.02s system 84% cpu 0.078 total
xxx@xxx-MBP bin % 

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 30, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 30, 2024