
Not consistent in getting pod information. Sometimes even namespaces are not listed #1317

Open
Ananth-vr opened this issue Mar 25, 2024 · 3 comments
Labels: kind/bug (report bug issue)


@Ananth-vr

What happened?

Kepler is not exporting metrics for all namespaces; sometimes an entire namespace and its workloads are missing. Below is an example with the "default" namespace.

for "kube-system" namespace
kubectl exec -ti -n kepler kepler-j524k -- curl 127.0.0.1:9102/metrics|grep -i container_namespace=|grep -i kube-system|wc -l
300

for "default" namespace
kubectl exec -ti -n kepler kepler-j524k -- curl 127.0.0.1:9102/metrics|grep -i container_namespace=|grep -i default|wc -l
0

kubectl get po -n default| wc -l
6
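For a quick per-namespace overview, a one-liner sketch against the same exporter endpoint (same kepler pod as above; the grep pattern assumes the container_namespace label format shown in the metric samples later in this issue):

kubectl exec -ti -n kepler kepler-j524k -- curl -s 127.0.0.1:9102/metrics | grep -o 'container_namespace="[^"]*"' | sort | uniq -c

A namespace that never appears in this output is missing from the exporter entirely, not just reporting low values.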

kubectl logs -n kepler kepler-j524k|grep -i version
I0322 14:46:50.259132 1 exporter.go:155] Kepler running on version: 1.20.10
I0322 14:46:50.259200 1 config.go:280] kernel version: 6.1

Deployed using Helm, based on release-0.7.8.

If I delete and recreate the entire stack (kepler/Prometheus/Grafana), the namespace starts appearing, but some of the pods are missing again.

Kernel: 6.1
Server: Baremetal

After re-creating kepler and Grafana, most pods are not showing up at the kepler service endpoint, even though they do show up on the kepler pod running on the worker node that hosts them.

kubectl get po -n default
NAME                               READY   STATUS    RESTARTS        AGE
nginx-deployment-8d545c96d-dcqcg   1/1     Running   1 (4h42m ago)   14h
prox-deployment-74cffd9587-dxt7v   2/2     Running   0               14h
prox-pod-1                         1/1     Running   0               83m
prox-pod-2                         1/1     Running   0               83m
prox-pod-3                         1/1     Running   0               83m
prox-pod-4                         1/1     Running   0               83m
prox-pod-5                         2/2     Running   0               83m

The kepler pod kepler-49zz5 and nginx-deployment-8d545c96d-dcqcg are on the same worker node.

kubectl exec -ti -n kepler kepler-49zz5 -- curl 127.0.0.1:9102/metrics | grep "nginx-deployment-8d545c96d-dcqcg" | wc -l
25

But it does not appear at the service endpoint:
kubectl get svc -n kepler
NAME     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
kepler   ClusterIP   10.19.20.156   <none>        9102/TCP   15m

curl 10.19.20.156:9102/metrics | grep "nginx-deployment-8d545c96d-dcqcg" | wc -l
0
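One possible explanation to rule out (an assumption on my side, not confirmed in this thread): kepler runs as a DaemonSet, and the ClusterIP service load-balances each request to a single exporter pod, so a one-off curl can land on a node that does not host nginx-deployment-8d545c96d-dcqcg. A minimal sketch to query every endpoint behind the service individually:

# list the endpoint IPs behind the kepler service
kubectl get endpoints kepler -n kepler -o jsonpath='{.subsets[*].addresses[*].ip}'
# then query each exporter directly, for example:
curl -s <endpoint-ip>:9102/metrics | grep -c "nginx-deployment-8d545c96d-dcqcg"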

Is it possible that idle pods and their namespaces are removed from the metrics exporter?
Is there an option to list all namespaces and their pods?

What did you expect to happen?

The dashboard should show all pods and their namespaces, not just some of them.

How can we reproduce it (as minimally and precisely as possible)?

helm install kepler kepler/kepler --namespace kepler --create-namespace
Then verify that all namespaces and pods are visible in Grafana.

Anything else we need to know?

No response

Kepler image tag

quay.io/sustainable_computing_io/kepler:release-0.7.8

Kubernetes version

$ kubectl version
v1.23.4

Cloud provider or bare metal

Baremetal

OS version

OS:
Ubuntu 22.04.2
Kernel: 6.1.8-060108-generic

Install tools

Kepler deployment config

For on kubernetes:

$ KEPLER_NAMESPACE=kepler

# provide kepler configmap
$ kubectl get configmap kepler-cfm -n ${KEPLER_NAMESPACE} 
# paste output here

# provide kepler deployment description
$ kubectl describe deployment kepler-exporter -n ${KEPLER_NAMESPACE} 

For standalone:

put your Kepler command argument here

Container runtime (CRI) and version (if applicable)

Related plugins (CNI, CSI, ...) and versions (if applicable)

@rootfs
Contributor

rootfs commented Apr 2, 2024

Kepler only reports pods that have activity. If a pod is not actively running and shows no activity during a given collection window, no metrics are reported for that window.

For the best picture, please check whether Prometheus has tracked all the metrics.
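As a concrete way to check this in Prometheus, one option (a sketch; the metric name is taken from the exporter output quoted later in this issue, adjust to whichever Kepler metric you scrape):

# namespaces for which Prometheus has scraped kepler container metrics
count by (container_namespace) (kepler_container_bpf_cpu_time_ms_total)

# per-pod CPU-time rate in the "default" namespace over the last 5 minutes
sum by (pod_name) (rate(kepler_container_bpf_cpu_time_ms_total{container_namespace="default"}[5m]))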

@Ananth-vr
Author

Thank you @rootfs for your valuable comment.
Let me share some more information about the pod.

I have a pod that's running "dd if=/dev/zero of=/dev/null" and consumes a full CPU core.

From the worker node hosting the pod:

ps aux|grep "dd if=/dev/zero"
root 22622 91.9 0.0 2532 912 ? Rs 06:28 35:58 dd if=/dev/zero of=/dev/null

However, in Grafana I don't see the "default" namespace, which hosts the pod below.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-dd
spec:
  containers:
  - name: nginx
    image: nginx
    command: ["dd", "if=/dev/zero", "of=/dev/null"]
    resources:
      requests:
        cpu: "1"
        memory: "1Gi"
      limits:
        cpu: "1"
        memory: "1Gi"

Attached is the output of curl 10.105.32.108:9102/metrics | grep -i 'pod_name="nginx-dd"', which unfortunately shows 0 for all the metrics:
kepler-metrics-nginx-dd.txt

For example, from the attached log:

kepler_container_bpf_cpu_time_ms_total{container_id="3665b0d57e296481235216eb772276d9a57c1b24b29e54691d637366f4a03942",container_name="nginx",container_namespace="default",pod_name="nginx-dd",source="bpf"} 0
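One sanity check worth doing here (a hedged sketch; the jsonpath field is standard Kubernetes, and the comparison is only against the container_id label above): confirm that the container_id Kepler reports matches the container that is actually running, to rule out stale attribution after restarts.

kubectl get pod nginx-dd -n default -o jsonpath='{.status.containerStatuses[0].containerID}'
# expect something like containerd://3665b0d57e29...; the hash should match the container_id label in the metric above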

I can share the Grafana dashboard as well, but since the API is not able to retrieve the metrics, I suppose it wouldn't be of much help right now.

@jichenjc
Collaborator

It seems all the data is 0 in the pod metrics file you pasted. We used to have this kind of issue from time to time, sometimes due to incorrect configuration, but I'm not sure that applies here.

Maybe posting the kepler logs (with --v=5) after restarting the pod would provide some insight.
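If it helps, a minimal sketch of that (the namespace and DaemonSet/label names are assumptions based on the helm install command earlier in this issue, and the exact verbosity argument depends on how the chart wires it, so please verify against your DaemonSet spec):

# set the exporter verbosity to 5 in the DaemonSet args, then restart and collect logs
kubectl -n kepler edit daemonset kepler
kubectl -n kepler rollout restart daemonset kepler
kubectl -n kepler logs -l app.kubernetes.io/name=kepler --tail=-1 > kepler-v5.log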
