K8s Service does not seem to be accessible on UDP port when Instana is configured with Statsd enabled #115

Open
talitz opened this issue Oct 3, 2023 · 2 comments

talitz commented Oct 3, 2023

Hi guys,

Issue Description: I installed Instana's agent in my k8s cluster with the operator (EKS, v1.23.17-eks-2d98532).
The InstanaAgent custom resource is as follows:

apiVersion: instana.io/v1
kind: InstanaAgent
metadata:
  name: instana-agent
  namespace: instana-agent
spec:
  zone:
    name: solutions
  cluster:
    name: solutions
  agent:
    key: <MY_KEY>
    endpointHost: ingress-green-saas.instana.io
    endpointPort: "443"
    env: {}
    configuration_yaml: |
      com.instana.plugin.statsd:
        enabled: true
        ports:
          udp: 8125
          mgmt: 8126
        bind-ip: "0.0.0.0" # all IPs by default
        flush-interval: 10 # in seconds

With this setup, the operator creates a Service that exposes the instana-agent pods:

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: instana-agent
    meta.helm.sh/release-namespace: instana-agent
  creationTimestamp: "2023-10-02T12:05:59Z"
  labels:
    app.kubernetes.io/instance: instana-agent
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: instana-agent
    app.kubernetes.io/version: 1.2.63
    helm.sh/chart: instana-agent-1.2.63
  name: instana-agent
  namespace: instana-agent
  ownerReferences:
  - apiVersion: instana.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: InstanaAgent
    name: instana-agent
    uid: 318e7d05-e800-4f1b-8722-3ef830bb4372
  resourceVersion: "178751591"
  uid: 582f9998-6236-4d1c-b855-1eb8907c379b
spec:
  clusterIP: 172.20.98.212
  clusterIPs:
  - 172.20.98.212
  internalTrafficPolicy: Local
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: agent-apis
    port: 42699
    protocol: TCP
    targetPort: 42699
  - name: opentelemetry
    port: 55680
    protocol: TCP
    targetPort: 55680
  - name: opentelemetry-iana
    port: 4317
    protocol: TCP
    targetPort: 4317
  - name: opentelemetry-http
    port: 4318
    protocol: TCP
    targetPort: 4318
  selector:
    app.kubernetes.io/instance: instana-agent
    app.kubernetes.io/name: instana-agent
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

But for some reason I'm not able to send any metrics to Instana via this Service:

echo "TestForLightrun:3|c" | nc -u -w1 instana-agent.instana-agent.svc.cluster.local 8125
After looking at the service it doesn't even seem to expose the pods under the UDP port 8125. Is it a bug? Did I miss some configuration?
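
For comparison, this is a rough sketch of the port entry I would have expected the operator to add under spec.ports when the statsd plugin is enabled (the name agent-statsd is my own guess; the port numbers come from the plugin configuration above):

  # Hypothetical entry -- not something the operator currently generates
  - name: agent-statsd
    port: 8125
    protocol: UDP        # StatsD traffic is UDP, unlike the existing TCP entries
    targetPort: 8125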

Workaround: communicating directly with the IP allocated to the pod works; metrics are then sent to Instana through StatsD:

echo "YourMetricValue:3|c" | nc -u -w1 10.50.21.132 8125

[Screenshot 2023-10-03 at 13:48:06]

Another question: what if I need to send UDP requests from a cluster other than the one where the instana-agent is installed? Is there a way to have the operator deploy an ingress object, or do I have to do it separately on my own?
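
(For context, Ingress resources only cover HTTP/HTTPS, so for UDP I'd presumably need a Service of type LoadBalancer instead. A rough, untested sketch of what I have in mind, assuming the cloud load balancer supports UDP, which on EKS means an NLB:)

apiVersion: v1
kind: Service
metadata:
  name: instana-agent-statsd-external   # hypothetical name, my own choice
  namespace: instana-agent
  annotations:
    # Standard AWS annotation to request a Network Load Balancer
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/instance: instana-agent
    app.kubernetes.io/name: instana-agent
  ports:
  - name: statsd-udp
    port: 8125
    protocol: UDP
    targetPort: 8125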

@zynpsnltrkk

I'm having the same issue in my ROSA cluster (OpenShift 4.12.45) while sending custom metrics from my application to the Instana agent operator (2.0.17):

# nc -u -w1 instana-agent.instana-agent.svc.cluster.local 8125 -vvv
Connection to instana-agent.instana-agent.svc.cluster.local 8125 port [udp/*] succeeded!

It seems the connection is established, but there are no logs in the Instana pods and no custom metrics appear in the UI.

@zach-robinson (Contributor)

Will look into this. For now, at least, I would recommend using the Downward API in Kubernetes to acquire the agent's IP within your workload pods.
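
A minimal sketch of that suggestion, assuming the agent DaemonSet runs with host networking so the node IP doubles as the agent IP (the variable name INSTANA_AGENT_HOST is just a common convention):

apiVersion: v1
kind: Pod
metadata:
  name: my-app                      # placeholder workload
spec:
  containers:
  - name: my-app
    image: my-app:latest            # placeholder image
    env:
    # Downward API: inject the IP of the node this pod is scheduled on
    - name: INSTANA_AGENT_HOST
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP

The application would then send its StatsD packets to $INSTANA_AGENT_HOST:8125 instead of the Service DNS name.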
