
Vector 0.22.0 breaks with kubernetes_source #12989

Closed
jaysonsantos opened this issue Jun 6, 2022 · 8 comments · Fixed by #13038
Labels
source: kubernetes_logs Anything `kubernetes_logs` source related type: bug A code related bug.

@jaysonsantos
Contributor

A note for the community

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Problem

Vector 0.22.0 breaks when using the kubernetes_logs source, failing with:

WARN vector::kubernetes::reflector: Watcher Stream received an error. Retrying. error=InitialListFailed(HyperError(hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidCertificateEncoding } })))

kubernetes version: v1.23.6+k3s1

Configuration

The following kustomization.yaml is being used:

namespace: vector
helmCharts:
  - name: vector
    repo: https://helm.vector.dev
    releaseName: ds
    namespace: vector
    version: 0.12.0
    valuesInline:
      role: Agent
      args:
        - --config-dir
        - "/etc/vector/"
        - -vvv

images:
  # Override the Vector image to avoid use of the sliding tag.
  - name: timberio/vector
    newName: timberio/vector
    newTag: 0.22.0-debian

Version

vector 0.22.0 (x86_64-unknown-linux-gnu 5e937e3 2022-06-01)

Debug Output

2022-06-06T08:53:06.015122Z  INFO vector::app: Log level is enabled. level="vector=trace,codec=trace,vrl=trace,file_source=trace,tower_limit=trace,rdkafka=trace,buffers=trace,kube=trace"
2022-06-06T08:53:06.015234Z  INFO vector::app: Loading configs. paths=["/etc/vector"]
2022-06-06T08:53:06.016885Z DEBUG vector::topology::builder: Building new source. component=host_metrics
2022-06-06T08:53:06.017454Z  INFO vector::sources::host_metrics: PROCFS_ROOT is set in envvars. Using custom for procfs. custom="/host/proc"
2022-06-06T08:53:06.017472Z  INFO vector::sources::host_metrics: SYSFS_ROOT is set in envvars. Using custom for sysfs. custom="/host/sys"
2022-06-06T08:53:06.017536Z DEBUG vector::topology::builder: Building new source. component=internal_metrics
2022-06-06T08:53:06.018031Z DEBUG vector::topology::builder: Building new source. component=kubernetes_logs
2022-06-06T08:53:06.018537Z  INFO vector::sources::kubernetes_logs: Obtained Kubernetes Node name to collect logs for (self). self_node_name="proliant-local-1.jayson.com.br"
2022-06-06T08:53:06.018652Z TRACE kube_client::config: no local config found, falling back to local in-cluster config error=failed to read kubeconfig from '"/root/.kube/config"': No such file or directory (os error 2) error.sources=[No such file or directory (os error 2)]
2022-06-06T08:53:06.019034Z  INFO vector::sources::kubernetes_logs: Excluding matching files. exclude_paths=["**/*.gz", "**/*.tmp"]
2022-06-06T08:53:06.019066Z DEBUG vector::topology::builder: Building new sink component=prom_exporter
2022-06-06T08:53:06.019188Z DEBUG vector::topology::builder: Building new sink component=stdout
2022-06-06T08:53:06.019282Z  INFO vector::topology::running: Running healthchecks.
2022-06-06T08:53:06.019298Z DEBUG vector::topology::running: Connecting changed/added component(s).
2022-06-06T08:53:06.019309Z DEBUG vector::topology::running: Configuring outputs for source. component=internal_metrics
2022-06-06T08:53:06.019330Z DEBUG vector::topology::running: Configuring output for component. component=internal_metrics output_id=None
2022-06-06T08:53:06.019349Z DEBUG vector::topology::running: Configuring outputs for source. component=kubernetes_logs
2022-06-06T08:53:06.019362Z DEBUG vector::topology::running: Configuring output for component. component=kubernetes_logs output_id=None
2022-06-06T08:53:06.019371Z DEBUG vector::topology::running: Configuring outputs for source. component=host_metrics
2022-06-06T08:53:06.019379Z DEBUG vector::topology::running: Configuring output for component. component=host_metrics output_id=None
2022-06-06T08:53:06.019397Z DEBUG vector::topology::running: Connecting inputs for sink. component=prom_exporter
2022-06-06T08:53:06.019430Z DEBUG vector::topology::running: Adding component input to fanout. component=prom_exporter fanout_id=host_metrics
2022-06-06T08:53:06.019445Z DEBUG vector::topology::running: Adding component input to fanout. component=prom_exporter fanout_id=internal_metrics
2022-06-06T08:53:06.019467Z DEBUG vector::topology::running: Connecting inputs for sink. component=stdout
2022-06-06T08:53:06.019481Z DEBUG vector::topology::running: Adding component input to fanout. component=stdout fanout_id=kubernetes_logs
2022-06-06T08:53:06.019506Z DEBUG vector::topology::running: Spawning new source. key=internal_metrics
2022-06-06T08:53:06.019551Z DEBUG vector::topology::running: Spawning new source. key=kubernetes_logs
2022-06-06T08:53:06.019582Z DEBUG vector::topology::running: Spawning new source. key=host_metrics
2022-06-06T08:53:06.019609Z TRACE vector::topology::running: Spawning new sink. key=prom_exporter
2022-06-06T08:53:06.019604Z  INFO vector::topology::builder: Healthcheck: Passed.
2022-06-06T08:53:06.019639Z TRACE vector::topology::running: Spawning new sink. key=stdout
2022-06-06T08:53:06.019655Z  INFO vector::topology::builder: Healthcheck: Passed.
2022-06-06T08:53:06.019692Z DEBUG source{component_kind="source" component_id=internal_metrics component_type=internal_metrics component_name=internal_metrics}: vector::topology::builder: Source pump starting.
2022-06-06T08:53:06.019725Z  INFO vector: Vector has started. debug="false" version="0.22.0" arch="x86_64" build_id="5e937e3 2022-06-01"
2022-06-06T08:53:06.019741Z DEBUG source{component_kind="source" component_id=kubernetes_logs component_type=kubernetes_logs component_name=kubernetes_logs}: vector::topology::builder: Source pump starting.
2022-06-06T08:53:06.019760Z  INFO vector::internal_events::api: API server running. address=127.0.0.1:8686 playground=off
2022-06-06T08:53:06.020017Z DEBUG source{component_kind="source" component_id=host_metrics component_type=host_metrics component_name=host_metrics}: vector::topology::builder: Source pump starting.
2022-06-06T08:53:06.020225Z DEBUG source{component_kind="source" component_id=host_metrics component_type=host_metrics component_name=host_metrics}: vector::sources::host_metrics::cgroups: Detected cgroup base directory. base_dir="/host/sys/fs/cgroup/unified"
2022-06-06T08:53:06.020330Z  WARN source{component_kind="source" component_id=host_metrics component_type=host_metrics component_name=host_metrics}: vector::sources::host_metrics::cgroups: CGroups memory controller is not active, there will be no memory metrics.
2022-06-06T08:53:06.020476Z DEBUG HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/pods?&fieldSelector=spec.nodeName%3Dproliant-local-1.jayson.com.br&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client"}: kube_client::client::builder: requesting
2022-06-06T08:53:06.020703Z TRACE vector: Beep.
2022-06-06T08:53:06.020773Z DEBUG HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/namespaces?&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client"}: kube_client::client::builder: requesting
2022-06-06T08:53:06.021841Z DEBUG sink{component_kind="sink" component_id=stdout component_type=console component_name=stdout}: vector::utilization: utilization=0.06145931085216849
2022-06-06T08:53:06.021924Z DEBUG sink{component_kind="sink" component_id=prom_exporter component_type=prometheus_exporter component_name=prom_exporter}: vector::utilization: utilization=0.0631868862304481
2022-06-06T08:53:06.021922Z TRACE source{component_kind="source" component_id=host_metrics component_type=host_metrics component_name=host_metrics}: vector::internal_events::common: Bytes received. byte_size=0 protocol=none
2022-06-06T08:53:06.022033Z TRACE source{component_kind="source" component_id=internal_metrics component_type=internal_metrics component_name=internal_metrics}: vector_common::internal_event::events_received: Events received. count=20 byte_size=12166
2022-06-06T08:53:06.022100Z TRACE source{component_kind="source" component_id=internal_metrics component_type=internal_metrics component_name=internal_metrics}: vector_buffers::topology::channel::limited_queue: Sent item.
2022-06-06T08:53:06.022122Z TRACE source{component_kind="source" component_id=internal_metrics component_type=internal_metrics component_name=internal_metrics}: vector_common::internal_event::events_sent: Events sent. count=20 byte_size=12174 output=_default
2022-06-06T08:53:06.022186Z TRACE source{component_kind="source" component_id=internal_metrics component_type=internal_metrics component_name=internal_metrics}: vector_core::fanout: Processing control message outside of send: ControlMessage::Add(ComponentKey { id: "prom_exporter" })
2022-06-06T08:53:06.022227Z TRACE source{component_kind="source" component_id=internal_metrics component_type=internal_metrics component_name=internal_metrics}: vector_buffers::topology::channel::limited_queue: Sent item.
2022-06-06T08:53:06.022265Z TRACE source{component_kind="source" component_id=internal_metrics component_type=internal_metrics component_name=internal_metrics}: vector_core::fanout: Sent item to fanout.
2022-06-06T08:53:06.022315Z TRACE sink{component_kind="sink" component_id=prom_exporter component_type=prometheus_exporter component_name=prom_exporter}: vector_common::internal_event::events_received: Events received. count=20 byte_size=12174
2022-06-06T08:53:06.022532Z ERROR source{component_kind="source" component_id=host_metrics component_type=host_metrics component_name=host_metrics}: vector::sources::host_metrics: Failed to load cgroups CPU statistics. error=Could not open cgroup data file "/host/sys/fs/cgroup/unified/cpu.stat". internal_log_rate_secs=60
2022-06-06T08:53:06.026219Z  INFO source{component_kind="source" component_id=kubernetes_logs component_type=kubernetes_logs component_name=kubernetes_logs}:file_server: file_source::checkpointer: Loaded checkpoint data.
2022-06-06T08:53:06.194510Z ERROR HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/pods?&fieldSelector=spec.nodeName%3Dproliant-local-1.jayson.com.br&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client" otel.status_code="ERROR"}: kube_client::client::builder: failed with error error trying to connect: invalid peer certificate encoding
2022-06-06T08:53:06.194562Z  WARN vector::kubernetes::reflector: Watcher Stream received an error. Retrying. error=InitialListFailed(HyperError(hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidCertificateEncoding } })))
2022-06-06T08:53:06.194658Z DEBUG HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/pods?&fieldSelector=spec.nodeName%3Dproliant-local-1.jayson.com.br&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client"}: kube_client::client::builder: requesting
2022-06-06T08:53:06.196502Z ERROR HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/namespaces?&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client" otel.status_code="ERROR"}: kube_client::client::builder: failed with error error trying to connect: invalid peer certificate encoding
2022-06-06T08:53:06.196536Z  WARN vector::kubernetes::reflector: Watcher Stream received an error. Retrying. error=InitialListFailed(HyperError(hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidCertificateEncoding } })))
2022-06-06T08:53:06.196631Z DEBUG HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/namespaces?&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client"}: kube_client::client::builder: requesting
2022-06-06T08:53:06.295820Z TRACE source{component_kind="source" component_id=host_metrics component_type=host_metrics component_name=host_metrics}: vector_common::internal_event::events_received: Events received. count=2245 byte_size=1377840
2022-06-06T08:53:06.296059Z TRACE source{component_kind="source" component_id=host_metrics component_type=host_metrics component_name=host_metrics}: vector_buffers::topology::channel::limited_queue: Sent item.
2022-06-06T08:53:06.296466Z TRACE source{component_kind="source" component_id=host_metrics component_type=host_metrics component_name=host_metrics}: vector_buffers::topology::channel::limited_queue: Sent item.
2022-06-06T08:53:06.296536Z TRACE source{component_kind="source" component_id=host_metrics component_type=host_metrics component_name=host_metrics}: vector_buffers::topology::channel::limited_queue: Sent item.
2022-06-06T08:53:06.296556Z TRACE source{component_kind="source" component_id=host_metrics component_type=host_metrics component_name=host_metrics}: vector_common::internal_event::events_sent: Events sent. count=2245 byte_size=1377912 output=_default
2022-06-06T08:53:06.296600Z TRACE source{component_kind="source" component_id=host_metrics component_type=host_metrics component_name=host_metrics}: vector_core::fanout: Processing control message outside of send: ControlMessage::Add(ComponentKey { id: "prom_exporter" })
2022-06-06T08:53:06.296691Z TRACE source{component_kind="source" component_id=host_metrics component_type=host_metrics component_name=host_metrics}: vector_buffers::topology::channel::limited_queue: Sent item.
2022-06-06T08:53:06.296714Z TRACE source{component_kind="source" component_id=host_metrics component_type=host_metrics component_name=host_metrics}: vector_core::fanout: Sent item to fanout.
2022-06-06T08:53:06.296918Z TRACE sink{component_kind="sink" component_id=prom_exporter component_type=prometheus_exporter component_name=prom_exporter}: vector_common::internal_event::events_received: Events received. count=1000 byte_size=566947
2022-06-06T08:53:06.299371Z TRACE source{component_kind="source" component_id=host_metrics component_type=host_metrics component_name=host_metrics}: vector_buffers::topology::channel::limited_queue: Sent item.
2022-06-06T08:53:06.299392Z TRACE source{component_kind="source" component_id=host_metrics component_type=host_metrics component_name=host_metrics}: vector_core::fanout: Sent item to fanout.
2022-06-06T08:53:06.299572Z TRACE sink{component_kind="sink" component_id=prom_exporter component_type=prometheus_exporter component_name=prom_exporter}: vector_common::internal_event::events_received: Events received. count=1000 byte_size=658149
2022-06-06T08:53:06.302531Z TRACE source{component_kind="source" component_id=host_metrics component_type=host_metrics component_name=host_metrics}: vector_buffers::topology::channel::limited_queue: Sent item.
2022-06-06T08:53:06.302554Z TRACE source{component_kind="source" component_id=host_metrics component_type=host_metrics component_name=host_metrics}: vector_core::fanout: Sent item to fanout.
2022-06-06T08:53:06.302613Z TRACE sink{component_kind="sink" component_id=prom_exporter component_type=prometheus_exporter component_name=prom_exporter}: vector_common::internal_event::events_received: Events received. count=245 byte_size=152816
2022-06-06T08:53:06.370611Z ERROR HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/namespaces?&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client" otel.status_code="ERROR"}: kube_client::client::builder: failed with error error trying to connect: invalid peer certificate encoding
2022-06-06T08:53:06.370656Z  WARN vector::kubernetes::reflector: Watcher Stream received an error. Retrying. error=InitialListFailed(HyperError(hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidCertificateEncoding } })))
2022-06-06T08:53:06.370734Z DEBUG HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/namespaces?&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client"}: kube_client::client::builder: requesting
2022-06-06T08:53:06.380026Z ERROR HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/pods?&fieldSelector=spec.nodeName%3Dproliant-local-1.jayson.com.br&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client" otel.status_code="ERROR"}: kube_client::client::builder: failed with error error trying to connect: invalid peer certificate encoding
2022-06-06T08:53:06.380058Z  WARN vector::kubernetes::reflector: Watcher Stream received an error. Retrying. error=InitialListFailed(HyperError(hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidCertificateEncoding } })))
2022-06-06T08:53:06.380108Z DEBUG HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/pods?&fieldSelector=spec.nodeName%3Dproliant-local-1.jayson.com.br&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client"}: kube_client::client::builder: requesting
2022-06-06T08:53:06.531934Z ERROR HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/namespaces?&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client" otel.status_code="ERROR"}: kube_client::client::builder: failed with error error trying to connect: invalid peer certificate encoding
2022-06-06T08:53:06.531982Z  WARN vector::kubernetes::reflector: Watcher Stream received an error. Retrying. error=InitialListFailed(HyperError(hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidCertificateEncoding } })))
2022-06-06T08:53:06.532064Z DEBUG HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/namespaces?&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client"}: kube_client::client::builder: requesting
2022-06-06T08:53:06.545013Z ERROR HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/pods?&fieldSelector=spec.nodeName%3Dproliant-local-1.jayson.com.br&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client" otel.status_code="ERROR"}: kube_client::client::builder: failed with error error trying to connect: invalid peer certificate encoding
2022-06-06T08:53:06.545045Z  WARN vector::kubernetes::reflector: Watcher Stream received an error. Retrying. error=InitialListFailed(HyperError(hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidCertificateEncoding } })))
2022-06-06T08:53:06.545096Z DEBUG HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/pods?&fieldSelector=spec.nodeName%3Dproliant-local-1.jayson.com.br&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client"}: kube_client::client::builder: requesting
2022-06-06T08:53:06.658778Z ERROR HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/namespaces?&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client" otel.status_code="ERROR"}: kube_client::client::builder: failed with error error trying to connect: invalid peer certificate encoding
2022-06-06T08:53:06.658821Z  WARN vector::kubernetes::reflector: Watcher Stream received an error. Retrying. error=InitialListFailed(HyperError(hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidCertificateEncoding } })))
2022-06-06T08:53:06.658891Z DEBUG HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/namespaces?&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client"}: kube_client::client::builder: requesting
2022-06-06T08:53:06.672738Z ERROR HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/pods?&fieldSelector=spec.nodeName%3Dproliant-local-1.jayson.com.br&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client" otel.status_code="ERROR"}: kube_client::client::builder: failed with error error trying to connect: invalid peer certificate encoding
2022-06-06T08:53:06.672769Z  WARN vector::kubernetes::reflector: Watcher Stream received an error. Retrying. error=InitialListFailed(HyperError(hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidCertificateEncoding } })))
2022-06-06T08:53:06.672819Z DEBUG HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/pods?&fieldSelector=spec.nodeName%3Dproliant-local-1.jayson.com.br&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client"}: kube_client::client::builder: requesting
2022-06-06T08:53:06.777438Z ERROR HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/namespaces?&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client" otel.status_code="ERROR"}: kube_client::client::builder: failed with error error trying to connect: invalid peer certificate encoding
2022-06-06T08:53:06.777477Z  WARN vector::kubernetes::reflector: Watcher Stream received an error. Retrying. error=InitialListFailed(HyperError(hyper::Error(Connect, Custom { kind: Other, error: Custom { kind: InvalidData, error: InvalidCertificateEncoding } })))
2022-06-06T08:53:06.777574Z DEBUG HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/namespaces?&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client"}: kube_client::client::builder: requesting
2022-06-06T08:53:06.790718Z ERROR HTTP{http.method=GET http.url=https://kubernetes.default.svc/api/v1/pods?&fieldSelector=spec.nodeName%3Dproliant-local-1.jayson.com.br&labelSelector=vector.dev%2Fexclude%21%3Dtrue otel.name="list" otel.kind="client" otel.status_code="ERROR"}: kube_client::client::builder: failed with error error trying to connect: invalid peer certificate encoding

Example Data

No response

Additional Context

vanilla helm rendering as agent

References

No response

@jaysonsantos jaysonsantos added the type: bug A code related bug. label Jun 6, 2022
@spencergilbert
Contributor

Hi - are you attempting to use a custom certificate with the source?

@jszwedko jszwedko added the source: kubernetes_logs Anything `kubernetes_logs` source related label Jun 6, 2022
@jaysonsantos
Contributor Author

Hi there @spencergilbert, not really; I used the one that the Helm chart renders:

data_dir: /vector-data-dir
api:
  enabled: true
  address: 127.0.0.1:8686
  playground: false
sources:
  kubernetes_logs:
    type: kubernetes_logs
  host_metrics:
    filesystem:
      devices:
        excludes: [binfmt_misc]
      filesystems:
        excludes: [binfmt_misc]
      mountPoints:
        excludes: ["*/proc/sys/fs/binfmt_misc"]
    type: host_metrics
  internal_metrics:
    type: internal_metrics
sinks:
  prom_exporter:
    type: prometheus_exporter
    inputs: [host_metrics, internal_metrics]
    address: 0.0.0.0:9090
  stdout:
    type: console
    inputs: [kubernetes_logs]
    encoding:
      codec: json

@spencergilbert
Contributor

@jaysonsantos - I suspect this may be related to kube-rs/kube#805, though my understanding was that it was limited to rustls, while we should be using openssl.

You are running on a k3s cluster, correct?
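For context, the `InvalidCertificateEncoding` error above is the rustls/webpki stack rejecting the cluster certificate during strict DER parsing. As a rough, stdlib-only illustration (this is not webpki's actual code; `parse_der_header` is a hypothetical helper for this sketch), "strict DER" means that even a readable ASN.1 length field is rejected if it is not in the single canonical encoding:

```rust
/// Parse a DER TLV header, returning (tag, content_length, header_length),
/// or None if the encoding violates strict DER rules.
fn parse_der_header(input: &[u8]) -> Option<(u8, usize, usize)> {
    let tag = *input.first()?;
    let first_len = *input.get(1)?;
    if first_len < 0x80 {
        // Short form: length fits in a single octet.
        Some((tag, first_len as usize, 2))
    } else {
        // Long form: low 7 bits give the number of following length octets.
        let n = (first_len & 0x7f) as usize;
        if n == 0 || n > 4 {
            return None; // indefinite or oversized length: not valid DER
        }
        let mut len = 0usize;
        for i in 0..n {
            len = (len << 8) | *input.get(2 + i)? as usize;
        }
        // DER demands the minimal encoding: no long form where short form
        // would do, and no leading zero length octets.
        if len < 0x80 || input[2] == 0 {
            return None;
        }
        Some((tag, len, 2 + n))
    }
}

fn main() {
    // An X.509 certificate starts with a SEQUENCE (tag 0x30).
    // Canonical long-form length 0x0100 (256 bytes): accepted.
    assert_eq!(parse_der_header(&[0x30, 0x82, 0x01, 0x00]), Some((0x30, 256, 4)));
    // Non-minimal encoding (long form used for a small length): a lenient
    // BER parser reads this fine, but a strict DER parser rejects it.
    assert_eq!(parse_der_header(&[0x30, 0x81, 0x05]), None);
    println!("strict DER checks passed");
}
```

This is only meant to show the class of discrepancy: an openssl-backed client can tolerate encodings that the rustls stack refuses, which would explain why the same cluster works with one TLS backend and not the other.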

@jaysonsantos
Contributor Author

@spencergilbert that is right, it runs on k3s.
Funny thing though, I spun up a QEMU VM with Debian and k3s in standalone mode, and it worked fine.
I will try to compile a simple CLI that uses kube-rs to see if I can find anything else in there.

@jaysonsantos
Contributor Author

With the following, calling the k8s API seems to work:

cargo new kube-test
cd kube-test

cat >> Cargo.toml <<EOF
futures = "0.3.21"
k8s-openapi = { version = "0.15.0", features = ["v1_23"] }
kube = { version = "0.73.1" }
tokio = { version = "1.19.2", features = ["full"] }
EOF

cat > src/main.rs <<-EOF
use kube::{Client, api::{Api, ListParams, ResourceExt}};
use k8s_openapi::api::core::v1::Pod;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Infer the runtime environment and try to create a Kubernetes Client
    let client = Client::try_default().await?;

    // Read pods in the configured namespace into the typed interface from k8s-openapi
    let pods: Api<Pod> = Api::default_namespaced(client);
    for p in pods.list(&ListParams::default()).await? {
        println!("found pod {}", p.name());
    }
    Ok(())
}
EOF
cargo build
gzip -c target/debug/kube-test | kubectl exec -it -n vector ds-vector-lcpdk -- bash -exc 'cat > /tmp/kube-test.gz'
kubectl exec -it -n vector ds-vector-lcpdk -- bash -exc 'cd /tmp && gunzip kube-test.gz && chmod +x kube-test && ./kube-test'

and the output of the last command is:


+ cd /tmp
+ gunzip kube-test.gz
+ chmod +x kube-test
+ ./kube-test
found pod ds-vector-drkt6
found pod ds-vector-gkpk9
found pod ds-vector-lcpdk
found pod ds-vector-ljvcp
found pod ds-vector-lx4fk

I will try Vector's main branch to see how it goes.

@jaysonsantos
Contributor Author

@spencergilbert

though my understanding was that was limited to rustls while we should be using openssl

Vector is using rustls; compiling Vector with the following patch works:

diff --git a/Cargo.toml b/Cargo.toml
index f089726b0..48c197c31 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -254,7 +254,7 @@ infer = { version = "0.8.0", default-features = false, optional = true}
 indoc = { version = "1.0.6", default-features = false }
 inventory = { version = "0.1.10", default-features = false }
 k8s-openapi = { version = "0.14.0", default-features = false, features = ["api", "v1_16"], optional = true }
-kube = { version = "0.71.0", default-features = false, features = ["client", "rustls-tls", "runtime"], optional = true }
+kube = { version = "0.71.0", default-features = false, features = ["client", "native-tls", "runtime"], optional = true }
 listenfd = { version = "1.0.0", default-features = false, optional = true }
 logfmt = { version = "0.0.2", default-features = false, optional = true }
 lru = { version = "0.7.6", default-features = false, optional = true }

@spencergilbert
Contributor

🤦 Whoops, that's definitely a mistake. Would you like to submit that patch?

@jaysonsantos
Contributor Author

@spencergilbert sure, I will send it soon
