
How to only read/write new logs from a specific pod using ReadNamespacedPodLogWithHttpMessagesAsync #1491

Open
snehapar9 opened this issue Dec 27, 2023 · 8 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@snehapar9

snehapar9 commented Dec 27, 2023

Describe the bug

ReadNamespacedPodLogWithHttpMessagesAsync reads all the logs (including historical ones) from a specific pod. I cannot find a good way to read only the recent logs that have not already been read/written in previous iterations.
I tried the following, but none of these approaches actually solves it:
1. The watch mechanism is not supported by ReadNamespacedPodLogWithHttpMessagesAsync, so we cannot use watch to detect new logs and write only those. I also added a watch that writes only when an event is detected, but events are raised only when a pod is added/deleted/modified/errored in the namespace; a change in the log does not trigger an event, so this is not a viable solution.
2. Seek is not supported, so we cannot move the pointer to the end of the old logs and write only the new ones.
3. I experimented with the tailLines argument (see the sketch below), but it is not clear how to set the number of lines without doing extra read operations, so it does not seem like a good solution.
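
For illustration, a minimal sketch of the tailLines attempt from item 3; the line count is an arbitrary guess, which is exactly the part that cannot be chosen safely without extra reads:

// tailLines limits the response to the last N lines of the pod log,
// but N has to be guessed, which risks skipping or re-reading lines.
var response = await _kubernetesClient.CoreV1.ReadNamespacedPodLogWithHttpMessagesAsync(
    pod.Metadata.Name, _kubernetesConfig.Namespace,
    container: container.Name,
    tailLines: 100); // arbitrary value for this sketch

using var reader = new StreamReader(response.Body);
while (!reader.EndOfStream)
{
    Console.WriteLine(await reader.ReadLineAsync());
}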

Kubernetes C# SDK Client Version
e.g. 9.0.1

Server Kubernetes Version
e.g. 1.22.3

Dotnet Runtime Version
e.g. net6

To Reproduce
Steps to reproduce the behavior:

Expected behavior

The capability to read only the recent logs that have not already been read from the pod.

Example: the following code reads all the logs (including historical ones) on every iteration. The expectation is to read only the logs that have not been read in previous iterations, without having to hard-code the tailLines argument, because that risks losing some logs.

while (!context.CancellationToken.IsCancellationRequested)
{
    var stream = await _kubernetesClient.CoreV1.ReadNamespacedPodLogWithHttpMessagesAsync(pod.Metadata.Name, _kubernetesConfig.Namespace, container: container.Name);
    using var reader = new StreamReader(stream.Body);
    while (!reader.EndOfStream)
    {
        var logLine = await reader.ReadLineAsync();
        // Write logLine to the console
    }

    await Task.Delay(1000);
}

KubeConfig
If applicable, add a KubeConfig file with secrets redacted.

Where do you run your app with Kubernetes SDK (please complete the following information):

  • OS: [e.g. Linux]
  • Environment [e.g. container]
  • Cloud [e.g. Azure]

Additional context
Add any other context about the problem here.

@tg123
Member

tg123 commented Dec 31, 2023

Check out the example at https://github.com/kubernetes-client/csharp/blob/master/examples/logs/Logs.cs
It is the equivalent of kubectl logs -f.

You will get a stream of log output, so everything read from the stream is new.
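
For context, a condensed sketch along the lines of that example; the pod name and namespace are placeholders, and the client setup shown is the usual kubeconfig-based one:

using k8s;
using System;

var config = KubernetesClientConfiguration.BuildConfigFromConfigFile();
var client = new Kubernetes(config);

// follow: true keeps the HTTP response open, like `kubectl logs -f`,
// so the stream only delivers lines as they are produced.
using var logStream = await client.CoreV1.ReadNamespacedPodLogAsync(
    "ping", "default", follow: true);

// Copy everything that arrives straight to stdout.
await logStream.CopyToAsync(Console.OpenStandardOutput());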

@snehapar9
Author

snehapar9 commented Jan 11, 2024

Thanks @tg123! I gave this a shot by setting the follow flag to true, but it does not seem to stream logs when running inside a while loop until the cancellation token is requested. Are there any other alternatives?

Edit: I'm not receiving any logs at all when I set follow to true. Am I missing something?
[screenshot]

@tg123
Member

tg123 commented Jan 14, 2024

Here is how to test the example (make sure there is only one pod):

kubectl run ping --image=busybox --command=true ping localhost

[screenshot]

@snehapar9
Author

Thank you @tg123! I was able to get this to work.

However, it seems the cancellation token is not respected. I have a scenario where I'm constantly writing to the pod and simultaneously reading from it, and the reader continues reading logs until the end of the stream even after cancellation has been requested.
I have already tried passing the cancellation token to the ReadLineAsync method, but that does not seem to do the trick.

@tg123
Member

tg123 commented Jan 17, 2024

Linking the cancellation token to the stream's ReadLineAsync should work.
Did you try .NET 8? Some .NET APIs do not honor the cancellation token, or do not take one as a parameter at all.
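
For illustration, a minimal sketch of that suggestion, assuming .NET 7 or later (where StreamReader.ReadLineAsync accepts a CancellationToken) and reusing the field names from the code in this thread; ct is a hypothetical caller-supplied CancellationToken:

// Assumes .NET 7+ for the ReadLineAsync(CancellationToken) overload.
// _kubernetesClient, pod, container and _kubernetesConfig are the names
// used elsewhere in this thread; ct is a caller-supplied CancellationToken.
using var stream = await _kubernetesClient.CoreV1.ReadNamespacedPodLogAsync(
    pod.Metadata.Name, _kubernetesConfig.Namespace,
    container: container.Name, follow: true,
    cancellationToken: ct);

using var reader = new StreamReader(stream);
try
{
    string? logLine;
    // ReadLineAsync(ct) throws OperationCanceledException once ct is cancelled,
    // instead of blocking until the followed stream eventually ends.
    while ((logLine = await reader.ReadLineAsync(ct)) != null)
    {
        Console.WriteLine(logLine);
    }
}
catch (OperationCanceledException)
{
    // Cancellation requested: stop reading; disposing the stream closes the connection.
}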

@snehapar9
Author

snehapar9 commented Jan 17, 2024

using (var stream = await _kubernetesClient.CoreV1.ReadNamespacedPodLogAsync(pod.Metadata.Name, _kubernetesConfig.Namespace, container: container.Name, follow: true))
{
    using (var reader = new StreamReader(stream))
    {
        while (!reader.EndOfStream)
        {
            var logLine = await reader.ReadLineAsync();
            // Write logLine to response stream
        }
    }
}

@tg123 Thanks! The stream with the pod logs is returned almost instantly, but reading the logs line by line has been very inefficient. I've experimented with ReadAsync as well, but it does not improve things much. Is there a more efficient way to read?

I need to read about 2,000 lines in a few seconds, but it currently takes almost 10 minutes.
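
For reference, a minimal sketch of the buffered ReadAsync variant mentioned above; the buffer size is an arbitrary choice, and forwarding raw chunks rather than individual lines is an assumption of this sketch, not something confirmed in the thread:

// Read the followed log stream in larger chunks instead of line by line.
using var stream = await _kubernetesClient.CoreV1.ReadNamespacedPodLogAsync(
    pod.Metadata.Name, _kubernetesConfig.Namespace,
    container: container.Name, follow: true);

var buffer = new byte[8192]; // arbitrary buffer size for this sketch
int bytesRead;
while ((bytesRead = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
{
    // ReadAsync returns as soon as any data is available, so a chunk may end
    // in the middle of a line; the consumer has to tolerate partial lines.
    var chunk = System.Text.Encoding.UTF8.GetString(buffer, 0, bytesRead);
    Console.Write(chunk);
}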

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 18, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 18, 2024