Added parameter to set await termination timeout #179
Conversation
I tried to address the issue with the queued messages in the
Commits: 46a0db8 to 427bca6 (compare)
Hello, @snorwin. Recently I needed to solve the same problem which you're addressing with your PR. I'm using this library within a Lambda function. The problem is that the Lambda finishes its work and doesn't wait for the Splunk appender to send the last batch to the Splunk server. I tried your last commit, but unfortunately it doesn't log all messages every time. I can't understand why, though. I tried it with different amounts of load; with 300 Lambda requests it usually looks good, but with 3000 requests (there is a queue that feeds the Lambda) some log messages are missing. What helped me, but significantly hurt performance, was using a synchronous HTTP request for the last flush (
@guitarosexual is it possible to share your example as runnable code? I would be very happy to look at it and debug it.
@snorwin sure. The only thing is I removed everything related to the Logback appender, since I don't need it.
Thanks @guitarosexual, can you also share the example code with the Lambdas so that I can reproduce it?
Sorry @snorwin, I think I'm not allowed to share the actual Lambda code. I can send just the log4j2.xml settings.
@snorwin I created a sanitized version of the Lambda function, which can be used for debugging.
@guitarosexual I tried to reproduce it, but I was not able to do so. TBH I'm not an expert on AWS Lambda functions either. But I still think that this PR facilitates termination without losing log messages in general (i.e. outside of AWS Lambda). Maybe @zenmoto or @shakeelmohamed can also take a look at it.
Thank you, @snorwin
I encountered the same issue described in #152 while running a batch application that terminates on completion. Before finding this PR, I made very similar changes with a similar result (i.e. resolved the issue). Like @guitarosexual mentioned, I was worried about the final http request to Splunk, so my changes also altered the behavior to ensure the last http request was synchronous rather than asynchronous. That said, I did not rigorously test whether omitting this additional change made an observable impact on log visibility. |
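The "synchronous last request" workaround mentioned above can be sketched as follows. This is an illustrative stand-in, not the library's actual code: the executor, the `done` latch, and the printed messages are hypothetical, and the `System.out.println` stands in for the final HTTP POST to Splunk.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SyncFinalFlush {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService sender = Executors.newSingleThreadExecutor();
        CountDownLatch done = new CountDownLatch(1);

        // Normal batches are fire-and-forget; the final one signals a latch
        // so the shutdown path can block until it has actually been sent.
        sender.submit(() -> {
            System.out.println("POST final batch"); // stand-in for the HTTP call
            done.countDown();
        });

        done.await();       // block until the final send has completed
        sender.shutdown();
        System.out.println("safe to exit");
    }
}
```

The trade-off the commenters observed follows directly from this: the caller thread stalls on `done.await()`, which is why the synchronous variant hurts throughput.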
@snorwin We had the same issue for a short-lived batch job running in a Kubernetes pod. Verified the patch on our end and it solves the issue. Would love to see this merged!
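For context, the pattern this PR adds (a configurable await-termination timeout on shutdown) typically looks like the following. This is a minimal sketch, not the library's actual implementation; the executor, the 5-second timeout, and the printed messages are hypothetical.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GracefulShutdown {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService sender = Executors.newSingleThreadExecutor();

        // Queue a "last batch" the way an async appender would.
        sender.submit(() -> System.out.println("sending final batch"));

        // Stop accepting new work, then wait up to a configurable timeout
        // for the already-queued sends to drain before the JVM exits.
        sender.shutdown();
        boolean drained = sender.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("drained=" + drained);
    }
}
```

Unlike the fully synchronous flush, this bounds the delay at shutdown: a short-lived job waits at most the configured timeout for queued messages instead of blocking on every send.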
@fantavlik @ashah-splunk is there a chance that you could have a look at this PR? |
Thanks so much for the contribution @snorwin - this looks good to me.
Partially solves #152, and I think this option is worth adding.