Crash and log spew on startup about disabling filter chain optimization #138
Those messages look OK - I suppose you could suppress them. Not sure what the problem is - a stacktrace or error message would be helpful.
How could I dump a stacktrace?
We need to see the crash or an actual error. What is the actual problem?
Maybe this is related to an issue I'm having right now. The fluentd log says:
I'm trying to use a filter for parsing structured logs in combination with this Kubernetes plugin. Part of my config:
Ultimately, fluentd keeps crashing until I remove my parser filter.
What version of fluent-plugin-kubernetes_metadata_filter are you using? What version of fluentd? It looks like you want to do the JSON parsing of the
The fluent-plugin-kubernetes_metadata_filter has some logic in it so that it will only attempt to parse the
Thanks for the quick answer. This is happening with the fluent/fluentd-kubernetes-daemonset:v1.3.0-debian-cloudwatch image, which uses fluent-plugin-kubernetes_metadata_filter version 2.1.4.

From the error messages there seems to be some kind of recursion going on, as if the same log entries were being parsed over and over again. I still have no idea why this is happening. Chaining filters shouldn't be a problem from what I've read about fluentd. AFAIK the kubernetes_metadata_filter only adds some properties under "kubernetes"!? Why is it messing with the log message? Using kubernetes_metadata together with structured log parsing should be a regular use case.
The fluent-plugin-kubernetes_metadata_filter should not be messing with the log messages. We removed that feature: https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter#configuration And - I don't see any evidence that this is happening, so that's good. I don't think this error is really a problem:
I don't see that from what you have posted, so I'm not sure what you mean. My question is - what happens in the
I'm not sure how to test that when the services I want to monitor produce JSON logs.
This is a JSON log. However:
This means - parse the field
The value of the
OK, I see the point. The services I'm interested in do have structured logging, but when my filter is applied to everything running inside the Kubernetes cluster, there are going to be errors from any human-readable logs. The proper approach would probably be to narrow down the filter pattern from this
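A narrowed parser filter along these lines might work (a sketch only; the tag pattern and field name here are hypothetical and depend on how your sources tag container logs):

```
# Sketch: apply the JSON parser only to containers of a specific app,
# instead of to every record in the cluster. The tag pattern below is
# an assumption -- adjust it to match how your tail source tags logs.
<filter kubernetes.var.log.containers.myapp-**.log>
  @type parser
  key_name log        # parse the "log" field written by the container runtime
  reserve_data true   # keep the other fields on the record
  <parse>
    @type json
  </parse>
</filter>
```

Records whose tags don't match the pattern pass through untouched, so human-readable logs from other workloads no longer trigger parse errors.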
ok - I'm really not sure what's going on - for debugging, what I usually do is to put code like this to intercept what the record looks like at various points in the pipeline:
then I can see what the record looks like before and after the filter I'm interested in. So, in your case, put a stdout filter before the kubernetes metadata filter, then after, and see if the filter is altering the record in a bad way |
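The debugging approach described above can be sketched as follows (the original snippet was not preserved in this thread, so this is an illustrative reconstruction; fluentd's built-in `stdout` filter dumps each record to fluentd's own log without altering it):

```
# Dump each record before the kubernetes metadata filter...
<filter kubernetes.**>
  @type stdout
</filter>

<filter kubernetes.**>
  @type kubernetes_metadata
</filter>

# ...and again after it, to see whether the filter altered the record.
<filter kubernetes.**>
  @type stdout
</filter>
```

Comparing the two printed versions of a record shows exactly what the filter in between added or changed.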
Thanks, I will try that out.
As a follow-up, the problem I encountered didn't have anything to do with this plugin. My problem was that the fluentd-related events weren't discarded into @type null as expected; that's why the log recursion happened, which crashed the fluentd service.
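For reference, a minimal sketch of discarding fluentd's own events (fluentd tags its internal logs with the `fluent.` prefix by default, e.g. `fluent.info`, `fluent.warn`):

```
# Discard fluentd's own internal events so they are never fed back
# through the pipeline, which would otherwise cause the kind of
# log recursion described above.
<match fluent.**>
  @type null
</match>
```

If this match is missing (or placed after another catch-all match that grabs the events first), fluentd's warnings about its own records can re-enter the pipeline and amplify themselves.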
+1 having the same issue |
same here |
Closing this issue per the previous comment. The disabled filter chain optimization should be fixed by #263
I've been seeing this log when using the Papertrail Docker image as defined by fluent-plugin-papertrail. I used to see a crash at some point, but I haven't seen that happen recently. I do still see these logs being thrown on startup of fluentd. Could they be ignored?
The configuration as part of the papertrail package is: