Fluentd's tail plugin was outputting `If you keep getting this message, please restart Fluentd`. After coming across #3614, we implemented the workaround suggested there:
- changed `follow_inodes` to `true`
- set `rotate_wait` to `0`
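The workaround above corresponds to an `in_tail` source roughly like the following sketch. The path, tag, and pos_file location are illustrative assumptions for a Kubernetes node, not the reporter's actual config:

```
<source>
  @type tail
  path /var/log/containers/*.log          # illustrative: container logs on a k8s node
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  # workaround from #3614:
  follow_inodes true   # track files by inode so rotated files are not re-read
  rotate_wait 0        # stop watching the pre-rotation file immediately
  <parse>
    @type none
  </parse>
</source>
```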
Since then we no longer see the original `If you keep getting this message, please restart Fluentd`, but we still see lots of `Skip update_watcher because watcher has been already updated by other inotify event`.
This is paired with a memory leak and a gradual increase in CPU usage until a restart occurs.
To mitigate this I added `pos_file_compaction_interval 20m` as suggested here, but this had no effect on the resource usage. Related to #3614, more specifically #3614 (comment).
The suspicion is that some watchers are not handled properly and therefore leak, increasing CPU/memory consumption until the next restart.
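One way to observe the suspected leak without restarting is Fluentd's `monitor_agent` input (an `@type monitor_agent` source, default port 24220), which lists live plugin instances as JSON at `/api/plugins.json`. A minimal parsing sketch follows; whether per-watcher details appear in the output depends on the Fluentd version, so treat the fields beyond `type` and `retry_count` as assumptions:

```python
import json


def summarize_plugins(payload: dict) -> list:
    """Return one summary line per plugin instance reported by monitor_agent."""
    return [
        f'{p.get("type")}: retry_count={p.get("retry_count")}'
        for p in payload.get("plugins", [])
    ]


# Against a live agent you would fetch the JSON first, e.g.:
#   curl http://localhost:24220/api/plugins.json
# Here we parse an illustrative response instead:
sample = json.loads('{"plugins": [{"type": "tail", "retry_count": null}]}')
print("\n".join(summarize_plugins(sample)))  # → tail: retry_count=None
```

Polling this endpoint over time (alongside process RSS) gives a restart-free signal of whether plugin state keeps growing.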
To Reproduce
Deploy Fluentd (version v1.16.3-debian-forward-1.0) as a DaemonSet in a dynamic Kubernetes cluster consisting of 50-100 nodes. This is the Fluentd config:
So, `follow_inodes false` has a similar issue.
Could you please report the `follow_inodes false` issue in a new issue?
@uristernik
Wasn't there a problem with `follow_inodes false` as well?
I'd like to sort out the `follow_inodes false` problem and the `follow_inodes true` problem separately.
I'd like to know whether there is any difference between `follow_inodes false` and `follow_inodes true`, for example, whether the same resource leakage occurs with `follow_inodes false`.
If there is no particular difference, we are fine with this for now.
Thanks!
Expected behavior
CPU / Memory should stay stable.
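As a rough stability check (a sketch; the sampling source, e.g. `kubectl top pod`, and the strict-growth criterion are assumptions, not part of this report): sample the pod's RSS periodically and flag a likely leak when every sample exceeds its predecessor:

```python
def looks_like_leak(rss_samples_mib: list) -> bool:
    """True if memory grew strictly between every consecutive pair of samples."""
    return len(rss_samples_mib) >= 2 and all(
        later > earlier
        for earlier, later in zip(rss_samples_mib, rss_samples_mib[1:])
    )


print(looks_like_leak([100, 120, 150, 190]))  # → True  (monotone growth)
print(looks_like_leak([100, 120, 118, 121]))  # → False (plateaus/dips)
```

A healthy Fluentd plateaus after warm-up, so at least one pair of samples should be flat or decreasing; the behavior described in this issue would keep returning True until the restart.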
Your Environment
Your Configuration
Additional context
#3614