Describe the bug
Flushing the buffer fails with RangeError - "bignum too big to convert into `long long'".
To Reproduce
Not an exact reproduction, but my configuration is included below. I'm not sure which value is the bignum that is too big; I assume a record occasionally comes through with an integer above the signed 64-bit (BSON int64) limit of 2**63 - 1. It passes through the JSON parser fine, but I'm not sure why it then fails when the mongo output flushes the buffer.
Expected behavior
Almost any other behavior would be better: write -1 instead of the real value, clamp it to the largest long long, substitute some placeholder, or drop that record entirely - anything except the whole chunk failing over and over. This seems to be what eventually leaves fluentd stuck.
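For example, a record_transformer filter could clamp or replace the offending value before it ever reaches the mongo output. This is only a sketch under assumptions: the field name bytes_total and the tag pattern are made up, and it presumes the oversized value always arrives in one known numeric field.

```
<filter your.tag.**>
  @type record_transformer
  enable_ruby true
  # keep the evaluated value as an Integer rather than a String
  auto_typecast true
  <record>
    # clamp the (assumed) numeric field to the largest BSON int64 value
    bytes_total ${[record["bytes_total"].to_i, 2**63 - 1].min}
  </record>
</filter>
```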
Your Environment
Your Configuration
Your Error Log
Additional context
It is running in DigitalOcean Kubernetes as a DaemonSet.

Even more, I cannot understand all the buffer settings. If I remove retry_forever and use a retry_timeout of 24h (86400), I still don't get the problem chunks deleted - these settings do not seem to apply at the chunk level? I have chunks that have been queued and failing for months, some even from last year, and they cannot get deleted: whenever some chunk is flushed as expected, retry_times, next_retry_time and everything else are reset to the initial state for all chunks, even the broken ones?

It just happened again: fluentd got completely stuck, this time in minikube. Same as before, it was reporting 'bignum too big' for two chunks, and it only became unstuck after I removed those chunks and restarted fluentd (a simple restart was not enough; the chunks had to be removed before restarting). The chunks in question are attached in logs.zip. Please advise, as I don't know what else to do to mitigate this issue.
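For comparison, here is a hedged sketch of buffer settings that bound retries and divert chunks that exhaust them to the bundled secondary_file output instead of leaving them queued. The tag pattern, paths and connection details are placeholders, and whether retry state is really tracked per chunk is exactly the open question above.

```
<match mongo.**>
  @type mongo
  # database/collection/connection settings omitted

  <buffer>
    # stop retrying a chunk after a bounded number of attempts
    # instead of retry_forever
    retry_max_times 10
    retry_timeout 24h
  </buffer>

  <secondary>
    # chunks that exhaust their retries are dumped to disk here
    # instead of blocking the queue
    @type secondary_file
    directory /var/log/fluent/error
    basename mongo_failed
  </secondary>
</match>
```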