Performance of kubernetes_metadata_filter plugin #179

Closed
apu1111 opened this issue Jun 7, 2019 · 4 comments

apu1111 commented Jun 7, 2019

Hi,
Wanted to know if there is any performance measurement data available for this plugin. Has anyone faced performance issues running it in fluentd? What we have noticed in our testing of this plugin is not at all encouraging.
We were testing the performance of fluentd running as a daemon in a k8s cluster, based on fluent/fluentd-benchmark, and it delivered almost the numbers it promised; but the moment we added the kubernetes_metadata_filter plugin, performance went down significantly.

Setup:

The application writes logs at 20k lines per second, which fluentd reads using the tail input and then reports as a messages-per-second count.

Outcome:

Without kubernetes_metadata_filter, fluentd was able to read at the same rate, i.e. 20k lines per second, but with kubernetes_metadata_filter added to the config it comes down to 4k lines per second. This is in a single-worker setup.

Config:

[config attached as a screenshot; not reproduced here]
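
Since the config survives only as a screenshot, below is a minimal sketch of the setup as described above (a tail source, the kubernetes_metadata filter, and a flowcounter_simple output). The paths and tag are assumptions for illustration, not the original values; the flowcounter_simple options match the indicator:num / unit:second fields visible in the logs below.

<source>
  @type tail
  path /var/log/containers/*.log               # assumed log path
  pos_file /var/log/fluentd-containers.log.pos # assumed position file
  tag kubernetes.*
  read_from_head true
  <parse>
    @type json
  </parse>
</source>

# Enrich each record with pod/namespace metadata from the Kubernetes API
<filter kubernetes.**>
  @type kubernetes_metadata
</filter>

# Count throughput, as reported by the flowcounter_simple lines below
<match kubernetes.**>
  @type flowcounter_simple
  indicator num
  unit second
</match>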


apu1111 commented Jun 7, 2019

Version:

root@fd-log-router-8mqzw:/home/fluent# fluentd --version
fluentd 1.4.2
root@fd-log-router-8mqzw:/home/fluent# gem list | grep kube
fluent-plugin-kubernetes (0.3.1)
fluent-plugin-kubernetes_metadata_filter (2.1.6)
fluent-plugin-kubernetes_sumologic (2.3.1)
kubeclient (1.1.4)
root@fd-log-router-8mqzw:/home/fluent#

Logs:

{"log":"2019-06-07 13:10:47 +0000 [info]: #0 plugin:out_flowcounter_simple\u0009count:1\u0009indicator:num\u0009unit:second\n","stream":"stdout","time":"2019-06-07T13:10:47.042130594Z"}
{"log":"2019-06-07 13:10:53 +0000 [info]: #0 stats - namespace_cache_size: 1, pod_cache_size: 4, namespace_cache_api_updates: 4, pod_cache_api_updates: 4, id_cache_miss: 4\n","stream":"stdout","time":"2019-06-07T13:10:53.088928762Z"}
{"log":"2019-06-07 13:10:54 +0000 [info]: #0 plugin:out_flowcounter_simple\u0009count:1\u0009indicator:num\u0009unit:second\n","stream":"stdout","time":"2019-06-07T13:10:54.059843922Z"}
{"log":"2019-06-07 13:12:19 +0000 [info]: #0 stats - namespace_cache_size: 1, pod_cache_size: 4, namespace_cache_api_updates: 4, pod_cache_api_updates: 4, id_cache_miss: 4\n","stream":"stdout","time":"2019-06-07T13:12:19.09189152Z"}
{"log":"2019-06-07 13:12:20 +0000 [info]: #0 plugin:out_flowcounter_simple\u0009count:1\u0009indicator:num\u0009unit:second\n","stream":"stdout","time":"2019-06-07T13:12:20.074740334Z"}
{"log":"2019-06-07 13:12:31 +0000 [info]: #0 plugin:out_flowcounter_simple\u0009count:1\u0009indicator:num\u0009unit:second\n","stream":"stdout","time":"2019-06-07T13:12:31.100833374Z"}
{"log":"2019-06-07 13:12:51 +0000 [info]: #0 stats - namespace_cache_size: 1, pod_cache_size: 4, namespace_cache_api_updates: 4, pod_cache_api_updates: 4, id_cache_miss: 4\n","stream":"stdout","time":"2019-06-07T13:12:51.090324568Z"}
{"log":"2019-06-07 13:12:52 +0000 [info]: #0 plugin:out_flowcounter_simple\u0009count:1\u0009indicator:num\u0009unit:second\n","stream":"stdout","time":"2019-06-07T13:12:52.081318301Z"}
{"log":"2019-06-07 13:14:37 +0000 [info]: #0 stats - namespace_cache_size: 1, pod_cache_size: 4, namespace_cache_api_updates: 4, pod_cache_api_updates: 4, id_cache_miss: 4\n","stream":"stdout","time":"2019-06-07T13:14:37.091158847Z"}
{"log":"2019-06-07 13:14:38 +0000 [info]: #0 plugin:out_flowcounter_simple\u0009count:1\u0009indicator:num\u0009unit:second\n","stream":"stdout","time":"2019-06-07T13:14:38.048769723Z"}
{"log":"2019-06-07 13:14:39 +0000 [info]: #0 plugin:out_flowcounter_simple\u0009count:3\u0009indicator:num\u0009unit:second\n","stream":"stdout","time":"2019-06-07T13:14:39.051537928Z"}


richm commented Jun 7, 2019

I don't know of any performance measurement data available for this plugin, and I'm not familiar with Ruby performance measurement. Any help would be appreciated.
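
For anyone who wants to measure: a minimal sketch using Ruby's standard Benchmark module. The sample record and the commented-out filter call are hypothetical placeholders, not this plugin's actual test harness.

require 'benchmark'

# Hypothetical record shaped like a container log line
record = { 'log' => 'test line', 'stream' => 'stdout' }
n = 100_000

Benchmark.bm(10) do |x|
  # Baseline: the cost of touching the record without any filtering
  x.report('baseline:') { n.times { record.dup } }
  # Swap in the filter under test here, e.g. via a Fluent::Test driver:
  # x.report('filter:')  { n.times { driver.filter(record) } }
end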

jcantrill commented

Related to #39.

jcantrill commented

Closing in lieu of merging #347, where we were able to identify some valuable changes.
