Increase readpartial() from 1KB to 16KB #880
Conversation
All of those deal with HTTP requests/responses, so I'm not sure it's a fair comparison. However, digging into it… So 👍
I'm not against making it configurable, but what concerns me more is how it could so significantly increase your memory usage. This is just a buffer; it's supposed to be consumed quickly, so it shouldn't hold memory for long.
One of my thoughts is that pulling off 16.5KB of data is going to over-allocate 15.5KB of memory (which is accumulating over time for some reason), hence the increase in memory. If you have any ideas on where I can investigate to see what is happening, that would help short-circuit things on my end a bit.
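To illustrate the concern, here is a hedged sketch using plain IO pipes (not redis-rb internals): `readpartial(maxlen)` returns only the bytes currently available even when `maxlen` is much larger, and each call without an explicit output buffer allocates a fresh String.

```ruby
# Hedged sketch (plain IO pipe, not redis-rb code): readpartial(maxlen)
# returns only the bytes currently available, even with a large maxlen.
r, w = IO.pipe
w.write("x" * 512)

chunk = r.readpartial(16 * 1024)  # ask for up to 16KB
puts chunk.bytesize               # 512: only what was available

# Passing an explicit buffer lets a caller reuse one String across calls
# instead of allocating a new one each time:
buf = String.new(capacity: 16 * 1024)
w.write("y" * 512)
r.readpartial(16 * 1024, buf)
puts buf.bytesize                 # 512, written into the same object
```

Whether the short-lived per-call buffers actually accumulate is exactly the question debated below.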
(answering from another computer)
Well, if it's accumulating, the bug would be that it accumulates. I don't see how one chunk of 16K would be "accumulating" faster than four blocks of 4K. I dumped the buffer on my machine to see. Setup:

```ruby
Redis.new.set("foo", "bar" * 10_000)
```

After reading from the socket:

```json
{"address":"0x7faf5603eb68", "type":"STRING", "class":"0x7faf520bb5b8", "bytesize":16384, "capacity":24575, "value":"$30000\r\nbarbarb........barbarbar", "encoding":"UTF-8", "memsize":24616, "flags":{"wb_protected":true}}
```

But just after, it's:

```json
{"address":"0x7faf5603eb68", "type":"STRING", "class":"0x7faf520bb5b8", "shared":true, "encoding":"UTF-8", "references":["0x7faf56037070"], "memsize":40, "flags":{"wb_protected":true}}
```

We see that Ruby is smart and simply points to a shared empty string. We can assume the allocated string buffer was freed (I'll double check, but I highly doubt such an obvious bug wouldn't have been found and reported yet). So here's a bunch of theories:

**1 - Connection leaks**

Maybe somehow you are leaking connections. A way to verify this would be to get a shell into the Ruby process using rbtrace, and then check how many connection objects are alive. If you can prove this is happening, then I'd be extremely interested in debugging this further with you.

**2 - Memory fragmentation**

If you are using the default memory allocator, untuned, with Sidekiq and many threads, then you might very well be experiencing a lot of memory fragmentation.
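The connection-leak check suggested above could look roughly like this once inside an rbtrace/IRB session. `FakeConnection` is a hypothetical stand-in for the real connection class (e.g. `Redis`); this is an assumption, not redis-rb code:

```ruby
require "objspace"

# Hypothetical stand-in for the real connection class (e.g. Redis).
class FakeConnection; end

pool = Array.new(3) { FakeConnection.new }  # simulate 3 live connections

GC.start
# Inside a live process (e.g. via rbtrace), counting instances of the
# connection class shows whether connections pile up over time:
live = ObjectSpace.each_object(FakeConnection).count
puts live  # at least the 3 we are still holding references to
```

Sampling this count periodically and watching it grow without bound would be evidence of a leak.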
**Other questions**
Thanks for the thorough answer. I missed today's release, so it will be tomorrow before I can try out rbtrace specifically in production.

**Answers**
That last graph is much less concerning. The RSS seems to stabilize around 1GB, which makes much more sense and is quite typical. The average app has a bunch of code and data that isn't properly eager-loaded, so there's some memory growth over the first few requests. Plus it takes some time to reach a request that allocates quite a lot, and after that the memory isn't reclaimed. If anything, this graph makes the previous one much more concerning. Your Sidekiq processes are using several times more memory than your web processes, which isn't normal (unless they use many more threads), and they don't seem to stabilize. You either have a nasty leak in your Sidekiq processes, or excessive fragmentation.
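One way to tell the two apart (a hedged sketch, not from the thread itself): if the Ruby-level heap reported by `GC.stat` stays flat while the process RSS keeps climbing, allocator fragmentation is the likelier culprit; if live slots grow too, it's a real object leak. For glibc-based systems, tuning arenas (e.g. `MALLOC_ARENA_MAX=2`) is a common mitigation for fragmentation in threaded Sidekiq workers.

```ruby
# Hedged sketch: sample Ruby heap stats to separate object growth from
# allocator-level (RSS) growth.
stats = GC.stat
live_slots  = stats[:heap_live_slots]       # objects Ruby keeps alive
total_pages = stats[:heap_allocated_pages]  # pages backing the Ruby heap

puts "live slots: #{live_slots}, heap pages: #{total_pages}"
# If these stay stable while OS-reported RSS climbs, suspect fragmentation
# rather than a Ruby object leak.
```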
@byroot I just wanted to follow up that the big issue was a memory leak in sidekiq/sidekiq#4652. I think the above configuration change made it explode for us due to a bunch of Ruby internal objects that look like:
In total those IMEMO objects had 183 counts:
I don't know if any of the above proves useful for things related solely to redis-rb, but I wanted to share just in case. Anyway, thanks a huge bunch for the initial pointers!
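For reference, a rough way to see per-type internal object counts like the IMEMO ones mentioned above (CRuby-specific; a sketch, not the exact command used in the thread):

```ruby
require "objspace"

# ObjectSpace.count_objects breaks the heap down by internal type tag;
# T_IMEMO covers internal objects such as method entries and iseq metadata.
counts = ObjectSpace.count_objects
puts counts[:T_IMEMO]
puts counts[:T_STRING]
```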
Thanks for sharing, glad you figured out your issue.
I noticed the redis-rb gem is using 1KB per readpartial() call. This is very small in comparison to Mongrel, Unicorn, Puma, Passenger, and Net::HTTP, all of which use 16KB as their read length.
I wanted to hear your feedback on this.
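As an illustration of the trade-off (a hedged sketch, not redis-rb's actual implementation): a larger chunk size means fewer readpartial calls, and thus fewer syscalls, when draining large replies, at the cost of bigger per-call buffers.

```ruby
require "stringio"

CHUNK = 16 * 1024  # the 16KB read length used by Unicorn, Puma, Net::HTTP

# Drain an IO-like object in fixed-size chunks, counting the calls needed.
def read_all(io, chunk_size)
  data = +""
  calls = 0
  loop do
    data << io.readpartial(chunk_size)
    calls += 1
  end
rescue EOFError
  [data, calls]
end

payload = "x" * 30_000
_, calls_16k = read_all(StringIO.new(payload), CHUNK)
_, calls_1k  = read_all(StringIO.new(payload), 1024)
puts "16KB chunks: #{calls_16k} reads; 1KB chunks: #{calls_1k} reads"
```

For a 30KB reply this takes 2 reads at 16KB versus 30 reads at 1KB, which is the syscall-overhead argument behind the proposed change.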