Memory bloat or leak after upgrading to 4.2.5 #981
Comments
Do you use hiredis or the ruby driver?
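For context, a quick way to check or control which driver redis-rb 4.x uses — a config sketch, not taken from this thread; the `:hiredis` driver only works if the separate hiredis gem is installed:

```ruby
require "redis"

# redis-rb 4.x uses the pure-Ruby driver by default.
# It can be selected explicitly when building the client:
redis = Redis.new(driver: :ruby)      # or driver: :hiredis (needs gem "hiredis")
```

If the Gemfile doesn't mention `hiredis` and no `driver:` option is passed anywhere, the app is on the Ruby driver.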
There are two ways you could go about this, I think. If this memory growth isn't too impactful to your platform, you can bisect the redis-rb versions. Of course, in that range there's #880, which rings a bell: it sparked some increased memory usage complaints that turned out to be a bug in Sidekiq. Might be worth checking whether you hit the same issue. Now, in terms of actually debugging memory issues, …
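Bisecting here mostly means pinning the gem in the Gemfile, deploying, and watching the memory graph — a sketch, using the version numbers discussed in this thread:

```ruby
# Gemfile — pin redis-rb to one candidate version per deploy,
# then watch production memory for the jump.
gem "redis", "4.1.3"   # next candidates: 4.1.4, 4.2.0, ..., 4.2.5
```

Repeat, halving the remaining version range each time, until the first version that shows the higher memory level is found.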
Thanks for your response. We do not use hiredis. The memory growth is tolerable, so I will try out some other versions. It looks like #880 went into 4.1.4, so I'll start with 4.1.3 and/or 4.1.4. I read through #880 and it sounds like it shouldn't apply to us because we don't use Sidekiq Pro, but of course you never know. One more piece of info: I did try 4.2.5 with the latest Sidekiq (6.1.3) and saw the same memory problems. That was actually the reason I upgraded redis versions in the first place: Sidekiq 6 requires redis 4.2. I upgraded them together and that's when I first saw the memory issue, so I reverted everything and systematically upgraded individual gems, which is how I isolated it to redis.
I'll close this, since the entire connection code has been replaced on master for the upcoming 5.0. Feel free to comment if somehow this is still a problem after upgrading to 5.0 (to be released soon).
I recently upgraded our app from 4.0.3 to 4.2.5 and our memory use in production immediately went up by about 40%. I rolled back the change and the memory went back down. Nothing else changed between those releases, so it seems the difference must be due to something in this gem.
Some more info about the app:
Ruby 2.7.2, Rails 5.2.4.5, Sidekiq 5.2.6, Puma 5.2.2
Hosted on Heroku.
Heroku Redis v4.0.14.
In our app, redis is used almost exclusively by Sidekiq; it's also used by Flipper. The application code doesn't touch redis directly, only through those two gems.
Interestingly, the change in memory only happened on our web dynos (which run the Rails server with Puma). The Sidekiq worker dynos were unchanged. Here's a look at the memory on the web dynos over 24 hours where I upgraded and then downgraded again.
Prior to the above graph I was running 4.2.5 for a week, and for that entire week the web dynos ran at the higher memory level, so the issue was not isolated to the ~7 hours shown above; this is just the clearest picture of it in a single graph. I'm inclined to say this is bloat rather than a leak, because with 4.2.5 memory does level off after running long enough; it just levels off much higher than with 4.0.3.
I tried running `memory_profiler` in my local environment with both versions of the gem (and also 4.1.1) to see if it would show an increase, but the results were roughly identical. Here is the approach I used: most of the app's web traffic is webhooks, where the web process does nothing except drop the data into redis via Sidekiq, to be processed by the Sidekiq workers. So to simulate this I ran `memory_profiler` in the console and put 10,000 jobs into Sidekiq. I thought I might see more allocated or retained memory with 4.2.5, but the results were nearly identical. I'm not an expert at using `memory_profiler`, so if anyone has suggestions about other ways to investigate this, I'm all ears.

I also tried looking into allocations and memory on New Relic and Scout, but wasn't able to find anything of note. On Scout the allocations didn't look any different while on 4.2.5.
I'm not sure where to go from here, but based on what I've seen in production it seems like it must be an issue in this gem. Thanks in advance for any help or advice. Let me know what information would be useful or what else I should try.