Memory leak #857
Thanks @bsandmann - our agents running in Kubernetes haven't seen similar behaviour. This isn't to say there isn't a leak; it might just be that the resource constraints and garbage collection are hiding it - checking the …
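For context, this is the kind of Kubernetes memory constraint that can mask a slow leak by forcing the JVM to collect aggressively; a minimal sketch with illustrative names and values, not taken from any actual manifest in this project:

```yaml
# Excerpt from a Deployment pod spec: the hard limit caps the container,
# so a leaking process hits GC pressure (or an OOMKill) instead of growing freely.
containers:
  - name: prism-agent        # hypothetical container name
    resources:
      requests:
        memory: "2Gi"        # scheduler reservation
      limits:
        memory: "4Gi"        # hard cap enforced by the kubelet
```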
Update: I've run agents using the docker compose file included within the repo for several days at a time. I can observe a build-up of memory usage, but garbage collection kicks in and reduces it; the GC saw-tooth pattern is present, so all looks good. I don't want to close this just yet, not until I've done a specific test to concretely prove there is no leak. As such, I'll be soak-testing some agents towards the end of this week and will post the results on this ticket. I'll leave the agents idle for 24 hours, soak test for 24 hours (if possible; it may need to be shorter), and then leave them idle again for 24 hours.
Update: I've had issues running these tests (due to local hardware, nothing to do with the Agent) and need to repeat them.
tl;dr To investigate a memory leak, I care about the part where this graph is stable. After that, I want to know how frequently the garbage collection runs. @davidpoltorak-io So my suggestion would be to limit the memory of the JVM to 4, 5, or 6 GB (the minimum for the system to be stable and where the garbage collection starts working), and let's print GC stats/details. I'm not sure which flag we need to pass when starting the JVM to print this information. Maybe …
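For reference, these are the standard JVM options for capping the heap and logging GC activity; whether the agent image picks up `JAVA_TOOL_OPTIONS` (or needs the flags passed some other way) is an assumption to verify:

```sh
# The JVM reads JAVA_TOOL_OPTIONS automatically at startup.
# JDK 9+ unified logging: heap capped at 4 GB, GC events logged with timestamps.
export JAVA_TOOL_OPTIONS="-Xmx4g -Xlog:gc*:file=/tmp/gc.log:time,uptime"

# JDK 8 equivalent:
# export JAVA_TOOL_OPTIONS="-Xmx4g -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/tmp/gc.log"

# Alternatively, sample GC behaviour on an already-running JVM without restarting it:
jstat -gcutil <pid> 1000   # one line of GC utilisation stats per second
```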
@davidpoltorak-io, do you have a contributor account to reassign this issue to? cc @yshyn-iohk @mineme0110
@bsandmann, we have a screenshot from the Grafana dashboard, and a similar screenshot for the same agent over the last seven days [screenshots not preserved in this export]. These pictures don't look like a memory leak issue. Could you share how you run the agent and any other essential details for reproducing it?
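For anyone reproducing such a dashboard: assuming the metrics come from cAdvisor scraped by Prometheus (an assumption; the thread doesn't state the actual data source), a query along these lines plots per-container memory over time:

```promql
# Working-set memory of the agent container; the pod label pattern is illustrative.
container_memory_working_set_bytes{pod=~"prism-agent.*"}
```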
@yshyn-iohk Thanks for taking a look at it. I'm following the Quick-Start instructions without any modifications on an Ubuntu 22.04 installation. I haven't looked deeper into the issue yet, but I've noticed this behavior on similar setups with one or multiple agents. Some ideas: …
Please run … What I think may be happening is Docker itself basically eating all the available RAM on the VM. You can of course limit how much RAM Docker may use, but by default it will happily use all that's available and reserve it for the containers: https://docs.docker.com/config/containers/resource_constraints/ I realized I have the same problem on my test deploy :)
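As a sketch of that suggestion: a memory cap can be set per service in the compose file (the service name and values here are illustrative; match them to the repo's docker-compose.yml):

```yaml
services:
  prism-agent:           # hypothetical service name
    mem_limit: 4g        # hard cap: the container cannot exceed this
    mem_reservation: 2g  # soft limit Docker tries to honour under contention
```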
Thanks, @bsandmann, for the additional information! |
@yshyn-iohk Here is an additional data capture from the VM [capture not preserved in this export].
Is this a regression?
Yes
Description
It seems like the prism-agents (tested on 1.19.1) have a memory leak. I'm running a single agent on Ubuntu 22.04 using the "docker-compose.yml" setup as described in the README of the GitHub page. The memory usage of that single agent was slowly increasing by around 1 GB a day, and that's just a single agent at idle with no user interaction at all. I haven't investigated whether it is the agent, the node, or some other component, but it's something someone should look into. See screenshot. I'm currently also testing 1.24.0, but it looks like it's leaking at the same rate.
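One generic way to attribute that growth to a specific container (not project-specific; the log path and sampling interval are arbitrary choices) is to sample `docker stats` periodically:

```sh
# Append a timestamped per-container memory snapshot every 5 minutes.
while true; do
  date >> mem.log
  docker stats --no-stream --format "{{.Name}}\t{{.MemUsage}}" >> mem.log
  sleep 300
done
```

Comparing the first and last snapshots per container name shows which process is actually accumulating memory.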
Please provide the exception or error you saw
No response
Please provide the environment you discovered this bug in
No response
Anything else?
No response