Memory leak with single process #2512

Closed
aaassseee opened this issue Mar 27, 2019 · 21 comments

@aaassseee

Hi everyone, running my Huginn with the single-process container and an external database image leads to a memory leak. The peak memory usage after one week with 35+ agents is 6 GB. May I ask if there is any solution to this high memory usage?

@dsander
Collaborator

dsander commented Mar 28, 2019

That really is a lot of memory. You could help narrow down the issue by disabling a few Agents and then checking if memory usage is still that high. I have 119 Agents on my instance and it's using 700 MB combined (I am using the single-process images).

If you know of Agents that fetch particularly large websites or download files, those would be my first suspects.

@aaassseee
Author

If one of the agents downloads a JSON file larger than 16 MB via a command line agent, will that affect memory usage?

@aaassseee
Author

Moreover, besides stopping agents to check which agent leads to the memory leak, what else can I do to find where the memory leak occurs?

@dsander
Collaborator

dsander commented Mar 29, 2019

If one of the agents downloads a JSON file larger than 16 MB via a command line agent, will that affect memory usage?

No, the memory usage is only affected by operations that are done in Ruby; if the output of the ShellCommandAgent is also big, it could be a factor though.

Moreover, besides stopping agents to check which agent leads to the memory leak, what else can I do to find where the memory leak occurs?

As far as I know there isn't a great way. I think the quickest test would be to disable all Agents/Scenarios, then restart the instance. While monitoring the memory usage, enable one Scenario/Agent chain, run it, and check the memory usage. Then either restart the services or directly continue with the next Agent chain.
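For the monitoring step, a small polling script can make the before/after comparison easier. A minimal sketch, not part of Huginn; it assumes cgroup v1 memory accounting inside the container, and the path and interval are illustrative:

```python
#!/usr/bin/env python3
"""Log container memory usage while enabling one Agent chain at a time."""
import time

# Assumed cgroup v1 path inside the container; on cgroup v2 hosts the
# equivalent file is typically /sys/fs/cgroup/memory.current instead.
CGROUP_PATH = "/sys/fs/cgroup/memory/memory.usage_in_bytes"
INTERVAL_SECONDS = 60


def read_usage_mb() -> float:
    """Return the container's current memory usage in megabytes."""
    with open(CGROUP_PATH) as f:
        return int(f.read().strip()) / (1024 * 1024)


if __name__ == "__main__":
    while True:
        print(f"{time.strftime('%Y-%m-%d %H:%M:%S')}  {read_usage_mb():.1f} MB", flush=True)
        time.sleep(INTERVAL_SECONDS)
```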

Does the memory usage climb beyond 6 GB, or does it creep up to that and then stay there?

@aaassseee
Author

If one of the agents downloads a JSON file larger than 16 MB via a command line agent, will that affect memory usage?

No, the memory usage is only affected by operations that are done in Ruby; if the output of the ShellCommandAgent is also big, it could be a factor though.

True, the ShellCommandAgent is outputting the whole 16 MB JSON.
In addition, we also increased the database's maximum storage size for each field to fit our usage.

@aaassseee
Author

Does the memory usage climb beyond 6GB

It used up to 6 GB because we limited the memory size of the container. I think it would climb beyond 6 GB without the memory limit.

@dsander
Collaborator

dsander commented Mar 30, 2019

It used up to 6 GB because we limited the memory size of the container. I think it would climb beyond 6 GB without the memory limit.

Not 100% sure about that; if the processes really needed more memory to work, it should cause out-of-memory issues.

Can you share the scenarios that cause your instance to use that much memory?

@aaassseee
Author

Sorry for the late reply. I tried stopping some scenarios, and I observed that the memory usage did increase slowly.

Here is the scenario.
sites-alive-monitoring.txt

And the memory usage diagram:
[screenshot: memory usage graph]
It seems that it just isn't releasing the memory.

@aaassseee
Author

In addition, here is the memory usage after disabling the scenario.
[screenshot: memory usage graph]

@dsander
Collaborator

dsander commented Apr 7, 2019

Sorry for the late reply

Likewise 😄

In addition, here is the memory usage after disabling the scenario.

OK, that looks more like the common memory usage pattern of Huginn/RoR applications. The usage climbs but then plateaus at a reasonable amount.

I did see that you are storing some information in the Agent memory in "Huginn memorise site status"; is it possible that the memory of that Agent gets really big (as in dozens of MB or more)? You should be able to inspect the memory on the Agent detail page.

@aaassseee
Author

I did see that you are storing some information in the Agent memory in "Huginn memorise site status"; is it possible that the memory of that Agent gets really big (as in dozens of MB or more)? You should be able to inspect the memory on the Agent detail page.

I didn't see any large JSON in the memory. Here is an example:

{
  "ME": 0,
  "EKIEd": 0,
  "HKPhil": 0,
  "OEM": 0,
  "YM": 0,
  "the Patsy eshop": 0,
  "YM Balloon": 0,
  "Swan Select": 0,
  "Vcity": 0,
  "HKTDC Design Gallery": 0,
  "AFinder": 0,
  "Education-plus": 0
}
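(For what it's worth, a quick way to sanity-check whether an Agent's memory hash is anywhere near "dozens of MB" is to measure its serialized size. A purely illustrative snippet, not part of Huginn; the `memory` value is the truncated example above:)

```python
import json

# Paste the Agent's memory JSON (from its detail page) here; this truncated
# hash is the example from the comment above.
memory = {"ME": 0, "EKIEd": 0, "HKPhil": 0, "OEM": 0, "YM": 0}

size_bytes = len(json.dumps(memory).encode("utf-8"))
print(f"{size_bytes} bytes (~{size_bytes / 1024 / 1024:.4f} MB)")
```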

Will the memory usage climb due to the agent memory changing every 1m?

@dsander
Collaborator

dsander commented Apr 14, 2019

Sorry for the massive delay, I was traveling last week and thought I would get work done, but didn't 😄

Will the memory usage climb due to the agent memory changing every 1m?

It shouldn't; the RAM required to run the Agent should correlate with the size of the Agent's memory, and your structure looks like it would only grow when new sites are added. In fact, none of the Agents in the Scenario look like they should cause a memory leak/bloat that big. Are you sure that the scenario you shared causes the high memory usage? You mentioned that you handle megabyte-sized JSON files, but I don't see the status monitoring Agents doing that.

@aaassseee
Author

aaassseee commented Apr 15, 2019

Are you sure that the scenario you shared causes the high memory usage?

Yes, I am sure this agent causes the memory usage to climb. You can also try it in your environment.

@dsander
Collaborator

dsander commented Apr 15, 2019

Doing that right now, I'll report back on how the memory usage looks on my instance after a few hours.

@dsander
Collaborator

dsander commented Apr 16, 2019

I am seeing the leak as well, guessing it's caused by the JavascriptAgent, but I need more time to dig into it.

@aaassseee
Author

OK, thanks. Feel free to update me.

@dsander
Collaborator

dsander commented Apr 17, 2019

Can you test out this image? huginnbuilder/huginn-single-process:pr-1961
I created #1961 ages ago for another reason, but it looks like the new JavaScript runtime fixes the memory leak we were getting with the old one. My instance has been running for 7 hours and has a stable memory usage of about 210 MB.
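If the instance runs via docker-compose, swapping in the test image is a one-line change. An illustrative fragment only; the service name and the rest of the configuration (database, env vars) are assumed to match the existing setup:

```yaml
services:
  huginn:
    # Test image from this thread; switch back to the regular
    # single-process image tag once the fix is merged.
    image: huginnbuilder/huginn-single-process:pr-1961
```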

@aaassseee
Author

It fixed my problem. However, will it be merged into the master branch? Or, if not, will your image keep being updated to include new features?

@aaassseee
Author

@dsander Thanks for the help. I will keep an eye on the memory usage.

@dsander
Collaborator

dsander commented Apr 20, 2019

It fixed my problem. However, will it be merged into the master branch? Or, if not, will your image keep being updated to include new features?

We just merged the fix, you can switch back to the normal image.

@aaassseee
Author

@dsander Thank you very much.
