Under transient high-traffic spikes, the memory usage of Logrus's internal buffer pool stays high #1124
Comments
hello @edoger, that's actually the point of using a sync.Pool here: to relieve pressure on the garbage collector. I'm not sure how efficient this common implementation is, though.
@dgsb We tested this in practice: resetting the buffer before placing it back in the pool effectively alleviates this particular problem.
I'm not sure I understand how that would solve the issue. As far as I understand the …
@dgsb After resetting the buffer, the occupied memory can be released quickly by the GC. If you do not do this, the memory stays occupied until the buffer is taken out of the pool again. After a large traffic peak, if no subsequent requests arrive (for example, because the service was degraded), the objects in the pool are not used for a long time, and the log content they hold stays in memory.
We can try to do that, but what I understand from the documentation in …
@dgsb This code shows that the memory was released by the GC.
Output:
This code is similar to the scenario we encountered: the memory was not released quickly, and our alert kept firing until the service was restarted. @dgsb
With your first example, I get the same memory usage with or without the …
Yes, this program is too simple; the compiler or GC may be smart enough to optimize it. I added a second program, which reflects the scenario we actually encountered.
Yes, I've tried this one too, adding a …
I didn't check the implementation of the bytes.Buffer type, but the documentation is quite explicit about keeping the underlying storage for future writes. We can add the Reset call in the defer function, but I guess that won't solve your problem. I think the buffer pool is too generic for everybody's usage, and we should provide an interface that allows plugging in a specific free-item list management system.
@edoger what do you think about providing a way for the library consumer to implement its own free-item list? Would that meet your need if you have strict memory constraints?
@dgsb This may require defining a resettable interface and a buffer pool that can bound the number of objects it holds.
I don't think we need to replace the bytes.Buffer object. Depending on your needs, you may even implement something that does not keep track of a free item pool/list at all. You may:
But having a …
The patch we are using in our production environment creates a new buffer object every time and discards it directly after use; compared with the GC overhead, the memory problem deserves more attention. I checked some references, and it is necessary to do the required cleanup before a temporary object is put into the pool (the buffer's Reset method re-slices the buffer, and the backing array it references is unchanged; this may need further confirmation). Since Go 1.12, the lifecycle of objects in the pool has been extended.
Can we provide a setting option to control whether to use the buffer pool? |
Indeed; the current buffer pool implementation, with the Reset moved to before the Put, would then be the default configuration for the package.
In our production environment, when a transient high-traffic spike occurred, we found that the memory occupied by the buffer pool inside Logrus could not be released quickly.
Objects in the buffer pool are not cleaned up when they are returned to it.