
High and Unexpected Memory surge #886

Closed
azamshaikh-ai opened this issue Dec 3, 2020 · 8 comments

@azamshaikh-ai

I have implemented logging through a wrapper for both zap (sugared) and logrus.

The main log file has a struct with a field of the Logger interface type, which declares all the logging functions, and another field holding the config used to initialize the respective logging library.

// mainLogger holds a reference to the concrete logger instance and its configuration.
type mainLogger struct {
	log    Logger
	config Configuration
}

type Logger interface {
	Trace(format string, args ...interface{})

	Debug(format string, args ...interface{})

	// All other logging functions follow here
}

The zap logger is initialized like this:

func NewLogger(config Configuration, loggerInstance int) (Logger, error) {
	// switch on loggerInstance;
	// the zap sugared logger case looks like this:
	l, err := newZapLogger(config)
	if err != nil {
		return nil, err
	}
	return l, nil
}
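
newZapLogger itself isn't shown above; for context, here is a minimal sketch of what such a constructor could look like, assuming Configuration carries a Level string field (that field name is my assumption, not the actual config):

import "go.uber.org/zap"

// Sketch only: assumes Configuration has a Level field such as "debug" or "info".
func newZapLogger(config Configuration) (Logger, error) {
	cfg := zap.NewProductionConfig()
	if err := cfg.Level.UnmarshalText([]byte(config.Level)); err != nil {
		return nil, err
	}
	z, err := cfg.Build()
	if err != nil {
		return nil, err
	}
	return &zapLogger{sugared: z.Sugar()}, nil
}

// zapLogger adapts *zap.SugaredLogger to the Logger interface above.
type zapLogger struct {
	sugared *zap.SugaredLogger
}

func (z *zapLogger) Trace(format string, args ...interface{}) { z.sugared.Debugf(format, args...) }
func (z *zapLogger) Debug(format string, args ...interface{}) { z.sugared.Debugf(format, args...) }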

The interface implementation is as follows:

func (l *mainLogger) Trace(format string, args ...interface{}) {
	// to think
	l.log.Debug(format, args...)
}

func (l *mainLogger) Debug(format string, args ...interface{}) {
	l.log.Debug(format, args...)
}
// Other functions

// Inside main:
Log, _ := logger.NewLogger(loggerConfig, 1) // 1 for zap sugared

This works fine when sending a single request. In production, however, with the zap production sugared logger in trace/debug mode (writing to both console and file), memory usage climbs heavily from 0 to 10%, sometimes crashing the app, and latency goes over 2-3 minutes. Some functions containing only a few variable initializations take 16000 seconds to process, compared to 1-2 ms without logging or with a single manual request. Logging 1-2 lines per request works great, but logging in trace/debug mode is painful for memory.

What could be wrong in my approach? This has to be a low-latency app with proper logging at various stages.
I have checked #851, but I'm not sure it helps, as I am still having issues with memory usage.

@phuslu

phuslu commented Dec 4, 2020

For a low-latency app, I recommend my highly optimized logging library, https://github.com/phuslu/log
(it handles nearly 1M QPS in production as the logging component).

@prashantv
Collaborator

Unfortunately we cannot investigate this issue without more details. A minimal, reproducible case would be ideal. Alternatively, pprof memory/CPU profiles would be helpful.
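
For reference, one common way to capture those profiles from a running service is the net/http/pprof endpoint; a minimal sketch, with an arbitrary port:

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof handlers on the default mux
)

func main() {
	// Expose profiling endpoints alongside the application.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... rest of the application ...
	select {}
}

Heap and CPU profiles can then be collected with go tool pprof http://localhost:6060/debug/pprof/heap and go tool pprof http://localhost:6060/debug/pprof/profile.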

@phuslu

phuslu commented Dec 4, 2020

For the OP's case, I suppose the root cause is that the printf-style APIs in zap/zerolog are backed by fmt.Sprintf (I have encountered this issue in the real world before); they should use fmt.Fprintf with a buffer pool instead.

Here is my implementation, https://github.com/phuslu/log/blob/9f855b33c3ba967ea71dd71e67f709eacd0f06f8/logger.go#L1171-L1186

And a playground example: https://play.golang.org/p/B86nSTxiiYy
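
To illustrate the difference in spirit (a generic sketch, not phuslog's actual code): fmt.Sprintf builds a new string on every call, while formatting into a pooled buffer with fmt.Fprintf reuses memory across calls:

package main

import (
	"bytes"
	"fmt"
	"os"
	"sync"
)

var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// logfSprintf builds the message with fmt.Sprintf, allocating a fresh string each call.
func logfSprintf(format string, args ...interface{}) {
	msg := fmt.Sprintf(format, args...)
	os.Stderr.WriteString(msg + "\n")
}

// logfPooled formats directly into a reused buffer, avoiding the intermediate string.
func logfPooled(format string, args ...interface{}) {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()
	fmt.Fprintf(buf, format, args...)
	buf.WriteByte('\n')
	os.Stderr.Write(buf.Bytes())
	bufPool.Put(buf)
}

func main() {
	logfSprintf("request %d took %v ms", 1, 2)
	logfPooled("request %d took %v ms", 2, 3)
}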

@azamshaikh-ai
Author

azamshaikh-ai commented Dec 4, 2020

@phuslu Thanks for the recommendation, I will test and try to implement the logger you shared. Does your logger support printf-style logs? I have tried the zap structured logger with no success; it also has the heavy memory issue.
If your logger can be used with printf-style logging, there will be minimal changes to my implementation.
Also, in the meantime, are there any changes I can make to fix the issue in zap?

@azamshaikh-ai
Author

azamshaikh-ai commented Dec 4, 2020

@phuslu thanks for the example. Hoping the 20% performance difference of your sugared logger, as mentioned in the repo, doesn't affect the latency of the app. Checking out the other examples in the repo.

@phuslu

phuslu commented Dec 6, 2020

Hi @azamshaikh-ai, have you solved your memory usage issue yet? I'd like to confirm whether my assumption about fmt.Sprintf versus fmt.Fprintf with a buffer pool is right or not (whether you use phuslog or not).

@azamshaikh-ai
Author

Not yet; the plan is to implement phuslog using a wrapper this week. The issue only occurs when logging in trace/debug mode. Worst case, I will use a flag (a config-driven parameter) to log only a few lines per request.
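
For what it's worth, if the app stays on zap, that kind of config-driven flag maps naturally onto zap's AtomicLevel, which can be raised or lowered at runtime; a minimal sketch (the level values here are illustrative):

package main

import (
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

func main() {
	// Start at Info so Debug-level calls are cheap no-ops in production.
	level := zap.NewAtomicLevelAt(zapcore.InfoLevel)

	cfg := zap.NewProductionConfig()
	cfg.Level = level
	logger, err := cfg.Build()
	if err != nil {
		panic(err)
	}
	defer logger.Sync()
	sugar := logger.Sugar()

	sugar.Debugf("dropped: level is %s", level.Level()) // not emitted at Info

	// Flip the flag (e.g. from a config reload) to enable verbose logging.
	level.SetLevel(zapcore.DebugLevel)
	sugar.Debugf("emitted: level is %s", level.Level())
}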

@prashantv
Collaborator

We avoid unsafe in zap, and the Core API uses strings, so we cannot build a zero-allocation sugared logger.

For performance-sensitive cases, we recommend the non-sugared logger.
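
For reference, the non-sugared API takes strongly typed fields instead of a printf format, which avoids the message-formatting allocations; a minimal sketch of what the hot-path calls could look like:

package main

import "go.uber.org/zap"

func main() {
	// NewDevelopment logs at Debug level, so the calls below are actually emitted.
	logger, err := zap.NewDevelopment()
	if err != nil {
		panic(err)
	}
	defer logger.Sync()

	// Sugared, printf-style: allocates to build the message string.
	logger.Sugar().Debugf("processed request %s in %d ms", "abc123", 42)

	// Non-sugared, typed fields: avoids the formatting allocations.
	logger.Debug("processed request",
		zap.String("id", "abc123"),
		zap.Int("duration_ms", 42),
	)
}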
