
panic: runtime error: index out of range [-1] with context logger #443

Closed
afarbos opened this issue Jun 7, 2022 · 4 comments
afarbos commented Jun 7, 2022

Hello, I'm encountering a similar issue to #422, #283, #249 and #239.
However, I always rely on the context logger, so I don't see how I would be reusing the same event or underlying resources, except for the logger itself.

Example usage:

for thereIsMore {
   // some code
   go func() {
      // more code
      log.Ctx(ctx).Info().Str("foo", "bar").Msg("baz")
   }()
}

I am not sure how this panic is possible today. Looking at the code, the buffer always seems to be set. I printed the zerolog.Context and it was never empty.

My stack trace looks like:

null: runtime error: index out of range [-1]
  File "/go/pkg/mod/github.repo/org/go-libs/logging@v1.0.0/errorreporter.go", line 74, in ErrorReporterStackMarshaler
  File "/go/pkg/mod/github.com/rs/zerolog@v1.26.1/event.go", line 374, in (*Event).Err
  File "/go/pkg/mod/github.repo/org/go-libs/logging@v1.0.0/utils.go", line 23, in panicFallthrough
  File "/go/pkg/mod/github.repo/org/go-libs/logging@v1.0.0/utils.go", line 46, in Recover
  File "/go/pkg/mod/github.repo/org/go-libs/logging@v1.0.0/http.go", line 138, in RecoverMiddleware.func1.1
  File "/go/pkg/mod/github.com/rs/zerolog@v1.26.1/internal/json/base.go", line 15, in Encoder.AppendKey
  File "/go/pkg/mod/github.com/rs/zerolog@v1.26.1/event.go", line 247, in (*Event).Str
  File "/go/pkg/mod/github.repo/org/go-libs/logging@v1.0.0/hook.go", line 21, in CallerHook.Run
  File "/go/pkg/mod/github.com/rs/zerolog@v1.26.1/event.go", line 134, in (*Event).msg
  File "/go/pkg/mod/github.com/rs/zerolog@v1.26.1/event.go", line 108, in (*Event).Msg
  File "/go/src/github.repo/org/myrepo/pkg/mypkg/mypkg.go", line 109, in (*MyObject).MyFunc
    Msg("baz")
@afarbos

afarbos commented Jun 21, 2022

I found that the issue here is not related to using the same event multiple times, but to using the same logger across multiple goroutines.
Could we update the documentation in general, and of the With method specifically, to explain this explicitly?

@denchenko

@afarbos did you find a solution? I see some similar panics in my logs, but very rarely.

@afarbos

afarbos commented Sep 28, 2022

Yes, it started similarly for me: rare at first, and then it became a bigger issue.

The short story is that zerolog is not thread/async-safe.
In my case, multiple goroutines were accessing the same underlying context containing the same logger, creating data races. To fix it, create a new logger each time you start a goroutine.

@ankitrgadiya

ankitrgadiya commented Sep 26, 2023

It may not be the case for everyone, but I found one conclusive scenario where this will always happen. Zerolog internally uses sync.Pool to maintain a pool of events. Calls like log.Info() retrieve an event from the pool and re-initialize it. The event is returned to the pool by the e.Msg() or e.Send() call and must not be used again afterwards. However, from what I could find, this behaviour is not documented. Interestingly, the event pool is global, so it is shared even if you create multiple top-level loggers.

Generally, the suggested method-chaining style of using the logger handles this automatically. The Func methods, however, make it possible to re-use an event. See this block for example.

l.Debug().Func(func(e *zerolog.Event) {
	e.Int("client-status-code", resp.StatusCode).Msg("Response Status")
	headers := zerolog.Dict()
	for k, v := range resp.Header {
		headers.Strs(k, v)
	}
	e.Dict("client-headers", headers).Msg("Response Headers")
})

In this case, the event is returned to the pool after the first e.Int(...).Msg() call. If multiple goroutines are using zerolog (sub-loggers or top-level loggers), another goroutine may get the same event from the pool and start using it, while the subsequent e.Dict(...).Msg() call also tries to use the underlying buffer of that same event. This usually causes a data race and can cause panics in certain cases.

The following test case, when run with the -race flag, reports a race every time and panics frequently.

package main

import (
	"context"
	"os"
	"sync"
	"testing"

	"github.com/rs/zerolog"
)

func TestZerologRace(t *testing.T) {
	ctx := zerolog.New(os.Stdout).WithContext(context.Background())

	var wg sync.WaitGroup
	wg.Add(2)

	go func(ctx context.Context) {
		defer wg.Done()
		for i := 0; i < 100000; i++ {
			logger := zerolog.Ctx(ctx).With().Logger()
			logger.Info().Func(func(e *zerolog.Event) {
				e.Msg("This is it!")
				e.Msg("This is it!")
			})
		}
	}(ctx)

	go func(ctx context.Context) {
		defer wg.Done()
		for i := 0; i < 100000; i++ {
			logger := zerolog.Ctx(ctx).With().Logger()
			logger.Info().Func(func(e *zerolog.Event) {
				e.Msg("This is it!")
				e.Msg("This is it!")
			})
		}
	}(ctx)

	wg.Wait()
}

The solution is simple: do not re-use events. In the given example, call Func twice instead.

l.Debug().Func(func(e *zerolog.Event) {
	e.Int("client-status-code", resp.StatusCode).Msg("Response Status")
})
l.Debug().Func(func(e *zerolog.Event) {
	headers := zerolog.Dict()
	for k, v := range resp.Header {
		headers.Strs(k, v)
	}
	e.Dict("client-headers", headers).Msg("Response Headers")
})

JAORMX added a commit to stacklok/minder that referenced this issue Apr 11, 2024
We were seeing a panic, and it's because we're reusing events in
zerolog [1]... this stops doing that for that function.

[1] rs/zerolog#443

Signed-off-by: Juan Antonio Osorio <ozz@stacklok.com>