
DNM concurrent NewFromEnv tests #1941

Closed · wants to merge 7 commits

Conversation

@vaikas (Contributor) commented Jun 1, 2022

Summary

Follow-up to @asraa's investigation in #1938.

Some comments in #1938 led me to believe that NewFromEnv is not thread safe, so I'm playing around with some tests for it.

Note: the test currently fails with the following panic because it modifies the expiredTimeStamp:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x0 pc=0x1031e2d58]

goroutine 423 [running]:
github.com/sigstore/cosign/pkg/cosign/tuf.TestUpdatedTargetNamesEmbedded.func3({0x14000636000, 0x2cd, 0x2cd})
	/Users/vaikas/projects/go/src/github.com/sigstore/cosign/pkg/cosign/tuf/client_test.go:382 +0x98
github.com/sigstore/cosign/pkg/cosign/tuf.initializeTUF({0x1034dd938, 0x140001a8008}, {0x14000dce0d8, 0x11}, {0x0, 0x0, 0x0}, {0x1034d8728?, 0x1034e2aa0}, 0x0)
	/Users/vaikas/projects/go/src/github.com/sigstore/cosign/pkg/cosign/tuf/client.go:283 +0x3b0
github.com/sigstore/cosign/pkg/cosign/tuf.NewFromEnv({0x1034dd938, 0x140001a8008})
	/Users/vaikas/projects/go/src/github.com/sigstore/cosign/pkg/cosign/tuf/client.go:309 +0x160
github.com/sigstore/cosign/pkg/cosign/tuf.TestConcurrentAccess.func1()
	/Users/vaikas/projects/go/src/github.com/sigstore/cosign/pkg/cosign/tuf/client_test.go:634 +0x6c
created by github.com/sigstore/cosign/pkg/cosign/tuf.TestConcurrentAccess
	/Users/vaikas/projects/go/src/github.com/sigstore/cosign/pkg/cosign/tuf/client_test.go:632 +0x3c

But if you run the tests with -v, so they don't run in parallel, it fails roughly the way I expected:

go test -v ./pkg/cosign/tuf/...
<snip>
Creating new TUF obj
    client_test.go:636: Failed to construct NewFromEnv: creating new local store: creating cached local store: resource temporarily unavailable
    client_test.go:639: Got back nil tufObj
Creating new TUF obj
    client_test.go:636: Failed to construct NewFromEnv: creating new local store: creating cached local store: resource temporarily unavailable
    client_test.go:639: Got back nil tufObj
Creating new TUF obj
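The concurrent test itself isn't shown above; a minimal sketch of this kind of test, assuming a hypothetical `newClient` constructor standing in for `tuf.NewFromEnv` (the real cosign API is not reproduced here), looks like:

```go
package main

import (
	"fmt"
	"sync"
)

// client is a stand-in for the TUF client type; newClient is a
// hypothetical constructor playing the role of tuf.NewFromEnv, which
// is where the crash in this PR was observed.
type client struct{ initialized bool }

func newClient() (*client, error) {
	return &client{initialized: true}, nil
}

func main() {
	var wg sync.WaitGroup
	// Hammer the constructor from many goroutines at once; with a
	// racy implementation this is where -race (or a SIGSEGV, as
	// above) would flag the problem.
	for i := 0; i < 20; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c, err := newClient()
			if err != nil {
				panic(fmt.Sprintf("failed to construct client: %v", err))
			}
			if c == nil {
				panic("got back nil client")
			}
		}()
	}
	wg.Wait()
	fmt.Println("all goroutines finished")
}
```

Running a test shaped like this under `go test -race` is usually the quickest way to surface this class of bug, since the race detector reports conflicting accesses even when they don't happen to crash.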


asraa and others added 4 commits May 31, 2022 17:12
Signed-off-by: Asra Ali <asraa@google.com>
Signed-off-by: Ville Aikas <vaikas@chainguard.dev>
@asraa mentioned this pull request Jun 1, 2022
Signed-off-by: Ville Aikas <vaikas@chainguard.dev>
@vaikas (Contributor, Author) commented Jun 2, 2022

@asraa I've obviously screwed something up since the unit tests are failing, and it's not very clean, but one thing to point out: I reverted the preference for fetching from Rekor before using the TUF roots that we added in #1921. So this may be one way to get to shared TUF usage, since the CIP tests now pass with mostly reverted tlog.go code. I'm still wrapping my head around the various caches and stores (and elephants, oh my), but I wanted to share what I was playing with.
This removes the need to call Close on TUF and possibly points toward something like a 'Refresh' method on TUF itself, similar to the Update method on the underlying TUF client.

@asraa (Contributor) left a comment


THANK YOU!

One comment about this design:
If there were out-of-date metadata and targets, we would not update the local cache.

I think using only in-memory storage during the lifetime of the program (outside of updates) is the correct thing to do. I'll give this pattern a try, syncing with the on-disk cache when an Update is issued.

@vaikas (Contributor, Author) commented Jun 3, 2022

I think all the bits from here have been hoisted into #1953, so once that lands we can close this; or, since the commits have already made it there, we might be able to close this already?

@vaikas (Contributor, Author) commented Jun 7, 2022

Everything worth bringing over from here has been brought into #1953.
