
Common test suite strategy #15

Open · znewman01 opened this issue Nov 11, 2022 · 1 comment

znewman01 commented Nov 11, 2022

sigstore/cosign#2434 proposes adding "conformance testing" to Cosign (previously implemented in sigstore-python).

I think this is a great idea, and we've been thinking about it in the abstract in this repository. I'd like to use this issue to discuss the strategy details. What kinds of tests should we be running across all Sigstore clients? How should we architect them?

CC @tetsuo-cpp @di @woodruffw

See also some context from the linked Cosign issue.


> Hey @znewman01, thanks for the ideas and feedback.

Appreciate the additional context; I understand much better what you're trying to achieve here. I think overall our goals are aligned, and I'm optimistic that we can come up with a plan we're both happy with. (And again, I'm still open to merging the CLI tests as-written, and even keeping them long-term; I just want to make sure that's part of a coherent conformance testing strategy.)

> Or were you thinking more that the conformance test suite should just provide a set of test inputs and expected outputs, leaving the actual test-runner logic to each client's unit tests?

Precisely. IMO it's way more work to add new shell-scripty test suites than to just have each client import standard test data. I also think the performance benefits of avoiding creating subprocesses are important.
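
(Purely to illustrate the "import standard test data" model, here's a minimal Go sketch of a client-side unit test consuming shared vectors. The testdata/vectors.json path, the JSON schema, and the Verify entry point are all invented for this example rather than taken from any existing Sigstore client.)

```go
package conformance

import (
	"encoding/json"
	"os"
	"testing"
)

// TestVector mirrors a hypothetical shared, language-agnostic test case:
// the inputs plus the expected verification outcome.
type TestVector struct {
	Name          string `json:"name"`
	ArtifactB64   string `json:"artifact_b64"`
	BundleJSON    string `json:"bundle"`
	ExpectSuccess bool   `json:"expect_success"`
}

// Verify stands in for the client library's real verification entry point.
func Verify(artifactB64, bundleJSON string) error {
	// ... call into the library under test ...
	return nil
}

// TestSharedVectors loads the shared vectors and runs each one as a subtest.
func TestSharedVectors(t *testing.T) {
	data, err := os.ReadFile("testdata/vectors.json") // same file in every client repo
	if err != nil {
		t.Fatal(err)
	}
	var vectors []TestVector
	if err := json.Unmarshal(data, &vectors); err != nil {
		t.Fatal(err)
	}
	for _, v := range vectors {
		t.Run(v.Name, func(t *testing.T) {
			err := Verify(v.ArtifactB64, v.BundleJSON)
			if v.ExpectSuccess && err != nil {
				t.Errorf("expected success, got: %v", err)
			}
			if !v.ExpectSuccess && err == nil {
				t.Error("expected verification failure, got success")
			}
		})
	}
}
```

The point is that the shared artifact is pure data; each client supplies its own short driver loop like this one inside its normal test suite.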

>> It's also not clear to me that every language client should have a CLI.
>
> I think that could be complicated because we'll want to test interactions with Fulcio and Rekor, and eventually mock out those services to test scenarios like getting a broken inclusion proof, SCT, etc.

Hm... my expectation is that every CLI should be a pretty thin wrapper around a library interface, so anything you could do in a shell script (spin up a fake Fulcio server and point your tool at it) could happen inside a language-specific test harness just as easily (and integrate better with the repo's existing tests).
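
(As a rough illustration of that point, here's what an in-process fake Fulcio might look like in a Go test using net/http/httptest. The canned response body and the NewSigner constructor are assumptions made up for this sketch, not Fulcio's real API or any real client interface.)

```go
package conformance

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

// Signer and NewSigner stand in for a hypothetical client library that
// accepts an alternate Fulcio base URL, just as a CLI flag would.
type Signer struct{ fulcioURL string }

func NewSigner(fulcioURL string) *Signer { return &Signer{fulcioURL: fulcioURL} }

// TestAgainstFakeFulcio spins up an in-process HTTP server standing in for
// Fulcio and points the library under test at it; no subprocess required.
func TestAgainstFakeFulcio(t *testing.T) {
	fulcio := httptest.NewServer(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			// Serve a canned (possibly deliberately broken) response to
			// exercise failure paths like a bad SCT or inclusion proof.
			w.WriteHeader(http.StatusOK)
			_, _ = w.Write([]byte(`{"certificate": "..."}`)) // placeholder body
		}))
	defer fulcio.Close()

	signer := NewSigner(fulcio.URL)
	_ = signer // ... sign something here and assert on the outcome ...
}
```

Because the fake lives in the same process as the test, assertions run against structured return values rather than parsed stdout.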

I guess this is mostly my personal bias—I worked full-time on a CLI for a while, and always found CLI-level testing to be a blunt instrument: brittle and slow. Further, architecting a CLI so that it can be tested without going through the CLI interface itself means your library is much more usable as a library, and your tests are clearer (assertions don't require regexes or parsing stdout). Not sure—am I at all convincing here that CLI tests are something to avoid when possible?

> I see value in using Protobuf messages, but the way I see it, they can complement the current approach. So at some point, when these Protobuf definitions become standardised, we could simplify the CLI protocol and instead ask that clients expose some kind of --protobuf flag that takes an Input rather than the flags that we're asking for now.

Certainly the protobufs work really well for testing verification. I think it's doable to start expressing full sign+verify flows declaratively too, with faked responses—and in fact, I think a proto-centric approach makes the most sense for faking.

Maybe we could split the difference:

  • Move as much configuration and as many test cases as possible into a standard (declarative) specification.
    • Verification-only cases are probably the easiest.
    • But you could imagine entire sign+verify flows with requests/responses.
    • This doesn't need to be in protobufs exclusively, but I'd strongly prefer a language-agnostic format to actual code (see the sketch after this list).
  • Have drivers/harnesses to run these shared test cases.
    • They'd be responsible for requests/responses, feeding test cases into the library, and checking that we get the right results.
    • One such harness (the first one) could be a CLI harness, which could spin up fake servers and execute commands.
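
(To sketch what that declarative layer could contain, here are some illustrative Go structs; per the above, the real schema would presumably be Protobuf messages in protobuf-specs, and every name below is invented.)

```go
package conformance

// All names below are invented for illustration; the real schema would
// presumably be Protobuf messages living in sigstore/protobuf-specs.

// CannedExchange pairs an expected client request with a faked service
// response, so a harness can stub out Fulcio/Rekor without live services.
type CannedExchange struct {
	Service      string // e.g. "fulcio" or "rekor"
	RequestPath  string // endpoint the client is expected to hit
	StatusCode   int
	ResponseBody []byte // faked response, possibly deliberately broken
}

// FlowTestCase describes an entire sign+verify flow declaratively:
// inputs, faked responses, and the expected outcome.
type FlowTestCase struct {
	Name      string
	Artifact  []byte
	Exchanges []CannedExchange
	// ExpectError names the failure class the client must report
	// (e.g. "invalid-inclusion-proof"); empty means the flow should succeed.
	ExpectError string
}
```

A CLI harness would realize each exchange as a fake HTTP server and drive the tool through its flags; a library harness could feed the same canned responses through an in-process transport, with neither needing bespoke test logic per case.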

This may be overengineering for now, and it seems reasonable to start with everything in the "CLI harness" but perhaps with a longer-term vision of moving first the verification cases, then everything else, into the "declarative test specification" layer.

Regardless, I think the Cosign repo isn't quite the right place for this issue—would you want to open one up in https://github.com/sigstore/protobuf-specs which seems to be the home of most cross-client concerns?

> This issue was specifically to add the existing GitHub Action to cosign's CI, which is why I've opened it here. At the moment, we want to pilot it with clients other than sigstore-python (the client I usually work on).

In the medium-term, I'd expect "common test suite" to be tracked mostly over in the protobuf-specs repo, but it makes sense to have per-client repo issues as well. Maybe we can keep this issue open to track the integration for Golang, and I'll file another issue for the strategy/philosophy discussion.

(Part of the reason I want to move it out of this repo too is that at some point these tests would move over to https://github.com/sigstore/sigstore-go because that's where the core Sigstore logic for Go will live, and Cosign will mostly focus on OCI/container signing.)

Originally posted by @znewman01 in sigstore/cosign#2434 (comment)

znewman01 commented:

The above is very verbose; the current plan is:
