
(refactor): externalize and improve the source package #630

Conversation

everettraven
Contributor

Description

  • Moves internal/source --> pkg/source so that the rukpak unpacking logic can be imported and used in other projects
  • Updates the implementations in the source package to be friendlier for library users

Motivation

@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jun 12, 2023
@everettraven
Contributor Author

I'm a bit confused by this lint error: https://github.com/operator-framework/rukpak/actions/runs/5259375486/jobs/9504861526?pr=630#step:4:32

I've run goimports and there were no changes to push, so I'm not sure how to fix this. Unfortunately I can't run the linter locally; every time I run make lint it ends up crashing my terminal, so 🤷

@everettraven
Contributor Author

A good follow-up to this PR would be to add tests for the individual unpacking implementations. I left that out of this PR, since it felt out of scope for externalizing this logic.

@everettraven everettraven marked this pull request as ready for review June 13, 2023 19:12
@everettraven everettraven requested a review from a team as a code owner June 13, 2023 19:12
@everettraven everettraven changed the title WIP: externalize and improve the source package (refactor): externalize and improve the source package Jun 13, 2023
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jun 13, 2023
@everettraven
Contributor Author

After working on operator-framework/catalogd#94, there are some more changes I want to make to this. Holding and turning it back to WIP.

/hold

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jun 14, 2023
@everettraven everettraven changed the title (refactor): externalize and improve the source package WIP: (refactor): externalize and improve the source package Jun 14, 2023
@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jun 14, 2023
SourceTypeHTTP SourceType = "http"
)

// TODO: Can we make source a bit more generic and composable/extensible?
Contributor Author

I was thinking something like:

type Source struct {
    Type SourceType `json:"type"`
    Config json.RawMessage `json:"config"`
}

This would allow different configurations to be parsed and validated by the individual unpacker types, similar to how this was done for the opm composite template in their respective builders.

However, this ends up being a runtime validation rather than one that could occur on the CRD, unless we created a configurable webhook implementation that other projects could reuse for validating CRs with this source type before admitting it.

I'm wondering if the extensibility/configurability feature is valuable enough to warrant requiring users of the source package to implement an admission webhook if they want validation before their CR is created.

If we do decide it is worth it, I think this should be done in a follow-up, as there would likely be changes across the repo to support this, and this PR is already relatively large.
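
To make the parse-and-validate idea above concrete, here is a minimal sketch of how an individual unpacker might handle its own config from the raw message. The ImageConfig type, its field, and the imageUnpacker name are hypothetical placeholders, not part of this PR (assumes the usual imports: context, encoding/json, errors, fmt, and controller-runtime's client package):

// Hypothetical per-source config; the type and its field are illustrative only.
type ImageConfig struct {
	Ref string `json:"ref"`
}

type imageUnpacker struct{ /* ... */ }

func (i *imageUnpacker) Unpack(ctx context.Context, src *Source, obj client.Object) (*Result, error) {
	var cfg ImageConfig
	if err := json.Unmarshal(src.Config, &cfg); err != nil {
		return nil, fmt.Errorf("parsing image source config: %w", err)
	}
	// Note: this is runtime validation rather than CRD schema validation.
	if cfg.Ref == "" {
		return nil, errors.New("image source config must set 'ref'")
	}
	// ... unpack the image referenced by cfg.Ref ...
	return &Result{}, nil
}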

Signed-off-by: Bryce Palmer <bpalmer@redhat.com>
@everettraven everettraven changed the title WIP: (refactor): externalize and improve the source package (refactor): externalize and improve the source package Jun 20, 2023
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jun 20, 2023
@everettraven
Contributor Author

/unhold

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jun 20, 2023
Comment on lines +69 to +71
type unpacker struct {
	sources map[SourceType]Unpacker
}
Member

WDYT about changing this to:

type unpacker struct {
	image *Image
	git *Git
	http *HTTP
	upload *Upload
	configMaps *ConfigMaps
}

And then implement a Builder pattern for the constructor, such that a caller would do:

u, err := source.UnpackerBuilder().
	WithImage(...).
	WithGit(...).
	WithHTTP(...).
	WithUpload(...).
	WithConfigMaps(...).
	Build()

And Build() could check that all of the underlying source implementations are non-nil (returning an error otherwise). I think this would make the unpacker setup a bit more discoverable, and it would help guarantee that all users of the composite unpacker had implementations for all of the known source types.
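
A minimal sketch of that Build() check, assuming the unpacker struct and builder methods suggested above (the unpackerBuilder type name and error text are placeholders, and it assumes *unpacker satisfies the Unpacker interface):

// Sketch only: Build() refuses to construct the composite unpacker unless every
// known source implementation has been provided.
type unpackerBuilder struct {
	u unpacker
}

func (b *unpackerBuilder) Build() (Unpacker, error) {
	if b.u.image == nil || b.u.git == nil || b.u.http == nil ||
		b.u.upload == nil || b.u.configMaps == nil {
		return nil, errors.New("all source implementations must be set before Build()")
	}
	return &b.u, nil
}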

Contributor Author

I'm not opposed to this, but I have a couple questions:

  • How would someone add a custom source to the unpacker?
  • How could someone elect to not implement one of the known source types? (This one in particular is important for catalogd: in my PoC we don't currently use the Upload source, since it depends on having the rukpak upload service running on-cluster. IMO this unpacker should be easily extensible rather than locked into specific source types.)

Member

Re: custom type: if/when we have that use case, we could extend the struct and builder to accept arbitrary type names and implementations.

Re: require implementations of all types. My thought here was mainly to make sure that when we add a new core type in the rukpak bundle api, all rukpak provisioners that bump to the new version of rukpak that adds the type are given a signal to pick it up. On main right now, we get a compiler error when we change the signature of NewDefaultUnpacker to accommodate a new type, which is really nice (other than the unruly growth of the signature parameters).

There are other ways to do this, though. For example, we could build a conformance test that includes expectations for each supported type. It's just that the feedback loop on that kind of thing is so much longer.

Member

Re: upload source, maybe we should position the upload service alongside the upload source implementation to make the upload source portable?

Contributor Author

Re: custom type: if/when we have that use case, we could extend the struct and builder to accept arbitrary type names and implementations.

Fair enough. In my head I had it that there was already a use case for it, but now that you pointed it out I can't think of one...

Re: require implementations of all types. My thought here was mainly to make sure that when we add a new core type in the rukpak bundle api, all rukpak provisioners that bump to the new version of rukpak that adds the type are given a signal to pick it up. On main right now, we get a compiler error when we change the signature of NewDefaultUnpacker to accommodate a new type, which is really nice (other than the unruly growth of the signature parameters).

This is definitely a nice-to-have. I'll have to circle back around and think about how we can do this. I'm still hesitant to require that all usages implement every source type, but maybe instead of erroring in Build we could return an error when the unsupported source type is actually used, stating something like "ABC doesn't support the XYZ source"?

I think in this scenario you would still get the ideal behavior of requiring an instantiation for each source type when creating an Unpacker, but you could explicitly set one to nil if you didn't want to support it.
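
For illustration, something like the following (assuming the map-based unpacker struct from this PR; names and error text are just placeholders):

// Sketch only: dispatch on source type and fail at use time for anything the
// caller chose not to configure, instead of failing in Build().
func (u *unpacker) Unpack(ctx context.Context, src *Source, obj client.Object) (*Result, error) {
	impl, ok := u.sources[src.Type]
	if !ok || impl == nil {
		return nil, fmt.Errorf("this unpacker does not support the %q source type", src.Type)
	}
	return impl.Unpack(ctx, src, obj)
}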

Re: upload source, maybe we should position the upload service alongside the upload source implementation to make the upload source portable?

I think that is reasonable, but I still think there are use cases where someone doesn't want to implement all the source types due to the potential overhead. That's probably a conversation to be had for catalogd: is the additional service overhead worth the value provided to users? On top of that, how does a user consume it? Would rukpakctl work for uploading something other than a bundle, or would catalogd have to implement its own uploader CLI?

// specifications. A source should treat a source root directory as an opaque
// file tree and delegate source format concerns to source parsers.
type Unpacker interface {
	Unpack(context.Context, *Source, client.Object) (*Result, error)
Member

+1 to moving this from Bundle to client.Object, but it'd be even better if this interface didn't depend on having a Kubernetes object at all. It seems like the primary reason for this parameter to exist is to make sure the image unpacker is able to add an owner reference to the pod that it creates. But it looks like the other source type implementations either don't use it, or use it for what amounts to a unique identifier.

I wonder if there's something we could do to remove this from the interface while also still giving the image unpacker enough information to create a valid owner reference?

Contributor Author

Maybe the solution is adding an "optional" field to the Source type that is of type client.Object (or maybe, even better, metav1.Object)? I say "optional" because some sourcing implementations may require it while others may not.
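
A rough sketch of that shape, combining the json.RawMessage Config idea from the earlier comment (the Owner field name and the use of metav1.Object are assumptions, not settled API):

// Sketch only: carry an optional owner on the Source so Unpack no longer needs
// a separate client.Object parameter.
type Source struct {
	Type   SourceType      `json:"type"`
	Config json.RawMessage `json:"config"`

	// Owner is optional: implementations that create on-cluster resources
	// (e.g. the image unpacker's pod) can use it for owner references;
	// others can ignore it.
	Owner metav1.Object `json:"-"`
}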

@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 28, 2023
@openshift-merge-robot

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@github-actions

This PR has become stale because it has been open for 30 days with no activity. Please update this PR or remove the lifecycle/stale label before it is automatically closed in 30 days. Adding the lifecycle/frozen label will cause this PR to ignore lifecycle events.

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 29, 2023
@everettraven
Contributor Author

/lifecycle frozen

@openshift-ci

openshift-ci bot commented Jul 31, 2023

@everettraven: The lifecycle/frozen label cannot be applied to Pull Requests.

In response to this:

/lifecycle frozen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@everettraven everettraven added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 31, 2023
Labels
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD.
Development

Successfully merging this pull request may close these issues.

externalize the source/unpacker implementation
3 participants