Support for nested build steps #8021
Conversation
This would need some more documentation in

I've added your man page excerpt at the top of your description.

```go
copy := *b
builder := &copy

_, err = builder.Run(archive)
```
Did you consider creating a new `Job("build")` instead?
Not sure it would be more appropriate, but it would make this tracked by the engine.
/cc @vieux
Yes I did.
The problem is that if we just remember the current Job and clone it, then we would break the usage of this package through its public API. It would have to be changed so that you can only build from a Job object.
If you mean making a new Job from the properties of the current Builder, then this would mean maintaining duplicate logic for how a Job is converted to a Builder and how a Builder is converted to a Job. I think this part changes quite often with new features, and if it's not perfectly in sync then bad stuff can easily happen.
Up for discussion of course. Maybe I'm missing some important value that having a separate Job adds. Shutdown and logging should work the same AFAIK.
Why not just change the current builder's context?
@plietar The current image ID is stored as a property of the Builder. If there were no clone, the inner build would overwrite this ID for the outer build.
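The clone-by-value approach under discussion can be sketched roughly like this (a minimal illustration; the type and field names here are simplified stand-ins, not the actual ones from the patch):

```go
package main

import "fmt"

// Builder is a hypothetical, stripped-down stand-in for the real Builder
// type; only the field relevant to this discussion (the current image ID)
// is shown.
type Builder struct {
	image string
}

// cloneForNestedBuild mirrors the `copy := *b; builder := &copy` pattern
// from the patch: a shallow value copy gives the nested build its own
// Builder, so the inner build committing images cannot clobber the outer
// build's image ID.
func cloneForNestedBuild(b *Builder) *Builder {
	clone := *b
	return &clone
}

func main() {
	outer := &Builder{image: "outer-image-id"}

	inner := cloneForNestedBuild(outer)
	inner.image = "inner-image-id" // the nested build records its own images

	fmt.Println(outer.image) // unchanged: outer-image-id
	fmt.Println(inner.image) // inner-image-id
}
```

Without the copy, both builds would share one `image` field, which is exactly the overwrite problem described above.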
Force-pushed from c4923d6 to ee54c51
I rebased and fixed 2 issues:
Force-pushed from ee54c51 to 871f145
Rebased. IRC meeting log: https://botbot.me/freenode/docker-dev/2014-10-02/?msg=22753567&page=5

I guess the first step would be to clear up the design issues. Mainly they seem to concern the parent image tagging. I'm not against

The current implementation adds the suffix to whatever tag-like string the user entered. I left it that way specifically so that one could use it to generate suffixes for either tags or versions, whichever the user liked more. This is probably even more controversial than hyphens. From IRC it seems that most people expect it to be added only to the image name part.

For me personally, I don't see any other value in the builder images than just keeping the cache around. I probably wouldn't ever want to publish a builder image. Also, this part was not in the original proposal. But when I run
I suspect in a 3-phase build I might want to use the builder images - but perhaps that's something that could be optional - something like
I assume, since your examples work, that I must be doing something wrong when trying to use your nested-builds-0 branch: https://github.com/colemickens/polykube/blob/ca13c034a2697ca2a3065e937219fdb2adc02271/Dockerfile-goapi
@colemickens That is a bug. I'll try to fix it later today. A quick workaround is to use a CMD before the BUILD keyword. The command itself doesn't matter as it's not really executed, something like

Also note that if you want to base your images from

edit: fixed
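For anyone hitting the same issue, the workaround could look something like the sketch below (the base images and paths are placeholders, and the dummy CMD is never actually executed):

```dockerfile
FROM golang:1.3
ADD . /src
RUN cd /src && go build -o /out/app .
CMD ["true"]   # dummy command before BUILD, working around the bug; never executed
BUILD /out
FROM busybox
ADD app /app
CMD ["/app"]
```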
Force-pushed from 871f145 to 5fa8c08
I can confirm it's fixed! Thank you for the links as well. I knew to disable CGO (though I'd forgotten), but I didn't know the extensiveness of the rest...
Is there any ETA for this feature or something similar to be integrated into Docker? Is it "production" usable (with the usual beta hiccups), and could it be used as long as it's going to be included later on anyway?
Not quite true: “No previously set instruction has any effect to the new build” Given:
Assume the above successfully builds an image. Now change the file named “Hello” to “HelloThere” in the NewContext:
Building this image will fail. There is no indirect binding element in this design that would improve on its encapsulation and prevent coding changes applied to a prior build step from affecting subsequent ones. Absent indirect binding, the Dockerfile build context references within the resultant image of this example couple directly to the build context “/NewContext”. Therefore, alterations to the build context introduced by “previous instructions” can certainly affect the implementation of the “new build” (subsequent build steps). The second example involving multiple build steps demonstrates some of the issues documented here such as:
Additionally, this solution diminishes the declarative nature of a nontrivial Dockerfile, as a developer:
This is a docs issue. I should find better wording to avoid confusion. The reason for having a builder image is to prepare a new context for the nested build, so of course everything in this directory by that time is available (as also stated by the docs). This sentence was meant to indicate that this is a completely new build and that any ENV, WORKDIR, USER, CMD etc. commands do not apply any more.

edit: To clarify more, your sample does not fail because you ran

Regarding everything else, I'd appreciate leaving this discussion to your own proposal thread or to the proposal thread for this feature, and leaving this PR for discussions about the current implementation.
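A hypothetical sketch of what that clarification means in practice: files written into the new context directory carry over, while builder state such as ENV does not (image names and file names here are illustrative):

```dockerfile
FROM debian:wheezy
ENV GREETING hello
RUN mkdir /ctx && echo hi > /ctx/Hello
BUILD /ctx
FROM debian:wheezy
ADD Hello /Hello     # works: /ctx/Hello is part of the new build context
RUN echo "$GREETING" # empty: ENV from the outer build step no longer applies
```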
Yes, I agree the build fails because the file “Hello” is missing, but the example also demonstrates the tight coupling between build steps that suggests the design is weakly encapsulated. It's this weak encapsulation that most concerns me.
I do not follow.
Absolutely. I'm not trying to silence your opinion in any way. Regarding my own examples, the first one is the use case for this PR. The second and third are just there to explain things that may not be so obvious. My third example is complete nonsense, without any practical value, just there to show what would happen in this case. You can make the same image today with a 2-line Dockerfile.
Any progress so far? I'm eagerly looking forward to this.
@stp-ip I wouldn't be too optimistic. While this is waiting for design decisions from @shykes, @erikh and @tiborvass are working on Dockerfile2, which has some overlap: https://gist.github.com/erikh/2447342612fbbd938b8c. It will probably be weeks until there are decisions on whether we're moving on with this PR or abandoning it in favor of something else.

edit: btw, if anyone needs a rebase, just let me know
@tonistiigi That's unfortunate, as this would affect decisions concerning secret distribution as discussed in kubernetes/kubernetes#2030, which could be interesting in Docker itself. Any ETA for the Dockerfile2 decisions themselves? Or which version it could hopefully land in?
@tonistiigi I noticed a "secret" badge when visiting that gist; is it something to be shared publicly, or did you "spill the beans"?
@thaJeztah No, it's OK. It was shared publicly in IRC.
@tonistiigi okay then, just confirming, thnx 😄
Hang on. :) Nobody has made any decisions yet.
+1 just wanted to say I like this proposal :) It seems it could be very composable :)
BUILD instruction starts a new empty builder process. Context for the new build is set to one of directories in current image. Signed-off-by: Tõnis Tiigi <tonistiigi@gmail.com> (github: tonistiigi)
Force-pushed from 5fa8c08 to aa74bbd
Not giving up on this yet. Here's another rebase. If anyone wants to play with this, you can get an x64 binary from https://www.dropbox.com/s/smj2b9tvl63f1qw/docker-1.4.0-nested-builds?dl=0 that has this patch on top of
Review session with @unclejack @tiborvass @crosbymichael: Thank you for your contribution; however, we decided to close this because no proposal has been decided on. We need an agreement on a design from @shykes (probably with @proppy's participation), and that debate can continue on the already opened proposals.
Have to say I'm a bit disappointed about this. The reason for this implementation was to force the maintainers to make a decision instead of letting the proposals die out. Although many people need something like this, there hasn't been any meaningful response to @proppy's proposal since July. Same goes for the @shykes original proposal and the proposal by @WhisperingChaos on the same topic. It's fine for me to close it because the proposal sucks or the implementation is crappy. But I can only read the current reasoning as "we don't have time to deal with this".
I agree. There are a few proposals, but none seem to be worked on. This one included.
I'm also quite disappointed in this. Outside of this patch, and cheating with tar and pipes, it is rather difficult to achieve minimal containers in some instances. Closing functional proposals with working code seems more than a tad bit rude. What "already opened proposals" are allowed to have discussion, and why isn't this one of them? Especially since @tonistiigi is maintaining this patch, it's easily testable...
@colemickens I would not call closing functional PRs rude, as there are only two possible outcomes for a PR: it's either merged or it's closed. This is part of open source. Decisions have to be made, and if we merged every single PR into Docker it would certainly become unusable for most of us.

@tonistiigi Sorry you feel this way, it was not our intention. The reason why this was closed is that it is based on a proposal that has not yet been finalized, nothing more.

As far as the responsiveness on the proposals for this, it is true that there have not been any meaningful comments for a while now. We have a huge volume of PRs, issues, and requests, and it is true that we don't have time to review and finalize everything. All of the maintainers review almost every single comment on the repository, and if you have ever watched this repo you know the volume that we are working with. Even if the maintainers don't immediately respond, we do read them, all of them. We just have to make some decisions on priorities and make sure that we are able to address more pressing issues over others. Bugs and security issues usually take priority over new feature requests.

Part of the work being done by the maintainers right now is to be more responsive. We are always looking for more people to help in the maintainer roles, as there are more ways to contribute to an open source project than writing code (even though this is the fun part).

Anyways, sorry again if you felt that we closed this for the wrong reasons. Feel free to ping me personally on issues, PRs, and proposals. I try really hard to respond quickly when pinged, and reaching out to the other maintainers on IRC or pinging on issues helps us see where people are having issues with the project and what we should focus on.
@tonistiigi I could update #7149 to match your implementation; would you be OK with this? @crosbymichael @icecrime do you see this as a possible path forward?
@proppy Yes, fine by me if you think that helps to move this process forward.
Sorry @tonistiigi for the excessive delay in my reply. However, I needed time to reconstruct and extend the comparison between the Nested/Chained Build and Function Idiom proposals that was originally presented in this thread, after you correctly requested that it be removed and associated with the Function Idiom one.
Please review the comparison between Nested Build and Function Idiom that now exists as part of the Function Idiom thread #8660. You'll find a table at the end of the fourth reply that should better present the 'tight coupling' argument.
Simple examples eliminate the implication that an approach's weakness is somehow related to the complexity of the example. Instead, encountering an issue in simple examples should cause reassessment, as the problem is likely innate to the approach. The 'nonsense' example presents the bare minimum operations of a given Nested Build: a single ADD, a single core statement (in this case RUN), and a potentially subsequent BUILD. It also includes a small number of adjacent build steps. If weak encapsulation/tight coupling can be demonstrated by this example, then it's probably innate to Nested Build.
This is a nested build implementation roughly based on #7149 by @proppy

```
BUILD /path/to/build/context
```

The `BUILD` instruction starts building a new image as if `docker build` had been run in the specified directory of the current image. The rest of the instructions in the Dockerfile are now considered part of this new build process. If there are no further instructions and the context directory contains a file called `Dockerfile`, then that file is used instead. Files located in the build context directory are available as source paths to the `ADD` and `COPY` commands. No previously set instruction has any effect on the new build.
Consider a Dockerfile:
Running `docker build -t tonistiigi/dnsdock .` will result in the creation of the following images:

The final image is minimal, containing only a static binary compiled by the previous image. Builder layers are kept so that the build cache keeps working.
The parent build step can also be used for dynamic Dockerfile creation. In this case the `BUILD` instruction needs to be the final instruction in the Dockerfile.

The number of build steps isn't limited. The last image gets the tag.
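Under the semantics described above, a two-step build might look roughly like the following sketch (not the actual dnsdock Dockerfile from the description; paths and base images are illustrative):

```dockerfile
# Step 1: compile a static binary using a full Go toolchain image.
FROM golang:1.3
RUN mkdir /out
ADD . /src
RUN cd /src && CGO_ENABLED=0 go build -o /out/app .

# Step 2: BUILD switches the context to /out inside the image built so far;
# everything below is a fresh build, and this final image receives the tag.
BUILD /out
FROM scratch
ADD app /app
ENTRYPOINT ["/app"]
```

The intermediate toolchain layers stay around only to keep the build cache warm; the tagged result contains just the binary.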
I realize that the proposals with different solutions for fixing this problem have not yet been agreed on by the maintainers. Just hoping that having one implementation ready for testing/review brings the final solution for minimal containers closer for everybody.