Updating the deployment example to use volumes, not configure #10
Running jet steps in the 8.deployment-container folder fails on my local machine. A general question: I'm not sure how this specific example demonstrates creating a deployable container, since the container doesn't actually contain the build artifact. It's stored in a volume, and when I push the container to a registry, would the volume be pushed with it? From my understanding the volume is only available on the build machine, so once we push the container somewhere else it's basically empty (and creates a new volume there). To make sure we have some kind of general test infrastructure for this repo I set up services and steps files and added the project to our org on Codeship. Build running here: https://codeship.com/projects/103513/builds/f090ac3c-3f2f-498c-9b61-0c46b56715d6 @bfosberry I can send you the log output as well.
Ran with the latest version again and now it works fine on my machine. But the question remains whether this actually creates a container with the artifact.
The volume won't be stored with the container. The idea is you:

1. Build the artifact in a container with a volume mounted as /artifacts.
2. Build the production container with the volume mounted as /artifacts; during the build, RUN cp /artifacts/app /app.

So /app is not a volume, and thus it is part of the container. It's like using a USB key between computers.
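One way to sketch the effect described above (this version uses COPY from the build context rather than a mounted volume, since the result is the same: the file becomes an image layer; the image name and artifact path are hypothetical, not taken from this repo):

```dockerfile
# Production Dockerfile (sketch). Assumes a previous build step left
# the compiled binary at ./artifacts/app in the build context.
FROM alpine

# COPY bakes the file into an image layer, so unlike a volume it
# travels with the image when pushed to a registry.
COPY artifacts/app /app

CMD ["/app"]
```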
@ngauthier yup, that's what I thought too, but the Dockerfile is empty so the
Cool. We should probably add it. Maybe in the example also run something
OK, so to reiterate what should be happening here according to my understanding (but isn't):

- We start an instance of compiledemo, which has the tmp folder of the repository mounted as a volume.
- When we write to that tmp folder, it's written into the source repository folder on the host, so it's available for the next Docker deploy build (not as a volume, but simply because it's in the folder).
- Then in the deploy build we copy the date file out of tmp/date into the container.
- The deploy container then has the artifact and is ready to be pushed to a registry with all the files it needs to run in production (which here is only the date file).

Therefore the deploy container doesn't need to share the same data volume or use volumes_from, since it doesn't read anything from a volume; it already copied the date file as part of the Docker build. So volumes_from should be removed in codeship-services.yml, and the cat call in the steps.yml shouldn't reference the file in the volume, but the path we copied the file to.

According to moby/moby#14080 it's not possible to use volumes during a Docker build, and from my local trial the volume is also not, as described above, linked to the host source repository, but a named volume on the host. Thus the file is simply not there during the Docker build, and the Docker build fails.

@bfosberry could you describe the workflow you have in mind for this example without configure, so we can make sure we're thinking about the same thing? It seems to me that at the moment this is not equivalent to what configure does.
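The change proposed above might look roughly like this (service names, image names, and paths are hypothetical sketches, not the actual contents of the 8.deployment-container folder):

```yaml
# codeship-services.yml (sketch): the deploy service builds its own
# image and no longer uses volumes_from, since the artifact was copied
# in during the Docker build.
deploy:
  build:
    image: deploy-example
    dockerfile: Dockerfile.deploy

# codeship-steps.yml (sketch): read the file from the path it was
# copied to inside the image, not from a volume path.
# - service: deploy
#   command: cat /app/date
```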
So this was intended to be an example of providing an artifact to a running container. I'll extend it to add an example of building said artifact into a container, to allow containers to be used as build artifacts.

There is an open question around how this should be handled. If we allow users to mount directories on the host, we risk subsequent builds affecting each other locally (and maybe on the hosted platform eventually). One solution would be a capistrano-style subdirectory layout (tmp/builds/BUILD_ID, tmp/builds/current), which would mean we would have to enforce that mounted volumes are under a certain dir, and Docker builds would be able to consistently pull from tmp/builds/current. The problem with this approach is that it becomes difficult to maintain consistent artifact interaction for the user between Docker builds and volume mounting: saving a file to a volume may involve just writing A.txt, while adding it to a Docker build may mean COPY tmp/builds/current/A.txt.

Rather than that approach, I think it's reasonable to expect the user to manage their artifacts. All we should do is ensure that the mounted host volume is under the checkout folder, and document that the user should be aware of possible old build artifacts and clean the folder before use as needed.
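The capistrano-style layout mentioned above could be sketched like this (the directory names come from the comment; the Dockerfile line and file name are illustrations only):

```dockerfile
# Host layout (sketch):
#   tmp/builds/<BUILD_ID>/A.txt   -- per-build artifacts
#   tmp/builds/current            -- points at the active build
#
# A deploy Dockerfile could then always pull from the stable path:
FROM alpine
COPY tmp/builds/current/A.txt /A.txt
```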
Keep in mind that in the future we plan to run builds on a swarm, and there may
With that in mind, I'm leaning towards a flocker-powered static build artifact volume linked to jetter, which we can attach to build containers as needed, and which is available during the Docker build since it is attached to jetter. This would look something like this
Cool. Now for local execution, would we just mount straight to the host?
So for now we can simply mount to the folder within jetter; in the future we'll want to split it out to support flocker and swarm, but that depends on the implementation of flocker.
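Mounting a host folder for now might look like this (a minimal sketch; the service name, image, and paths are hypothetical, and this assumes the host path resolves relative to the source checkout):

```yaml
# codeship-services.yml (sketch): share an artifact folder between the
# build service and the host, keeping it under the checkout directory
# so builds cannot write outside the repo.
compiledemo:
  build:
    image: compile-example
  volumes:
    - ./tmp:/artifacts   # host path relative to the source checkout
```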
@bfosberry what are the next steps that need to happen to get this set up? What needs to be changed in jet to support this directly? Running with the host source repository connected, and flocker in the future (if that can be properly connected into a build container), sounds good to me.
Is that possible at the moment without any changes to jet? Can we use the source repository on the host as a volume right now?
Currently on jet you can mount host volumes, but that gives the user access to the entire host, so it's a security risk we'll want to patch in the future. The problem is the user doesn't know what dir we'll check out into, so it's hard for them to specify the safe directory. For now we can let them use whatever; in the future we'll want to prepend the project dir by default. I also had a great conversation with Nick about possibly allowing binary injection through commit, rather than the Dockerfile. We'll probably explore this in the future as an alternative to host volumes.
Also my other idea, which I like best right now, is to use add_docker
Agreed, prepending the source repository definitely makes sense, so people can't break out of that specific repo. Would this be a massive change to the current implementation?
Imho it's not really an advanced feature, as this is how we want people to build clean Docker containers. It's definitely a best practice to separate building the artifacts from the actual deployable container (and this has come up numerous times during demos), so it should be built very much into the platform and be very easy. Having to do add_docker, committing, and then pushing through Docker is too complex for a best practice that we want people to follow, in my opinion. @AlexTI thoughts?
Build fails because there is no global steps file :P |
Looks good. I like that volumes must be local. Seems safe. |
@bfosberry is that working on your local system? It's not working on mine (which might be due to docker-machine not being able to create locally shared folders, even with VirtualBox machines).