Docker add TEST command #16993

Closed
jakirkham opened this issue Oct 13, 2015 · 17 comments
@jakirkham

Frequently, I want to test a container I built before I push it; or, if it is being built and pushed automatically, I want to be able to fail the build if it doesn't hold up under testing.

Normally, this can be done with RUN, which works OK except for the following issue: it may commit test dependencies or other irrelevant products of testing. I can try to eliminate these before the end of the RUN command, which I do. However, inevitably some are missed.

In the best case, these test artifacts are small and have no effect (e.g. logs). In the worst case, they are some residual part of the testing apparatus that is left in some sort of broken state, only to be discovered when someone uses the container.

However, if the test layer is merely run and never committed at all, one gets the best of both worlds. I have verified that the previous layer works as intended and was not tampered with in any way since it was verified. Further, I can verify during the build process that everything works as intended without accidentally distributing an untested image.
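The test-inside-one-RUN-layer pattern described above might look like the following sketch (the base image, package names, and `run-tests.sh` script are hypothetical, for illustration only). Anything the cleanup step misses is committed into the layer, which is exactly the problem being raised:

```dockerfile
FROM debian:stable-slim

# Hypothetical application payload.
COPY app /opt/app

# Install test dependencies, test, and clean up, all in a single RUN
# so that (ideally) no test tooling is committed into the layer.
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && /opt/app/run-tests.sh \
 && apt-get purge -y curl \
 && rm -rf /var/lib/apt/lists/* /tmp/*
```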

@GordonTheTurtle

Hi!

Please read this important information about creating issues.

If you are reporting a new issue, make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead.

If you suspect your issue is a bug, please edit your issue description to include the BUG REPORT INFORMATION shown below. If you fail to provide this information within 7 days, we cannot debug your issue and will close it. We will, however, reopen it if you later provide the information.

This is an automated, informational response.

Thank you.

For more information about reporting issues, see https://github.com/docker/docker/blob/master/CONTRIBUTING.md#reporting-other-issues


BUG REPORT INFORMATION

Use the commands below to provide key information from your environment:

docker version:
docker info:
uname -a:

Provide additional environment details (AWS, VirtualBox, physical, etc.):

List the steps to reproduce the issue:
1.
2.
3.

Describe the results you received:

Describe the results you expected:

Provide additional info you think is important:

----------END REPORT ---------

#ENEEDMOREINFO

@duglin
Contributor

duglin commented Oct 13, 2015

I think the best way to solve this would be to do a docker build on a Dockerfile that uses the output of your build as the FROM image. Then you can do whatever testing you want on this 'throw-away' image w/o fear of it corrupting your true build image. It also provides a much nicer separation of concerns.
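A minimal sketch of this throw-away test image, assuming the real image has already been built and tagged (all image names, paths, and tooling here are hypothetical):

```dockerfile
# Dockerfile.test -- built only after the real image exists:
#   docker build -t myapp .
#   docker build -f Dockerfile.test .
FROM myapp

# Install test-only dependencies and run the suite; this image is
# never tagged or pushed, so nothing leaks into myapp itself.
RUN pip install pytest \
 && pytest /opt/app/tests
```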

@thaJeztah
Member

I agree with @duglin here, sounds like a cleaner approach for this.

@jakirkham
Author

Clean or not, unfortunately, that does not resolve the problem I or others are having.

Namely, building the image without tests means the image can end up being deployed without being properly tested. In particular, consider the case of automated builds on Docker Hub or any other strategy for automatic builds.

Here is an article that cites this exact problem ( http://blog.wercker.com/2015/07/28/Dockerfiles-considered-harmful.html ), though I don't agree with all of its assertions. Insufficient testing is a serious issue IMHO. I am quoting the relevant section of the article below, as I largely disagree with (or don't care about) the rest.

Do you really want your test dependencies in your container? If you don’t want those layers floating around, you better work on your bash-fu, and make sure you can install, run, and delete your tests in one terrible-to-read line of shell script. Same goes for your build dependencies.

Perhaps what you want to do is have a separate Dockerfile for testing that imports the container you made with the first Dockerfile (which probably has your build dependencies in it), and then run the tests there and… what? Mark the first container you built as having failed tests? Delete it?

A significant number of people push Dockerfile changes, let some poor registry build it, and then import that result to run their secondary testing Dockerfile. Now they’ve just shipped before they’ve tested.

@cpuguy83
Member

But having a TEST command does not solve this problem.
Not testing your images is a process problem.
Images can be tested by building a 2nd image with whatever testing tools you need.

You can even make use of the feature coming in 1.9.0 which lets you inject arguments into the build to specify "this is my test run" and have it install different stuff based on that... something like `docker build --build-arg BUILD_ENV=test`, which shows up as an env var in `RUN` commands (and can be interpolated into other commands as well).
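The build-arg approach might look like the following sketch (`ARG` landed in Docker 1.9 alongside `--build-arg`; the base image, paths, and test command here are hypothetical):

```dockerfile
FROM python:3-slim

# BUILD_ENV defaults to "production"; override with:
#   docker build --build-arg BUILD_ENV=test .
ARG BUILD_ENV=production

COPY app /opt/app

# Install test tooling and run the suite only on test builds.
RUN if [ "$BUILD_ENV" = "test" ]; then \
      pip install pytest && pytest /opt/app/tests; \
    fi
```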

@duglin
Contributor

duglin commented Oct 14, 2015

@jakirkham perhaps there's a misunderstanding. I wasn't suggesting that Dockerfiles are the best way to test your docker builds. Rather, it sounded like you wanted to use Dockerfiles for that purpose and I was suggesting a possible solution. If, as you and that article suggest, Dockerfiles aren't good for testing images, then it's probably because they weren't designed for that. They were designed for building containers, not testing them. If someone can do some testing with it, more power to them, but I don't think it's fair to claim they're harmful.

To me this is similar to having a Makefile like:

exe:
    cc -o exe a.c

test:
    ...do some testing of exe...

and then claiming that make is broken because you can build the exe w/o testing it. Well, as @cpuguy83 said, this isn't an issue with make or your Makefile, it's an issue with your process. You shouldn't deploy anything w/o testing it - how you test it is up to you. Docker is just providing some building blocks - it's up to you to stack them in the desired order.

@jakirkham
Author

But having a TEST command does not solve this problem....

Unfortunately, I have to disagree with you, @cpuguy83. If you can think of a better way, then I'm all ears. My interest here is stopping the image from being tagged and released, in some standard and uniform way that works everywhere from a local docker build to automated builds on Docker Hub.

Images can be tested by building a 2nd image with whatever testing tools you need.

I believe this is what @duglin was suggesting. As nice as this idea is, it does not address the problem here: the image has already been tagged, and in the case of an automated build system like Docker Hub, it has already been released to the wild untested. Resolving the problem before this tagging occurs is desirable.

You can even make use of the feature coming in 1.9.0 which lets you inject arguments into the build

This is a very useful feature and I am glad to see it. In fact, I will probably use it for a few things. However, I think it is orthogonal to this particular issue.

I don't think it's fair to claim they're harmful.

This is what the authors of that article state. I did not write the article. I agree with you and am generally quite happy with Docker and Dockerfiles. I merely think testing is insufficiently addressed in the current framework, and on that point I think they are absolutely correct. I have quoted the relevant section of the article I do agree with in my earlier comment ( #16993 (comment) ).

I wasn't suggesting that Dockerfiles are the best way to test your docker builds.

Nor was I trying to imply you did.

They were designed for building containers, not testing them.

That's the problem in my mind and why the issue was opened.

You shouldn't deploy anything w/o testing it - how you test it is up to you.

Unfortunately, it is not. Automated builds on Docker Hub, or local ones with docker build, will tag the image if the build doesn't fail. With Docker Hub, this means it is already deployed. Testing afterwards is a problem no matter how it is done.

To me this is similar to having a Makefile...

Sorry, I think this example is not accurate for the problem being discussed here. If you really want to know why I think this, I am happy to discuss it, but will defer as this has already gotten too long.

@duglin
Contributor

duglin commented Oct 14, 2015

Have you considered changing the process you use with DockerHub? I personally don't use it so I can't say whether this even makes sense, but I wonder if you could do this:
1 - use DockerHub as you do today to build an image, but don't tag it or deploy it - just build it.
2 - use DockerHub's linking mechanism to kick off a 2nd build after the image has been built
3 - in the 2nd build, test the image and if successful issue the appropriate Docker cmd to tag, deploy, or even upload under a new name, the first build's image.

@cpuguy83
Member

@jakirkham If images aren't being tested, that is a process problem; adding new functionality doesn't really solve it, as it still requires people to take an action.

That said, I do think it's a cool idea.

@jakirkham
Author

Have you considered changing the process you use with DockerHub? I personally don't use it so I can't say whether this even makes sense, but I wonder if you could do this...

You are clearly a very clever dev, @duglin. Though you must admit this does sound a bit complex, no? I'd be a bit concerned about this breaking.

That being said, Docker Hub does have webhooks, but I don't believe it has a way of issuing other commands. If I am wrong about this, feel free to point it out. I'm always interested in learning something new.

If images aren't being tested, this is a process problem, adding new functionality doesn't really solve this as it still requires people to take an action.

Ah, sorry, I think I missed that this was your, @cpuguy83, point. Currently, I do test by using the RUN command. It works ok, but it is a little worrisome and I think this sort of change could help.

That said, I do think it's a cool idea.

Cool, I think it may open some options to test in-between layers, which could be nice. This would shorten the build time when the build was already problematic at an earlier stage, and allow one to inspect closer to the problem.

@duglin
Contributor

duglin commented Oct 14, 2015

At https://docs.docker.com/docker-hub/builds/ under the "Webhook chains" section it talks about having one build trigger other builds by sending POSTs. Then under "Remote Build triggers" it talks about sending a POST to a URL to force a build. I'm guessing you can send the webhooks POST to the remote build trigger URL. So, hopefully, you can force the build of your original image to kick off a "test" build that then tags/deploys/etc... if successful.
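The remote trigger described above is just an authenticated POST. A sketch, using the historical Docker Hub trigger URL shape (the user, repo, payload, and `<TOKEN>` placeholder are all illustrative; the real URL comes from the repository's Build Triggers settings page):

```
# Fire the "test" repository's build trigger from a webhook chain:
curl -X POST \
  -H "Content-Type: application/json" \
  --data '{"docker_tag": "latest"}' \
  https://registry.hub.docker.com/u/myuser/myrepo-test/trigger/<TOKEN>/
```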

A bit complex? Yup :-) But it might work, and once set up it might not be that bad.

Adding a TEST command is, as @cpuguy83 said, a cool idea, but I would prefer if we looked at it differently. Lots of people have asked for "nested builds" and I think your use case would fit nicely under that, because they, I believe, want to have a Dockerfile build one image and then use that image in a secondary build.

Something like:

FROM scratch
...build image...
TAG myProduct-test

BUILD
FROM myProduct-test
... test...
ENDBUILD

TAG myProduct
UNTAG myProduct-test

where if anything under BUILD fails then the outer build stops/fails and never gets to the final TAG myProduct. All speculation since we don't have lots of these features yet. See: https://github.com/docker/docker/blob/master/ROADMAP.md#22-dockerfile-syntax

@jakirkham
Author

At https://docs.docker.com/docker-hub/builds/ under the "Webhook chains" section it talks about having one build trigger other builds by sending POSTs. Then under "Remote Build triggers" it talks about sending a POST to a URL to force a build. I'm guessing you can send the webhooks POST to the remote build trigger URL. So, hopefully, you can force the build of your original image to kick off a "test" build that then tags/deploys/etc... if successful.

There are webhooks, and they can trigger builds. This use case (one Docker Hub build's completion triggering another Docker Hub build) is common enough that you can actually just link Docker Hub builds. However, I don't believe triggering specific tags is implemented ( docker/hub-feedback#363 ). That being said, maybe one could have a linear chain of three repos where the third simply has a FROM statement pulling in the first one.

Lots of people have asked for "nested builds" and I think your use case would fit nicely under that because they, I believe, want to have a Dockerfile build one image and then use that image in a secondary build.

Hmm, interesting. Maybe a bit overpowered for this case, but it looks like it could work. Leaving an explicit tag is a nice touch. I'm not sure how that would be handled somewhere like Docker Hub. Would this depend on dind? If so, I think that would be a showstopper for building anywhere other than one's local machine. That being said, I can imagine ways to do this with docker build alone. I'd have to think about this potential solution a bit more, but I suppose there is time.

Do you have links for some of the other use cases? I'd be curious what they are.

All speculation since we don't have lots of these features yet. See: https://github.com/docker/docker/blob/master/ROADMAP.md#22-dockerfile-syntax

Right. I am aware of this. I wanted to raise this point while I was thinking about it even if it will be blocked at present.

@duglin
Contributor

duglin commented Oct 14, 2015

See #7115 for one use case/issue.

I wasn't thinking of nested builds using dind so much as invoking a sibling docker build command on the same Docker host as the first build.
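The sibling-build idea can be sketched as a short script on the build host (image names and the `Dockerfile.test` file are hypothetical; `set -e` aborts the script, and so skips the tag/push, if any build step fails):

```
#!/bin/sh
set -e

# Build the real image under a throw-away tag.
docker build -t myapp:candidate .

# Sibling build: a test image FROM myapp:candidate.
# A failing test fails this build and aborts the script.
docker build -f Dockerfile.test .

# Only reached if the tests passed.
docker tag myapp:candidate myapp:latest
docker push myapp:latest
```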

@jakirkham
Author

Thanks @duglin.

@duglin
Contributor

duglin commented Oct 14, 2015

Going to close this since I don't think there's anything actionable at this time.

@duglin duglin closed this as completed Oct 14, 2015
@jakirkham
Author

Sounds good. Thanks again to everyone for their thoughts on this issue.

@jakirkham
Author

So, I have given this more thought and really feel that what I proposed here is the simplest solution. I feel the syntax proposed in ( #7115 ) is a bit too complex to follow in general and adds unnecessary complexity to the use case described here. It would be nice to revisit this.
