Production-ready process monitoring #1311
Comments
I would like to find an acceptable solution for the 0.6 release. We don't need to implement all three, but we need to agree on at least one and make it work well out of the box.
#507 is going to be taken care of by #1249. @shykes Is there anything else needed for #1249? #1249 is going to be the first version of the signal-passing system. As Guillaume has said, it'd be a good idea to do the same thing for any signal, but that also requires some changes on the API side to allow us to pass an arbitrary signal. SIGINT/SIGTERM should be enough for now, though. #503 (standalone mode) would be yet another thing to test, so I'd suggest delaying that one until after people see what docker run with features from dockrun can do for them.
Option #1 sounds like it is better handled by existing tools (e.g. supervisord), so that's my least favorite option. Option #2 is where I lean, but I'm wondering if there are things that will be hard or impossible to do in terms of creating a robust remote proxy for a container. @unclejack, do you see any issues towards that eventual goal? It seems Option #3 breaks the Docker client/server model.
@brynary I think it's doable. The REST API will be extended to send any signal directly to the process running in the container, so we won't be restricted to SIGTERM/SIGINT in the future. I'll update
Option #3 does break the client/server model, but it also adds not only signal behavior, but all other aspects of processes as well (resource monitoring, for example). I like my process hierarchy. :) Separating the concerns of management from actual execution seems more natural, IMHO. The executor/"standalone run" could probably register with the docker daemon to still allow management through the remote API, etc., thus preserving the current client/server functionality.
Ah yeah, those benefits make sense. I like the idea of standalone still being registered in the Docker daemon if that doesn't cause other issues.
Would option number 3 mean that the current functionality of spinning up containers and having them managed by the daemon goes away? If so, I would certainly miss it. Also, if the standalone run just registers itself with the daemon, does that mean I can control it via the daemon? If I can't control it, then there's not much point in registering. There are two use cases that I can see:

1. Containers run standalone and monitored by an existing external solution.
2. Containers spun up and managed by the docker daemon itself.
The former would be good for running standalone services, or running coordinated services in an environment where you already have a monitoring solution in place. I can see several places that I can use it in this respect. The latter is convenient for small to medium PaaS implementations that don't necessarily want to have their own top-level monitoring but rather democratize that into the containers themselves. Perhaps we could leave
Well, then how much management does Docker do? I've been looking into using Docker in production for some small projects, and my main concern now is what happens when a container crashes. Does it restart? Does it notify me by email? There are plenty of process management tools that handle these questions already. That said, I do see the value of having dockerd spin up containers as well. Many containers don't require process management (e.g.
+1. Is it possible for
I'm starting to solidify my thinking here. So, relating this back to the original options described by @shykes:
Just my 2 cents.
Yeah, I really like having both
It is very useful. The use case that this supports is "I want to manage my containers via the docker daemon and use something other than process-level monitoring".

For instance, if I have a PaaS controller node and it talks to 10 remote docker daemons to spin up application containers, I might opt for application-level health checks over process-level ones (i.e. it doesn't matter if the process is up and running when it's not responding to web requests). In this case, if an application isn't responding, the controller should start the application on another node and stop it on the node it was running on. Of course the docker daemon shouldn't be responsible for that level of orchestration, but being able to completely manage containers via the REST API is vital to making that possible.
I agree that option 1 posted by @shykes should be avoided, as that sort of functionality is present in many existing applications. I think my ideal use case would be to be able to run a "docker exec" under a supervisord process on the host node. Not having that functionality, and not being able to easily monitor process trees for usage, was the biggest reason why I went with another solution over docker.
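For reference, delegating monitoring to an existing supervisor (rather than building option 1 into the daemon) could look like the supervisord fragment below. The program name, image, and timeouts are illustrative, and it assumes `docker run` stays in the foreground and proxies signals to the container:

```ini
[program:myapp]
; `docker run` must stay in the foreground so supervisord can watch it.
command=docker run --rm --name myapp my/image
autorestart=true
; Signal sent to the `docker run` process, which is expected to proxy it
; to the containerized process.
stopsignal=TERM
; Give the container time to shut down before a hard kill.
stopwaitsecs=30
```

This only works well once signal proxying in `docker run` is reliable, which is exactly what this issue is about.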
Big +1 for this. From what I understand, option 2 can achieve the same functionality as option 3, allowing daemon managers to use it as well.

We'd then benefit from a much shorter call stack, automatic ephemeral behavior (i.e. no container RW layers), little or no daemon interaction or HTTP API calls, natural UNIX process semantics and signal handling, no logging other than to stdout/stderr, etc.
+1 for
Has there been any activity on this? |
I believe that #2007 implements it! :-) |
It implements only option 2, but what about option 3 (docker exec)? Any news on that?
I believe that See: e0b59ab
This didn't make it into 0.7, I assume? Is there a timeline for when this is expected?
Would be really nice to get this working. I'm on 0.7.6, using the
Only SIGKILL gets recognised, and even then, only the docker run process is killed, not the container. |
Confirmed: on 0.7.6 I can't terminate a container with a TERM signal.
+1 Not working on 0.8.0 either. |
My preference would be Option #3, but in the meantime I implemented a bash script that wraps
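A wrapper along those lines is sketched below, under the assumption it follows the usual trap-and-forward shape. Here `sleep 30` stands in for the long-running `docker run` child, and a self-sent SIGTERM simulates an operator stopping the service; a real script would instead run `docker stop <name>` (or `docker kill -s TERM <name>`) in the handler:

```shell
#!/bin/sh
# Sketch of a signal-proxying wrapper around a long-running child process.
stopped=""

on_signal() {
    echo "caught signal, stopping child"
    # In a real wrapper: docker stop myapp   (or: docker kill -s TERM myapp)
    kill -TERM "$child_pid" 2>/dev/null || true
    wait "$child_pid" 2>/dev/null || true
    stopped=yes
}

trap on_signal TERM INT

sleep 30 &                      # stand-in for: docker run --name myapp my/image
child_pid=$!

( sleep 1; kill -TERM $$ ) &    # simulate an operator sending SIGTERM

wait "$child_pid" || true       # returns once the child exits or a signal arrives
echo "wrapper exiting cleanly"
```

Because the shell itself receives the signal while the child runs in the background, the wrapper stays free to react and shut the container down gracefully.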
Cleaning this one up since the restart policies and docker client signal proxying handle this. |
This should stay open because we don't have single process monitoring (e.g. starting and running a container without the docker cli/daemonless run). |
For docker to be fully usable in production, we need a robust and standard way to do the following:

* Run a long-running process in a container
* Pass signals (e.g. SIGTERM/SIGINT) through to that process
* Know when the process dies, and restart it or alert an operator
This could be implemented in different ways:

1. Build process monitoring into the docker daemon itself (restart on crash, notifications)
2. Make `docker run` a robust remote proxy for the container, forwarding signals and exit status
3. A standalone mode where the client executes and monitors the container directly, without the daemon in the middle
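As a rough illustration of what built-in monitoring (option 1) implies, here is a toy restart-on-exit loop; `my_service` is a hypothetical stand-in for the container's main process (e.g. a `docker run` invocation), and a real supervisor would add backoff, logging, and alerting:

```shell
#!/bin/sh
# Toy restart loop: rerun the service whenever it exits, up to a limit.
my_service() {
    echo "service ran"
    return 1                  # pretend the process crashed
}

max_restarts=3
count=0
while [ "$count" -lt "$max_restarts" ]; do
    if my_service; then
        status=0
    else
        status=$?
    fi
    echo "restart $count: service exited with status $status"
    count=$((count + 1))
done
```

Tools like supervisord already implement this loop well, which is the argument several commenters make against duplicating it inside docker.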