
Creating networks via docker #150

Closed

squaremo opened this issue May 13, 2015 · 26 comments

@squaremo
Contributor

There's not much point in having the whole driver subsystem unless one can actually create a network. I don't see that as part of moby/moby#13060.

If it is a requirement not to add new commands to docker (e.g., docker network create), then perhaps it could be part of the syntax of the --net argument. For example,

docker run --net=weave:mynet

where weave refers to the driver, and mynet is a network created with that driver (if it doesn't already exist).

@mrjana
Contributor

mrjana commented May 13, 2015

@squaremo moby/moby#13060 is only the first of a set of PRs that will be made there. So the short answer is: it's coming. The idea is to implement the majority of the CLI handlers and remote API handlers in libnetwork itself and just hook them up to docker core with a small function. Introducing a new UI or remote API in docker needs more discussion, which is why the goal of moby/moby#13060 has been limited to modularizing the networking code out of docker and providing a clean interface to the networking code for docker or anybody else to use.

BTW, for the initial implementation of the CLI handlers, take a look at the libnetwork/client code.

@mavenugo
Contributor

Also, we are currently integrating libnetwork/client with libnetwork/api as we speak, and we will have a dnet tool that makes use of these. As @mrjana suggested, the docker CLI integration will follow soon after moby/moby#13060 is merged.

@squaremo
Contributor Author

Another requirement: the ability to attach more than one network in the same invocation is quite important. Typically one interface will provide general internet access, and another will be the "cluster" network (e.g., weave).

@dave-tucker
Contributor

@squaremo so assuming weave1 is a network created using the weave driver, you'd want --net=bridge --net=weave1?

@squaremo
Contributor Author

so assuming weave1 is a network created using the weave driver, you'd want --net=bridge --net=weave1?

Yes exactly. One wrinkle is that typically we'd want the bridge driver to set the default gateway, but the weave driver to provide (or at least influence) the /etc/resolv.conf.
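
To make the expectation concrete, here is a hypothetical invocation, assuming multiple --net flags were accepted and that weave1 had already been created with the weave driver (neither is true at the time of writing):

# hypothetical: multiple --net flags are not yet supported
docker run -ti --net=bridge --net=weave1 ubuntu sh -c \
  'ip route show default; cat /etc/resolv.conf'
# the default route would come via the bridge interface, while
# /etc/resolv.conf would be provided (or influenced) by the weave driver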

@dave-tucker
Contributor

Thanks @squaremo. @mavenugo, this seems like a sensible requirement. Are multiple occurrences of the --net flag going to be supported in the integration PR mentioned above? If not, can we add them?

@mavenugo
Contributor

@dave-tucker @squaremo no. I will be pushing mavenugo/docker@c0c7f37 shortly. Based on the discussions I followed for volumes, the idea is to introduce --network-driver instead of overloading the --net string. We need to find a way to work within these constraints.

@squaremo
Contributor Author

@mavenugo Does --network-driver coexist with --net; i.e., could one use

docker run --net=bridge --network-driver=weave -ti ubuntu

and expect a bridge interface as well as a weave interface?

@dave-tucker
Contributor

@mavenugo right, I'm not talking about overloading --net here; I'm talking about supporting multiple occurrences of --net, assuming that a network has already been created using the docker network create CLI.
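
For illustration, that flow might look like this (hypothetical syntax; neither docker network create nor repeated --net flags exist in docker at this point):

# hypothetical: create a named network with the weave driver, then
# attach a container to both the default bridge and the weave network
docker network create --driver=weave weave1
docker run -ti --net=bridge --net=weave1 debian:jessie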

@shettyg

shettyg commented May 21, 2015

If there are going to be multiple --net flags, is there an idea of how to pass network labels per --net?

@mavenugo
Contributor

@dave-tucker @squaremo @shettyg These are all valid and reasonable questions. There are a few trade-offs to be made between a simple, consistent UI and functionality. I will get back to you all later today.

@dave-tucker
Contributor

It hasn't been discussed here yet, but my expectation is that labels are namespaced per the docs

docker run -it --net=bridge --net=weave1 --net=vmware1 -l works.weave.foo=bar -l com.vmware.baz=quux debian:jessie

A driver should get passed all labels, but only respond to those in its namespace.

@squaremo
Contributor Author

It hasn't been discussed here yet, but my expectation is that labels are namespaced per the docs

docker run -it --net=bridge --net=weave1 --net=vmware1 -l works.weave.foo=bar -l com.vmware.baz=quux debian:jessie

A driver should get passed all labels, but only respond to those in its namespace.

Ah, but the label namespaces correspond to the driver name, rather than the network name. So if a container has two endpoints provided by the same driver, how will that driver know which labels apply to which endpoint?

@dave-tucker
Contributor

Prefix the label with the network name? E.g., works.weave.weave1.foo
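
Under that convention, labels for two networks served by the same driver could be disambiguated like so (hypothetical syntax; weave2 is an assumed second network on the weave driver):

# hypothetical: the network name is embedded in the label key
docker run -it --net=weave1 --net=weave2 \
  -l works.weave.weave1.foo=bar \
  -l works.weave.weave2.foo=baz debian:jessie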

@mrjana
Contributor

mrjana commented May 21, 2015

@squaremo @dave-tucker @mavenugo Is it very important to support multiple networks in the docker run command? docker run can be used to join the initial network, and there are always going to be network service join commands to join additional endpoints after the initial join. I know this is racy to some extent if the application wants to communicate on the "cluster" network immediately, but we are probably risking introducing something hastily here. Please let me know what you think.

Also, the labels in the docker run command are always going to be considered container labels and are given to the driver only on joins.

@shettyg

shettyg commented May 21, 2015

There are always going to be network service join commands to join additional endpoints after the initial join. I know this is racy to some extent if the application wants to communicate in the "cluster" network immediately but then we are probably risking introducing something hastily here.

@mrjana, the raciness may be important IMO. Many badly written applications will simply fail when they can't reach their peers. So you are effectively expecting applications to be written with retry logic around IP reachability to a peer.

@mrjana
Contributor

mrjana commented May 21, 2015

@shettyg I am not saying solving the raciness is not important. But it is much more important to get the UI right; otherwise it is very difficult to revert.

@bboreham
Contributor

@shettyg I think the race condition manifests in ways that are worse than "simply fail". E.g., if your container has a service that listens on 0.0.0.0, it will listen on all interfaces that are active at the time of the listen call; it will not pick up interfaces added later.

@squaremo
Contributor Author

I know this is racy to some extent if the application wants to communicate in the "cluster" network immediately

It's already the case that you can add a weave interface to a container after the fact; avoiding the extra command (and the entailed race) is the primary motivation for developing a plugin.

Weave needs containers to have an interface on the bridge network as well, and for this not to be racy or require extra commands, since it's used to provide name resolution.

@shettyg

shettyg commented May 26, 2015

Since endpoint creation has been separated out, would something like the following not work for anyone?

  • docker network create NETWORK --driver=driver1 --labels foo=bar
  • docker endpoint create NETWORK --labels alice=bob

With the UUID returned by the endpoint creation (similar syntax to --net=container:containerid):

  • docker run --net=bridge --net=driver1:EP_ID1 --labels driver1.ep_id1=foo --net=driver2:EP_ID2 --labels driver2.ep=bar command

The above only calls join() and skips createendpoint().
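
Put end to end, the proposed flow would look something like this (hypothetical syntax throughout, assuming docker endpoint create prints the new endpoint's UUID):

# hypothetical: pre-create the network and endpoint, then only join() at run time
docker network create mynet --driver=driver1 --labels foo=bar
EP_ID=$(docker endpoint create mynet --labels alice=bob)
docker run --net=bridge --net=driver1:$EP_ID debian:jessie some-command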

@squaremo
Contributor Author

@shettyg There is an important difference between an endpoint that is created during docker run and an endpoint created ad-hoc; the former is garbage collected when the container stops, but the latter will hang around until deleted explicitly. That latter behaviour requires much more diligence on the part of the user, especially since it won't necessarily be obvious when those endpoints can reasonably be deleted.

@shettyg

shettyg commented May 26, 2015

@squaremo
I see what you mean. I wonder whether we can do both. E.g., one could do:
docker run --net=bridge --net=driver1:mynet

In the above case, you would call createendpoint() and join().

Alternatively,
docker run --net=bridge --endpoint=driver1:uuid

In the above case, you would only call join().
My suggestion likely won't fly because it introduces a new docker CLI option, '--endpoint'.

(According to @mrjana, the labels provided to createendpoint() and join() are different, so that is another constraint this has to work with.)

@squaremo
Contributor Author

I believe moby/moby#13441 will address this issue, in part, but it seems to be caught up in bikeshedding :(

@shettyg

shettyg commented May 28, 2015

It looks like Solomon's 'docker run --publish-service db.work-dev' is similar to my 'docker run --endpoint=driver1:uuid'.

@tomdee
Contributor

tomdee commented Jul 14, 2015

@mavenugo @dave-tucker can this issue be closed now that it is tracked under moby/moby#14593?

@dave-tucker
Contributor

Thanks @tomdee, closing in favour of moby/moby#14593
