
Adding static IP option (--ip and --ip6) for Docker Services #29816

Closed
ventz opened this issue Jan 2, 2017 · 7 comments
Labels
area/networking area/swarm kind/feature

Comments


ventz commented Jan 2, 2017

Following up with a dedicated issue for:

https://github.com/docker/docker/issues/25303#issuecomment-269867258

This is a feature request/suggestion for adding a static IP option on the VIP overlay (ingress network) for Docker Services.
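
For illustration, the requested flags might look something like this (hypothetical syntax -- these flags do not exist on "docker service create" today, and the names/addresses are made up for the example):

    # Hypothetical: pin the service VIP on the customer overlay to fixed addresses
    docker service create \
      --name cust01-webservice \
      --network cust01-net \
      --ip 10.0.0.251 \
      --ip6 fd00::251 \
      nginx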

Real example to summarize need:

  • You have 50 VMs, each running Docker. They are all on 10.0.0.0/24, that is:
    10.0.0.101
    10.0.0.102
    10.0.0.103
    ...etc...

  • You want to run different web services for customers, with at least one overlay network per customer. Let's assume you have:
    cust01
    cust02
    cust03
    ...etc...

  • Each customer has multiple public IPs dedicated to them:

** cust01 has:
1.2.3.4
5.6.7.8

** cust02 has:
9.10.11.12
13.14.15.16
17.18.19.20
21.22.23.24

** cust03 has:
25.26.27.28
29.30.31.32

...etc...

  • Each PUBLIC IP is mapped to specific RFC1918 customer IPs.
    ex:
    1.2.3.4 -> 10.0.0.251
    5.6.7.8 -> 10.0.0.252
    ...etc..

  • Now you want to distribute each customer's services (let's assume a simple web server).
    You create an overlay net for each customer, and you deploy their web containers.

  • Let's assume that:
    ** each container runs on tcp/80
    ** you will have some sort of LB/haproxy/nginx solution that backends to tcp/80 by service name (overlay DNS) and then exposes tcp/80 externally

So you have:
cust01-webservice (10 containers, each running on tcp/80, with a publish port of tcp/80 via HAproxy)
cust02-webservice (10 containers, each running on tcp/80, with a publish port of tcp/80 via HAproxy)
...etc...
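
A sketch of that setup with today's CLI (the image and names are illustrative):

    # One overlay network per customer
    docker network create --driver overlay cust01-net

    # Ten cust01 web containers, reachable inside the overlay on tcp/80
    docker service create \
      --name cust01-webservice \
      --network cust01-net \
      --replicas 10 \
      nginx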

Here's the problem

You deploy a LB/proxy (haproxy, nginx, etc.) for each customer and publish/expose it on tcp/80. It points to the service VIP (cust01-webservice-container01, cust01-webservice-container02, etc...).

Now you need to publish that tcp/80 as a service port so that you can reach it externally. Here you are essentially stuck: you cannot have more than one customer/service on that port.

So now you have:
1.2.3.4 -> 10.0.0.101 -> (haproxy/nginx) -> VIP(webservice-container01, webservice-container02, webservice-container03,etc...)

As you publish the first customer, it takes over 0.0.0.0:80 on every node.
You can't specify that it should be published only on 10.0.0.101.

So now you have published cust01's web service (via their LB/proxy/nginx) on ALL of your local IPs in the swarm (since the routing mesh listens on every node).
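
A minimal sketch of the collision, with haproxy standing in for the LB:

    # Publishing cust01's LB grabs 0.0.0.0:80 on EVERY node via the routing mesh
    docker service create \
      --name cust01-lb \
      --network cust01-net \
      --publish 80:80 \
      haproxy

    # This then fails: tcp/80 is already taken swarm-wide
    docker service create \
      --name cust02-lb \
      --network cust02-net \
      --publish 80:80 \
      haproxy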

^ This is a very, very common scenario, and I think everyone who is not using public cloud resources, but instead a private datacenter or an AWS-VPC-like setup with full control of the subnet, is running into this.

Additional Information/Explanation

A few use cases as examples:

  • If "docker service create" is designed to replace "docker run", then this simply makes sense as a one-to-one feature: porting over the ability to assign a static IP (see the sketch after this list).

  • In datacenters where the setup includes existing blocks of routed RFC1918 subnets, network plugins like macvlan for the overlay network make perfect sense. At that point, to bridge into the existing routed subnet, it's difficult to deal with "dynamic" service VIPs: you cannot pre-configure your infrastructure. Generally you will open a few IPs as the "load balancer/externally available" IPs, or a few that sit behind a 1-to-1 NAT.
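
On the first point: "docker run" already supports a static IP on a user-defined network with a known subnet, so the request is to carry the same knob over to services. A sketch with assumed names:

    # Works today with docker run: a user-defined network with a known subnet...
    docker network create --subnet 10.0.0.0/24 cust01-bridge

    # ...lets you pin the container to a specific address
    docker run -d --network cust01-bridge --ip 10.0.0.251 nginx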

In addition to the datacenter point above -- generally you want to utilize multiple internal IPs. You have two options as of now:
1.) Publish and utilize a single IP
or
2.) Alias different IPs on the host interfaces and publish -- which will publish that port to ALL of the IPs -- which is no good.

You are essentially limited to having 1 docker host PER each non-routed internal IP. This is a huge limitation.
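
A sketch of why option 2 falls short (eth0 and the addresses are assumptions from the example above):

    # Alias an extra customer IP on the host interface
    ip addr add 10.0.0.251/24 dev eth0

    # ...but a swarm publish has no host-IP field, so the port is still
    # exposed on ALL host addresses, including the new alias
    docker service create --name cust01-lb --publish 80:80 haproxy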

An example use case would be running multiple services on the same port (say, a web service or a load balancer for different apps). As soon as you run one service on tcp/80, every node in your cluster uses up that port, and you are blocked from running anything else on tcp/80. With the ability to provide the IPs, this becomes easy and portable. Currently, without static IPs, you would have to assign the IPs as aliases on the Docker hosts' interfaces and then configure each load balancer to listen on that particular IP. This is messy, it makes it much harder to add more IPs, and it makes it harder to move across servers/environments.
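
For contrast, plain "docker run" can already scope a published port to a single host IP; there is no equivalent field on "docker service create --publish":

    # docker run today: bind the published port to one host address only
    docker run -d -p 10.0.0.251:80:80 haproxy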

Also related to option 2 above, but specifically about LBs: if you are using a LB (ex: nginx, haproxy, etc.), you would need to run it as a service on the swarm distributed overlay network in order to plug into the existing services. At that point, you need to know those IPs ahead of time in order to map them as destination NATs. Without SDNs this becomes a much more complicated task, since you cannot pre-map your infrastructure/network side ahead of time. There is also the potential that the IPs will change on a re-create/re-deploy.
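
This is the kind of pre-mapping that needs stable internal IPs; an iptables sketch using the example addresses:

    # 1-to-1 NAT: public 1.2.3.4 <-> internal 10.0.0.251, configured in advance
    iptables -t nat -A PREROUTING  -d 1.2.3.4    -j DNAT --to-destination 10.0.0.251
    iptables -t nat -A POSTROUTING -s 10.0.0.251 -j SNAT --to-source 1.2.3.4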

@thaJeztah added the area/networking and kind/feature labels Jan 2, 2017

ventz commented Jan 6, 2017

Just referencing a few other good discussions about this:

#24317
And
#26696
And
#29176
And
#30963

@galindro

+1


ventz commented Jan 20, 2017

@thaJeztah - it seems like a whole lot of people were hoping this would make it into the 1.13 release.

Is there any news on this making it into 1.13.1 or any upcoming release?

I did notice the new --attachable option for overlay networks in the release notes.

That might be a temporary way of achieving the end goal, since it sounds like you can attach the LB/HAproxy with "docker run" to a specific static IP and at the same time to the swarm overlay network, and then have a static IP hit the overlay VIP. Essentially a bridge from the routed network into the overlay network. (Although I'm not sure if that will be allowed -- testing now.)

Update: it looks like you can't do this. If you create an --attachable network and pass it to a container with docker run, it becomes the secondary network. The only primary you can have is the default docker_gwbridge.
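
For reference, this is roughly the sequence that was tested (names assumed):

    # 1.13: create an attachable overlay...
    docker network create --driver overlay --attachable cust01-net

    # ...and attach a plain docker run container to it
    docker run -d --network cust01-net --name cust01-lb haproxy
    # Result: the overlay is the secondary interface; docker_gwbridge stays primary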

@nickweedon

+1

@cpuguy83 (Member)

Closing since this is a dup of #24170

@azzeddinefaik

+1

@bradmunz79

I need this so bad. Our entire move to Docker revolves around the ability to do this.
