Static/Reserved IP addresses for swarm services #24170
Comments
Agreed, this is crucial functionality for some services. I suspect it might be a bit complicated to implement within the new swarm model, as for every 'service' at least two IP addresses exist: one virtual LB IP for the service itself, and then N additional IPs, where N = the number of replicas. I think what we really need is an option to deploy a service without replicas and without the LB layer: just a simple static IP configured (but still managed by swarm, with clustering, HA, failover, etc.).
This could be very useful, as I'm currently struggling with ActiveMQ Network of Brokers, and the
I am facing the same issue! Is there any update on setting a static IP for a container in swarm mode?
Please don't leave +1 comments on issues; you can use the 👍 emoji on the first comment instead. Implementing this feature is non-trivial for a number of reasons:
Also see #29816, which has some information for one use case.
Essentially, I'd like to use Docker Swarm's routing mesh as a load balancer. Being able to assign a (public) IP to a Docker Swarm service (not an individual container!), one could simply add the IP to one's DNS provider (e.g. Cloudflare). For example, I could then run my S3 service and web server with:
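Something like the following hypothetical invocations (an --ip flag on docker service create does not exist today; the images and addresses are made up, this just illustrates the wish):

```bash
# Hypothetical: bind each service (not an individual container) to its own public IP
docker service create --name s3  --publish 443:443 --ip 203.0.113.10 minio/minio
docker service create --name web --publish 443:443 --ip 203.0.113.11 nginx:alpine
```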
Swarm then tells the node which has the appropriate IP in its subnet to forward the requests. This way, one could also run multiple services on the same port within the same Swarm cluster (assuming different IP addresses and different domains). This is currently only possible with an additional load balancer such as HAProxy (which only supports TCP, for example).
Some licensed applications require a static IP for the license. I have licensed applications that I would like to deploy to Docker Swarm that require either a static IP or a MAC address for licensing. I think it would be initially acceptable to state a limitation that specifying either a static IP or MAC for a service implies that the scale must be one. Perhaps when scaling up, the new replicas would fail to start with a "static IP required" error message, informing the user that they need to go back and provide a static IP for a "static IP" service to start correctly.
This may help some people: if you combine the 'hostname' setting for a service with the 'endpoint-mode' setting of dnsrr, you get a hostname that resolves to the container IP. This may make some software happy to run in the swarm.
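For example (an illustrative invocation with made-up names; both flags exist on docker service create):

```bash
# Per the comment above: dnsrr skips the VIP, so the service name resolves to the
# task's container IP, and --hostname gives the container a stable name on the network.
docker service create --name db --hostname db.internal \
  --endpoint-mode dnsrr --network my-overlay mariadb:10.6
```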
Any news? Can we assign a static IP to a service?
I searched around, tried different possibilities, and was able to assign static IPs to containers. I am pretty new to Docker, so I don't know if this is the right way, though. I created a swarm. On the manager, I created an attachable overlay network with a subnet.
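Roughly like this (my own reconstruction of that step; the network name and subnet are assumptions):

```bash
# Attachable overlay network with an explicit subnet, created on a manager node
docker network create --driver overlay --attachable \
  --subnet 10.5.0.0/24 my_attachable_overlay
```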
My docker compose:
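Something along these lines (a sketch only; whether ipv4_address is honoured depends on how you deploy, and the names and addresses are illustrative):

```yaml
version: "3.7"
services:
  app:
    image: nginx:alpine
    networks:
      default:
        ipv4_address: 10.5.0.10
networks:
  default:
    external: true
    name: my_attachable_overlay
```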
Note the configuration under `default`.
We recently ran into a situation where the ability to reserve a VIP for a service would really help. We have a ZooKeeper service named "zoo1", and other services connect to it using "zoo1:2181". We found that sometimes when we shut down and restart the "zoo1" service, the IP address resolved for the host name "zoo1" changes. The reconnection mechanism in the ZooKeeper client library does not redo the DNS lookup when it enters the retry loop; instead, it holds on to the previous IP address. As a result, even after we bring the "zoo1" service back online, other services are never able to re-establish the connection to ZooKeeper. By the way, what exact circumstances would trigger a change to the VIP of a service?
@cpclass your example doesn't work.
(As it stands, services can't easily both talk to other services in the cluster and do multicast with other machines on the host network, which is what this sort of thing would enable.)
To add a use case: we want to add some services as a dedicated services stack within our swarm. Some of the services contain patches or are even our own implementations. Ideally they are tested by AQA and deployed and torn down automatically. Some need to talk to other stacks, such as the production stacks. Some of these services need to be reachable at a fixed IP from internal and/or external networks.
Another use case: using a container as a DNS server for other containers. (This could be easier to achieve if you could use a Docker container hostname as a DNS server entry via --dns.)
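For illustration, --dns does exist on docker service create, but it only accepts a literal resolver address, which is exactly why such a resolver container would need a static IP (the names and addresses below are made up):

```bash
# Only useful if 10.0.9.53 is guaranteed to remain the resolver's address;
# passing a service name to --dns, as wished for above, is not possible today.
docker service create --name app --network my-overlay \
  --dns 10.0.9.53 alpine:3.19 sleep infinity
```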
Another use case:
Another use case:
Use case: running a STUN or TURN server, or another similar type of server used to help P2P clients find each other, i.e. P2P discovery and NAT traversal. This is required for technologies like WebRTC.
Use case: a Samba AD domain controller. With a changing IP it just makes a mess of its DNS zone file and is not usable.
Another use case: setting up a MySQL cluster (mgm + ndb + sql nodes) requires giving the nodes IPs in the config files.
@khba that applies to a lot of other clustered things. For example, the Vertica database also uses static IPs for all nodes in the cluster.
Allow me to introduce you to the standard operating procedure for such things:
If you leave your DNS alive and you scale down / remove IPs, everything is going to fail, no matter if it's docker or whatever.
To me it looks like you do not have real administration experience in general. Before moving on to concepts like service redundancy, scaling and containerization, you have to understand how all of the platform's underlying services work and then you also have to understand the services you're going to deploy yourself. If you don't know how to manage those services individually, you cannot possibly hope to manage them in a more complex environment. It doesn't matter where the DNS is; it's still a DNS and the same principles apply.
Can you explain to me why you're on a holy war about DNS and its records' TTL, when these have nothing to do with my proposal? Stop derailing this topic.
Thanks @FrostbyteGR for clarifying it in a way that hopefully is clear even to someone who would think about using an internal Docker/Kube DNS for serving public records 😲 Anyway, as stated previously, I really like your design and would love to see it implemented in swarm.
Also, to avoid confusion and to really showcase why this is supposed to be static/predictable, I believe I should explain with more detailed examples how my suggestion is supposed to work (I also added a range option to it, plus another temporary solution suggestion):

Option 1. Require an amount of static IPs equal to the number of requested replicas

Individual assignment (aka: put in whatever addresses you want, as long as they're valid); see the sketch below:
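A rough sketch of what such a syntax could look like in a compose file (the ipv4_addresses key is purely illustrative of the proposal and does not exist; names and addresses are made up):

```yaml
version: "3.8"
services:
  app:
    image: nginx:alpine
    deploy:
      replicas: 3
    networks:
      app-net:
        # Proposed/hypothetical: exactly one reserved address per requested replica
        ipv4_addresses:
          - 10.0.1.11
          - 10.0.1.12
          - 10.0.1.13
networks:
  app-net:
    driver: overlay
```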
Case 1 - Deploying the stack
Case 2 - Downscaling to 2 replicas
Case 3 - Upscaling back to 3 replicas
Case 4 - One or more replicas crashes

Range assignment: I ultimately decided to also include this as an option in my suggestion, because it can operate the same way.
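Again only a sketch; the ipv4_address_range key is a hypothetical illustration of the range variant:

```yaml
version: "3.8"
services:
  app:
    image: nginx:alpine
    deploy:
      replicas: 3
    networks:
      app-net:
        # Proposed/hypothetical: replicas draw their reserved addresses from this range
        ipv4_address_range: 10.0.1.11-10.0.1.13
networks:
  app-net:
    driver: overlay
```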
The same cases apply:
Case 1 - Deploying the stack
Case 2 - Downscaling to 2 replicas
Case 3 - Upscaling back to 3 replicas
Case 4 - One or more replicas crashes
Option 2. Make
Any news here?
Hi guys, any update? Is Swarm going to end its life, despite what Mirantis said? 🐶
Still evaluating how to best meet this aging need. Maybe others are facing this, too. We're seeing Docker Swarm "take over" all of the host's IP addresses via iptables, despite having defined a dedicated Docker network for this purpose. In other words, it's not the host IP binding: Docker's iptables manipulation captures all traffic on ports 80/443/etc. for all IPs on the host. Therefore, this looks most immediately applicable (quoting user NewsNow1):

-----

We had a need to publish separate docker swarm services on the same ports, but on separate specific IP addresses. Here's how we did it.

Docker adds rules to the DOCKER-INGRESS chain of the nat table for each published port. The rules it adds are not IP-specific, hence normally any published port will be accessible on all host IP addresses. Here's an example of the rule Docker will add for a service published on port 80:

```
iptables -t nat -A DOCKER-INGRESS -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.18.0.2:80
```

(You can view these by running `iptables-save -t nat | grep DOCKER-INGRESS`.)

Our solution is to publish our services on different ports, and use a script that intercepts dockerd's iptables commands to rewrite them so they match the correct IP address and public port pair. For example: service #1 is published on port 1080, but should listen on 1.2.3.4:80.

We then configure our script (installed as /usr/local/sbin/iptables) accordingly:

```bash
#!/bin/bash
REGEX_INGRESS="^(.*DOCKER-INGRESS -p tcp) (--dport [0-9]+) (-j DNAT --to-destination .*)"
SRV_1_IP=1.2.3.4
SRV_2_IP=1.2.3.5

ipt() { /usr/sbin/iptables "$@"; exit $?; }

if [[ "$*" =~ $REGEX_INGRESS ]]; then
   echo "REQUESTED: $@" >>/tmp/iptables.log
   PRE="${BASH_REMATCH[1]}"; PORT="${BASH_REMATCH[2]#--dport }"; POST="${BASH_REMATCH[3]}"
   case "$PORT" in
      1080) ipt $PRE -d "$SRV_1_IP" --dport 80 $POST;;   # service #1: 1.2.3.4:80
      2080) ipt $PRE -d "$SRV_2_IP" --dport 80 $POST;;   # service #2: 1.2.3.5:80
   esac
fi
echo "PASSING-THROUGH: $@" >>/tmp/iptables.log
/usr/sbin/iptables "$@"
```

N.B. The script must be installed in dockerd's PATH ahead of your distribution's iptables command. On Debian Buster, iptables is installed to /usr/sbin/iptables, and dockerd's PATH has /usr/local/sbin ahead of /usr/sbin, so it makes sense to install the script at /usr/local/sbin/iptables. (You can check dockerd's PATH by running `cat /proc/$(pgrep dockerd)/environ | tr '\0' '\012' | grep ^PATH`.)

Now, when these docker services are launched, the iptables rules will be rewritten as follows:

```
iptables -t nat -A DOCKER-INGRESS -d 1.2.3.4/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.18.0.2:1080
```

The result is that requests for http://1.2.3.4/ go to service #1, while requests for http://1.2.3.5/ go to service #2. The script can be customised and extended according to your needs. It must be installed on all nodes to which you will be directing requests, and customised to each node's public IP addresses.

-----

More on this: some posts suggest removing the docker_gwbridge network, recreating it with `docker network create -o "com.docker.network.bridge.host_binding_ipv4"="192.168.1.151" docker_gwbridge`, and then running in host mode. Because docker_gwbridge is used by all services running on the node's engine (swarm or not), this "solution" precludes adjusting only some services and some IP addresses; it's too much, but in the opposite direction.

Usually when simple requests run into such incredible problems, it indicates a fundamental, overwhelming difference, like squirting a garden hose upstream into a river.
I figured out a workaround for those who absolutely cannot do without static IP addresses inside their swarm. DISCLAIMER: for this example we will assume the following topology:
Then, before I launch my service, I create a network exclusively for it. Afterwards, I go to any of my managers and expand my network's scope to the swarm. This way we're enforcing the internal DHCP to assign the IP we want.
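One way to approximate this (my reading of the approach, with assumed names and addresses): give the dedicated network an assignable range so narrow that the internal IPAM can only ever hand out the one address you want.

```bash
# Dedicated overlay network whose assignable range contains a single address
docker network create --driver overlay --attachable \
  --subnet 10.20.30.0/24 --ip-range 10.20.30.10/32 web-static-net

# Single-replica service; dnsrr avoids allocating a separate VIP, so the only
# address left for the task is 10.20.30.10
docker service create --name web --replicas 1 \
  --endpoint-mode dnsrr --network web-static-net nginx:alpine
```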
I've been tearing my hair out trying to either figure this out or work around it.
It's already tedious enough to have to run GlusterFS on each node of a swarm just so some containers don't reset to zero or roll back when the node they usually run on goes down for whatever reason. Now imagine that with metric servers and databases. Here's my use case: let's say I want to collect metrics, logs and so on. For maximum uptime I want these services to run on the swarm for high availability. Wouldn't it be better if these services just spawned on the same real network all these hosts share, on an IP that doesn't change, no matter which node is up or down? It would solve so many headaches.
@White-Raven I am afraid you are mixing unrelated things together.
@johny-mnemonic Oh well, funnily enough, just a week ago I was way more 'green' concerning Docker Swarm than I am now, and I had misunderstandings about Docker Swarm's networking. I ran GlusterFS to sync up the containers' config files and data and have some persistent storage. It's a homelab/dev environment, meaning I'm trying stuff out, so hard resets or kernel panics aren't unheard of; having a sturdy piece of volume-syncing software for high availability was very welcome.
@White-Raven no worries, we all start green ;-)
@johny-mnemonic you can directly message me on the Reddit account I linked in my GitHub profile! I managed to get the swarm working on this unsupported setup, but ended up giving up on LXC containers for swarm/k3s applications and going full VMs instead, because of the amount of tweaking that has to be done to make it work with the Proxmox kernel and AppArmor, which can break on updates.
Hi, may I humbly ask if there is any update on this and whether this feature is likely to be implemented? I too have had challenges with being unable to use static IP addresses for Docker Swarm containers and have had to use workarounds like setting the IP range to a /32 subnet on Docker networks. That is a pain to do with MACVLAN, because you can only have one Docker network per node with the same gateway, which means some of my containers using MACVLAN have been constrained to one device, defeating the purpose of running Docker in swarm mode in the first place. I truly appreciate the work and effort that goes into this project, as I love using Docker! Thank you.
We solved the issue by using docker in docker. So we have a "proxy" service with replica 1 for each kind of real service we want to create with a static IP (MACVLAN). In the end it is a huge pain that the options of swarm are not a superset of the normal container commands: you start developing something with just one container, and then it breaks when you want to add failover capability through Swarm. In the end, I think the only thing that is missing is a proper plugin for the internal DHCP that provides static assignment... I have forgotten the name of this subcomponent, but when we faced the problem 4 or 5 years ago, there was no willingness to create this component, since such things already exist as fee-paying services and the need doesn't match the needs of the majority of users. That the situation as it is is broken seems, from my point of view, to be considered OK. I therefore don't have much hope that things will change in the future, and in the end people will be forced to move to Kubernetes, so the project will be dead at some point.

EDIT:
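Roughly, such a proxy approach could look like the sketch below (this is only my reading of the idea, using the host's Docker socket rather than a full docker-in-docker daemon; the network name, image and address are assumptions):

```bash
# Assumed: a macvlan network "macvlan_lan" exists on every node, and the
# application must keep the address 192.168.1.50 even after failover.
docker service create --name app-proxy --replicas 1 \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  docker:cli sh -c '
    docker rm -f app 2>/dev/null
    # The proxy task (re)creates the real container with its fixed IP on whatever
    # node the swarm schedules it on; the blocking "docker run" keeps the task alive.
    exec docker run --name app --network macvlan_lan --ip 192.168.1.50 my-app:latest'
```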
Another use case I'm facing: I'm trying to run several FTP servers (~100), which must be accessible to clients through dedicated IPsec tunnels (IPsec is handled on my firewall, with the P2 restricted to the /32 of the corresponding FTP server). This is not possible without a static IP. Each FTP server requires 1 TCP port for the control channel and 100 TCP ports for the passive data channels, so it's not manageable to expose them using the swarm mesh. By creating an ipvlan network, I can connect containers directly to my firewall. The only missing bit is the ability to set a static IP on my containers. I tried writing a custom IPAM service, but the problem is that there's nothing to identify the container in the POST request on /IpamDriver.RequestAddress. At best I can get the MAC address (if my driver sets RequiresMACAddress to true), but as we can't set a fixed MAC address for services either, all we get is a randomly generated MAC...
From five years ago until now, diverse use cases needing a static IP in Swarm have been laid out. Nobody cares. I think it is a good exercise to stop thinking of Docker Swarm as a network-friendly container runtime. Good luck.
Indeed, and that's too bad... Swarm is really a simple and elegant solution, which would fulfil a lot of use cases (if it weren't abandonware).
Yes, Swarm is quite simple to set up and use compared to Kubernetes. Nevertheless, it seems to me that there is not much progress or vision for its future. As mentioned, it has been five years (in words: FIVE years) since this problem was raised, with zero progress. It is to some degree not bad that Swarm doesn't move as fast as Kubernetes, but a bit faster wouldn't hurt. I still have a ticket that has been open for more than a year and gets no attention. Even a simple "wontfix" would be better than no reaction at all.
Hi folks, I basically love Docker and discover new things every day. I am a network admin myself and I can only shake my head at many things in the Docker network implementation. I've been playing with Swarm for two days and I'm already seeing a new network problem... Static IP in Swarm mode :/
Oh, guys. I run into this problem a couple of times a year. And I cry every time.
I think this application bug would be easy to fix via a static IP address: milvus-io/milvus#25032.
This is a useful feature, but sadly it is still not implemented at the swarm level. I didn't expect such a feature disparity between Docker Compose and Docker Swarm. Is there any plan to implement it in the near future? This proposal makes sense to me: #24170 (comment).
Don't know. Try it and please report back.
Original issue description:

There are some things that I want to run on Docker but that are not fully engineered for dynamic infrastructure. Ceph is an example. Unfortunately, its monitor nodes require a static IP address; otherwise, it will break if restarted. See here for background: ceph/ceph-container#190

`docker run` has an `--ip` and `--ip6` flag to set a static IP for the container. It would be nice if this were something we could do when creating a swarm service, to take advantage of rolling updates and restart on failure. For example, when we create a service, we could pass in a `--static-ip` and `--static-ip6` option. Docker would assign a static IP to each task for the life of the service. That is, as long as the service exists, those IP addresses would be reserved and mapped to each task. If the service scales up or down, more IP addresses are reserved or relinquished. The IP address is then passed into each task as an environment variable such as `DOCKER_SWARM_TASK_IP` and `DOCKER_SWARM_TASK_IP6`.
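Illustrated, the proposal would look something like this (the --static-ip/--static-ip6 flags and the DOCKER_SWARM_TASK_IP* variables are the proposed interface, not something that exists; repeating the flag once per replica is just one guess at the syntax):

```bash
# Proposed/hypothetical: reserve one address per task for the life of the service
docker service create --name mon --replicas 3 --network ceph-net \
  --static-ip 10.0.40.11 --static-ip 10.0.40.12 --static-ip 10.0.40.13 \
  ceph/daemon mon

# Inside each task, the reserved address would then be available as, e.g.:
#   DOCKER_SWARM_TASK_IP=10.0.40.11  (and DOCKER_SWARM_TASK_IP6 for IPv6)
```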