ingress network(s) not shared across swarm #25386
Same issue here. We are currently running all our multi-host environments on Docker 1.11 using Consul, and it works perfectly. We created a new environment on DigitalOcean with 5 hosts, installed Docker 1.12, made the first node a manager, and joined the other nodes. Creating services and sharing them across hosts all works fine. Once we update one of the services to change a parameter, it starts tearing apart: DNS resolution of existing services fails and load balancing of existing services starts misbehaving. We used the --listen-addr and --advertise-addr flags while joining the nodes into the cluster.
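For readers hitting the same thing, here is a minimal sketch of a 1.12 init/join using those flags; every IP address and the token below are placeholders, not values from this thread:

```sh
# On the first (manager) node: advertise the address the other nodes should dial.
docker swarm init --advertise-addr 10.0.0.10 --listen-addr 10.0.0.10:2377

# On each worker: join with the token printed by `swarm init`.
# The token and addresses here are placeholders.
docker swarm join \
  --token SWMTKN-1-<placeholder> \
  --advertise-addr 10.0.0.11 \
  --listen-addr 10.0.0.11:2377 \
  10.0.0.10:2377
```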
You should not be directly using the ingress network; that is for the routing mesh.
@tectoro @justincormack There are no firewalls, and time is NTP-synced. Looking at the amount of similar issues, wasn't the 1.12 GA a bit rushed, eh...?
Networks are only visible on nodes when a task connected to that network is running on that node.
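To illustrate the point, a small sketch; the network (`appnet`), service (`app`), and image names are assumptions for the example, not taken from this thread:

```sh
# On a worker that has no task attached to the user-defined overlay yet,
# that overlay does not show up in the listing:
docker network ls --filter driver=overlay

# On a manager, schedule a service onto that network:
docker service create --name app --network appnet --replicas 3 nginx:alpine

# Once a task of `app` lands on the worker, re-running the listing
# there now includes `appnet`:
docker network ls --filter driver=overlay
```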
@justincormack I've raised this issue (#25396); it will hopefully avoid this issue #25386 (comment) in the future.
@justincormack
Let's proceed:
It's indeed running
Now I can see the network on the worker, but only with the worker's own container:
And again, from inside the container:
So we've come full circle here - you were perfectly right about the overlay network visibility, though. @rogaha We have already drifted away from the ingress network.
@justincormack Actually I started the whole setup with an overlay network. Since it did not work as expected, I tried with the default "ingress" network. Also, I did the whole switch-off-firewall, NTP-sync thing. No luck.
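For anyone else following, a minimal sketch of wiring services to a user-defined overlay network rather than touching `ingress`; the network, service, and image names are illustrative only:

```sh
# Run on a manager node.
docker network create --driver overlay appnet

# Attach the services to the user-defined overlay; published ports still
# go through the routing mesh, but service-to-service traffic stays on appnet.
docker service create --name mongo --network appnet mongo:3.2
docker service create --name web --network appnet --publish 8080:80 nginx:alpine
```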
Cruising through issues here on GitHub, I have encountered #24996. Nevertheless, I have started the full stack within
Then, I have checked the logs of
What is that
This might be the right track - here is some error output from another node:
Well, hard to find such
Maybe I have drifted too far.
@Garreat
@mrjana Just want to add here: I have been using just the service name, e.g. "mongo", which seems to stop working after a couple of service updates in the cluster. It works absolutely fine when we create all the services afresh. Once we start updating some of the services, they are either not able to resolve the service name or not able to reach the service port. Also, I have observed that the service-to-container mapping sometimes gets corrupted, wherein when I hit a service I see some other service processing that request.
@tectoro If you are seeing issues after a service update you are most likely hitting #24789 or any number of other issues created in GH that are variants of it. This is already fixed in moby/libnetwork#1370 and it will make its way to Docker in a 1.12.1 bug-fix release.
I think you are right @mrjana. This
How does this relate to https://medium.com/@lherrera/poor-mans-load-balancing-with-docker-2be014983e5#.epn5cwcd6 ? Thank you, now it's mostly clear!
@Garreat If you want to terminate HTTPS in your LB you need nginx. If you can terminate it in your app you can directly use the built-in load balancer. But even using nginx as an LB is pretty easy: you can create nginx as a service with ports exposed, and in addition you can connect the nginx service and your app service to the same network. Your app service can run in the default vip mode or in dnsrr mode. If you run your app service in dnsrr, then when you query the app service name you will get the IPs of all the containers that are part of the service; you can populate nginx using that information. If you run your app service in vip mode, then you can configure nginx to just point at the service name as the backend and the vip will automatically do the load balancing.
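To make that concrete, a rough sketch of the two setups; the service names, network, and images are placeholders rather than anything from this thread:

```sh
# vip mode (the default): nginx can simply proxy to the service name "app",
# and the virtual IP spreads requests across the tasks.
docker service create --name app --network appnet --replicas 3 my-app-image

# dnsrr mode: resolving the service name returns the IPs of all tasks,
# which you can use to populate nginx upstreams yourself.
docker service create --name app-dnsrr --endpoint-mode dnsrr \
  --network appnet --replicas 3 my-app-image

# nginx itself as a service on the same network, with its port published.
docker service create --name lb --network appnet --publish 443:443 nginx:alpine
```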
That helps a lot, thank you! Which mode is the default? Awesome stuff, thank you so much @mrjana. 5 stars!
@Garreat vip mode is the default. BTW, I am going to close this issue. Please reopen it if you don't think this should be closed. We can still continue the conversation here even if the issue is closed.
Ok, with this knowledge I got my app running pretty nicely. I have seen this statement that eventually
Thank you
@Garreat subscribe to #25303 to follow the discussion on additional options being added to services
Looks like #24469
@thaJeztah that's good; I'm looking forward to the functionality described in #24469. Also, I'm experiencing some of the mesh/DNS misbehaviors described by @tectoro. Hopefully fixed in 1.12.1. I'm OK with closing this thread. Thanks again, the beer is on me!
Cheers! 🍻
I could use some clarification, as I'm running into this issue and it sounds like there are a lot of red herrings going on here (in particular improper use of the ingress network). Here's my situation:
Is this issue potentially fixed in
Additionally, the official documentation appears to be behind 1.12 - we don't actually need an external key-value store any longer, correct? If I'm reading the above right, it's basically saying to use a load balancer to balance between each host in the swarm and circumvent Docker's internal load balancing - am I reading that right?
EDIT: I got this working on AWS by also opening up all connections and protocols in the AWS security group. See here for docs on ports that need to be open and able to communicate, although these might be slightly old for version 1.12.
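For reference, swarm mode needs the following ports reachable between all nodes; the `ufw` commands are just one illustrative way to open them, adapt them to your firewall or cloud security group:

```sh
# 2377/tcp     - cluster management traffic (to managers)
# 7946/tcp+udp - node-to-node gossip / control plane
# 4789/udp     - VXLAN overlay data plane
ufw allow 2377/tcp
ufw allow 7946/tcp
ufw allow 7946/udp
ufw allow 4789/udp
```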
In my case, I use nginx to balance the service
SSL termination part:
etc. This syntax is enough, as I run
Keep in mind that my environment is a more classic static VMware / physical farm. No shiny AWS load balancer here ;). Btw, for now it's clear that once you fiddle with
This new swarm mode is different from the old one. You select an approach:
Another indirect implication is... if you want your application to pull any cluster state info, you place it on
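For reference, a small sketch of how the two endpoint modes look from inside a container; the service names (`app` in vip mode, `app-dnsrr` created with --endpoint-mode dnsrr), the shared overlay network, and the presence of `nslookup` in the image are assumptions for illustration:

```sh
# In vip mode (default), the service name resolves to a single virtual IP:
nslookup app

# tasks.<service> resolves to the individual task IPs, which is useful if
# the application wants to discover every backend itself:
nslookup tasks.app

# In dnsrr mode, resolving the service name itself already returns
# the IPs of all tasks:
nslookup app-dnsrr
```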
Output of `docker version`:

Output of `docker info`:

Additional environment details (AWS, VirtualBox, physical, etc.):
VMware, physical
All machines in same VLAN (all traffic open); OS firewalls disabled
Steps to reproduce the issue:
Describe the results you received:
on node 1:
on node 2:
Describe the results you expected:
Inspecting the network should show all containers attached.
Meanwhile, I can only see the task containers running on the node where `docker network inspect ingress` was run. Effectively, my whole setup makes no sense: 502 Bad Gateway.
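One illustrative way to compare what each node reports for the `ingress` network (the `--format` template just narrows the output to the container list):

```sh
# Run on every node and diff the output; only locally running task
# containers appear under "Containers".
docker network inspect ingress --format '{{json .Containers}}'
```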
Additional information you deem important (e.g. issue happens only occasionally):
This was run inside app.2:
I have created another overlay network on the master node, but only the second master node could see it in `docker network ls` output (?).
All nodes are in the same VLAN (all traffic open); OS firewalls disabled
I had no issue running this entire multi-host overlay stack pre-1.12.
Suggestions?