Docker SWARM labels on nodes are not recognized after update to docker v17.05.0-ce with docker stack deploy and yaml #33338
I tried with a super simple whoami service on the 5 nodes, and scaling does seem to spread it across different nodes.

Could it be that the command causes a problem with the template variables?
@gdeverlant I'm not aware of any constraint changes between 17.03 and 17.05. There could be different reasons. A few things to check are node resource availability (e.g., your task requests an amount of memory that the node doesn't have), node plugin availability, and node network/volume availability (the requested network doesn't exist on the node). I noticed in the screenshot that some tasks were failing. What's the reason for the failures? If it's not clear, you may simplify your service to test which specs in the YAML start to break the task spreading.
You cannot explain why a simple cluster of whoami seems to work but a slightly more complex scenario fails. Don't worry, I don't have any memory issues: all devices have 2 GB of RAM free and unused, and nothing else is running.
OK, the Docker Swarm documentation says that certain ports should be open on each node so that swarm can communicate between the nodes: https://docs.docker.com/engine/swarm/swarm-tutorial/#open-protocols-and-ports-between-the-hosts This is my iptables configuration for the 10 nodes:
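For reference, these are the ports the linked tutorial requires between swarm nodes. A sketch for checking them, assuming a POSIX shell on each node (the actual firewall tool in use, e.g. `ufw`, is an assumption and shown only in comments):

```shell
#!/bin/sh
# Ports Docker Swarm needs between nodes (per the swarm tutorial):
#   2377/tcp      cluster management traffic (manager nodes)
#   7946/tcp+udp  node-to-node gossip/discovery
#   4789/udp      overlay network (VXLAN) data traffic
# With ufw, opening them would hypothetically look like:
#   sudo ufw allow 2377/tcp
#   sudo ufw allow 7946
#   sudo ufw allow 4789/udp
# Print the expected list so each node's iptables output can be diffed against it:
for p in 2377/tcp 7946/tcp 7946/udp 4789/udp; do
  echo "$p"
done
```

If any of these ports is filtered between two nodes, gossip-based node state can go stale, which in turn can affect scheduling decisions.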
@gdeverlant When you do `docker service scale arangodb3_cluster=5`, do the tasks select server8 right away, or do they select other nodes, fail, and eventually get rescheduled to server8?
I think we need to plan a TeamViewer session so that you really believe it is a bug.
@gdeverlant I don't doubt there is a problem. I'm trying to see what the scheduler's decision is when scheduling the tasks. Your input is helpful for us to narrow down the problem.
It also seems that the templates are not working with docker stack deploy and docker-compose.yml.
This is the error log from the servers.
As you can see, the server cannot start because the template variables are not parsed by Docker:

You can see the output: `'tcp://cluster{{.Task.Slot}}:8529'`
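A hypothetical compose fragment that would reproduce that literal output (the service name, image, and `arangod` flags are assumptions, not taken from the issue). The point is that, in a stack deploy on this Docker version, a Go template placeholder in `command` is not expanded, so the process receives the string verbatim:

```yaml
# Sketch only: {{.Task.Slot}} in `command` is passed through literally,
# so arangod sees 'tcp://cluster{{.Task.Slot}}:8529' instead of e.g.
# 'tcp://cluster1:8529' for the task in slot 1.
version: "3.2"
services:
  cluster:
    image: arangodb/arangodb
    command: arangod --cluster.my-address tcp://cluster{{.Task.Slot}}:8529
```

For comparison, the documented template support at the CLI level covers flags such as `--hostname`, `--env`, and `--mount` on `docker service create`, e.g. `docker service create --hostname="{{.Node.Hostname}}-{{.Service.Name}}" …`.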
@gdeverlant Let's focus on one problem per issue. You may open a separate issue for the template problem. For your original issue, where tasks are not distributed evenly, what's the output from
This is what I get:
From the output of
Correct! The 2 other servers should be server11 and server10, not server9 and server12 twice. The scheduler is not able to find the 2 other nodes with the same label constraints.
Do you think you could have a look at the other problem, with templates? Link: #33364
Description
This seems to be a bug.
After upgrading from 17.03 to 17.05, the same YAML file I used before is no longer deployed as expected by Docker.
5 of the 10 nodes in my swarm have the following label constraints set on each node:
The last service from the YAML gets deployed on node number 8 only.
When I scale that service to 5 replicas, all 5 replicas are deployed on the same node, which should not happen. Docker should find the 4 other nodes with the same node labels and spawn one task on each node.
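A minimal sketch of the expected setup (the label name `cluster=arangodb`, service name, and image are assumptions for illustration, not taken from the issue). With a node-label constraint like this, the scheduler's spread strategy should place the 5 replicas one per matching node:

```yaml
# Hypothetical stack file: 5 replicas constrained to the 5 labeled nodes.
# Labels would be set beforehand with:
#   docker node update --add-label cluster=arangodb <node-name>
version: "3.2"
services:
  cluster:
    image: arangodb/arangodb
    deploy:
      replicas: 5
      placement:
        constraints:
          - node.labels.cluster == arangodb
```

Among the nodes satisfying the constraint, the scheduler normally prefers the node running the fewest tasks of the service, which is what should yield a one-task-per-node spread.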
Steps to reproduce the issue:
Describe the results you received:
screenshot
Describe the results you expected:
Scaling should not place 5 instances of the same service on one node; it should place one instance on each of the 4 other matching nodes.
Additional information you deem important (e.g. issue happens only occasionally):
This problem appeared when I upgraded Docker from 17.03-ce to 17.05-ce.
Output of `docker version` (manager host):
Output of `docker info` (manager host):
Additional environment details (AWS, VirtualBox, physical, etc.):
Manager host
Worker nodes