
template variables not parsed in docker-compose.yml with docker swarm on docker v17.05.0-ce #33364

Closed
al-sabr opened this issue May 24, 2017 · 8 comments

al-sabr commented May 24, 2017

Description

It appears that template variables are also not parsed when a docker-compose.yml is deployed with docker stack deploy.

https://github.com/moby/moby/blob/master/docs/reference/commandline/service_create.md#create-services-using-templates

Found this bug while investigating #33338

Steps to reproduce the issue:

  1. Use any docker-compose.yml containing a template variable, for example:
version: "3"

services:

  agency: 
    image: arangodb/arangodb-arm64:3.2
    environment:
      - ARANGO_NO_AUTH=1
    command: arangod --server.endpoint tcp://0.0.0.0:8529 --agency.my-address tcp://agency:8529 --server.authentication false --agency.size 1 --agency.activate true --agency.supervision true --log.file /var/log/arangodb3/arangod.log
    volumes:
      - datas:/var/log/arangodb3
    networks:
      - traefik-net
    deploy:
        labels:
            - traefik.docker.network=traefik-net
        placement:
            constraints:
            - node.labels.isAgency==true
            - node.labels.arch==arm64
            - node.labels.hasHDD==true
                
  coordinator:
    image: arangodb/arangodb-arm64:3.2
    environment:
      - ARANGO_NO_AUTH=1
    command: arangod --server.authentication=false --server.endpoint tcp://0.0.0.0:8529 --cluster.my-address tcp://coordinator:8529 --cluster.my-local-info coordinator --cluster.my-role COORDINATOR --cluster.agency-endpoint tcp://agency:8529 --log.file /var/log/arangodb3/arangod.log
    #command: arangod --server.authentication=false --server.endpoint tcp://0.0.0.0:8529 --cluster.my-address tcp://{{.Service.Name}}:8529 --cluster.my-local-info {{.Service.Name}} --cluster.my-role COORDINATOR --cluster.agency-endpoint tcp://agency:8529 --log.file /var/log/arangodb3/arangod.log
    volumes:
      - datas:/var/lib/arangodb3
      - datas:/var/lib/arangodb3-apps
      - datas:/var/log/arangodb3
    networks:
      - traefik-net
    deploy:
        labels:
        - traefik.port=8529
        #- traefik.enable=true
        #- traefik.frontend.entryPoints=http
        - traefik.docker.network=traefik-net
        placement:
            constraints:
            - node.labels.arch==arm64
            - node.labels.isCoordinator==true
            - node.labels.hasHDD==true
                
    ports: ['8529:8529']
    depends_on:
      - agency
      
  cluster:
    image: arangodb/arangodb-arm64:3.2
    environment:
      - ARANGO_NO_AUTH=1
    command: arangod --server.authentication=false --server.endpoint tcp://0.0.0.0:8529 --cluster.my-address tcp://cluster{{.Task.Slot}}:8529 --cluster.my-local-info cluster{{.Task.Slot}} --cluster.my-role PRIMARY --cluster.agency-endpoint tcp://agency:8529 --log.file /var/log/arangodb3/arangod.log
    volumes:
      - datas:/var/lib/arangodb3
      - datas:/var/log/arangodb3
    networks:
      - traefik-net
    deploy:
        labels:
            - traefik.docker.network=traefik-net
        placement:
            constraints:
            - node.labels.arch==arm64
            - node.labels.hasHDD==true
            - node.labels.isDBReplicate==true
            
    depends_on:
      - agency
      - coordinator

networks:
    traefik-net:
        external: true
            
volumes:
  datas:
    driver: local
    driver_opts:
        type: volume 
        mountpoint: /mnt/virtual/docker/volumes/arangodb3

Describe the results you received:

This is the error log of the servers:

2017-05-23T22:18:46Z [1] INFO ArangoDB 3.2.devel [linux] 64bit, using VPack 0.1.30, ICU 58.1, V8 5.7.0.0, OpenSSL 1.0.1t  3 May 2016
2017-05-23T22:18:46Z [1] INFO using storage engine mmfiles
2017-05-23T22:18:46Z [1] INFO Starting up with role COORDINATOR
2017-05-23T22:19:46Z [1] INFO {cluster} Fresh start. Persisting new UUID CRDN-8285eaf9-a9ab-47a5-9892-8855733d23a2
2017-05-23T22:19:46Z [1] INFO Waiting for DBservers to show up...
2017-05-23T22:19:46Z [1] INFO Found 2 DBservers.
2017-05-23T22:19:46Z [1] INFO {syscall} file-descriptors (nofiles) hard limit is 1048576, soft limit is 1048576
2017-05-23T22:19:50Z [1] INFO Cluster feature is turned on. Agency version: {"server":"arango","version":"3.2.devel","license":"community"}, Agency endpoints: http+tcp://agency:8529, server id: 'CRDN-8285eaf9-a9ab-47a5-9892-8855733d23a2', internal address: tcp://coordinator:8529, role: COORDINATOR
2017-05-23T22:19:50Z [1] INFO using heartbeat interval value '1000 ms' from agency
2017-05-23T22:19:51Z [1] INFO using endpoint 'http+tcp://0.0.0.0:8529' for non-encrypted requests
2017-05-23T22:19:57Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:19:57Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s5:/_db/_system/_api/collection/s5/count
2017-05-23T22:19:57Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:19:57Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s5:/_db/_system/_api/collection/s5/count
2017-05-23T22:19:58Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:19:58Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s5:/_db/_system/_api/collection/s5/count
2017-05-23T22:20:00Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:20:00Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s5:/_db/_system/_api/collection/s5/count
2017-05-23T22:20:03Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:20:03Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s5:/_db/_system/_api/collection/s5/count
2017-05-23T22:20:10Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:20:10Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s5:/_db/_system/_api/collection/s5/count
2017-05-23T22:20:20Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:20:20Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s5:/_db/_system/_api/collection/s5/count
2017-05-23T22:20:30Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:20:30Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s5:/_db/_system/_api/collection/s5/count
2017-05-23T22:20:41Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:20:41Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s5:/_db/_system/_api/collection/s5/count
2017-05-23T22:20:51Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:20:51Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s5:/_db/_system/_api/collection/s5/count
2017-05-23T22:21:02Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:21:02Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s5:/_db/_system/_api/collection/s5/count
2017-05-23T22:21:12Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:21:12Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s5:/_db/_system/_api/collection/s5/count
2017-05-23T22:21:22Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:21:22Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s5:/_db/_system/_api/collection/s5/count
2017-05-23T22:21:32Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:21:32Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s5:/_db/_system/_api/collection/s5/count
2017-05-23T22:21:42Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:21:42Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s5:/_db/_system/_api/collection/s5/count
2017-05-23T22:21:53Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:21:53Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s5:/_db/_system/_api/collection/s5/count
2017-05-23T22:21:53Z [1] WARNING {queries} slow query: 'FOR doc IN @@collection FILTER doc.`user` == "root" LIMIT 0, 1  RETURN doc', took: 116.288091
2017-05-23T22:21:53Z [1] ERROR In database '_system': Executing task #4 (addDefaultUserSystem: add default root user for system database) failed with exception: ArangoError 1478: could not determine number of documents in collection (while optimizing plan) ArangoError: could not determine number of documents in collection (while optimizing plan)
2017-05-23T22:21:53Z [1] ERROR     at ArangoStatement.execute (/usr/share/arangodb3/js/server/modules/@arangodb/arango-statement.js:81:16)
2017-05-23T22:21:53Z [1] ERROR     at ArangoDatabase._query (/usr/share/arangodb3/js/server/modules/@arangodb/arango-database.js:80:45)
2017-05-23T22:21:53Z [1] ERROR     at SimpleQueryByExample.execute (/usr/share/arangodb3/js/server/modules/@arangodb/simple-query.js:137:42)
2017-05-23T22:21:53Z [1] ERROR     at SimpleQueryByExample.SimpleQuery.toArray (/usr/share/arangodb3/js/common/modules/@arangodb/simple-query-common.js:340:8)
2017-05-23T22:21:53Z [1] ERROR     at ArangoCollection.firstExample (/usr/share/arangodb3/js/server/modules/@arangodb/arango-collection.js:292:71)
2017-05-23T22:21:53Z [1] ERROR     at Object.exports.save (/usr/share/arangodb3/js/server/modules/@arangodb/users.js:136:22)
2017-05-23T22:21:53Z [1] ERROR     at Object.task (/usr/share/arangodb3/js/server/upgrade-database.js:518:21)
2017-05-23T22:21:53Z [1] ERROR     at runTasks (/usr/share/arangodb3/js/server/upgrade-database.js:274:27)
2017-05-23T22:21:53Z [1] ERROR     at upgradeDatabase (/usr/share/arangodb3/js/server/upgrade-database.js:346:16)
2017-05-23T22:21:53Z [1] ERROR     at upgrade (/usr/share/arangodb3/js/server/upgrade-database.js:787:12)
2017-05-23T22:21:53Z [1] ERROR In database '_system': Executing task #4 (addDefaultUserSystem: add default root user for system database) failed. Aborting init procedure.
2017-05-23T22:21:53Z [1] ERROR In database '_system': Please fix the problem and try starting the server again.
2017-05-23T22:21:53Z [1] ERROR upgrade-database.js for cluster script failed!
2017-05-23T22:21:56Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:21:56Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s8:/_db/_system/_api/collection/s8/count
2017-05-23T22:21:56Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:21:56Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s8:/_db/_system/_api/collection/s8/count
2017-05-23T22:21:57Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:21:57Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s8:/_db/_system/_api/collection/s8/count
2017-05-23T22:21:59Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:21:59Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s8:/_db/_system/_api/collection/s8/count
2017-05-23T22:22:02Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:22:02Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s8:/_db/_system/_api/collection/s8/count
2017-05-23T22:22:10Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:22:10Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s8:/_db/_system/_api/collection/s8/count
2017-05-23T22:22:20Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:22:20Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s8:/_db/_system/_api/collection/s8/count
2017-05-23T22:22:31Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:22:31Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s8:/_db/_system/_api/collection/s8/count
2017-05-23T22:22:41Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:22:41Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s8:/_db/_system/_api/collection/s8/count
2017-05-23T22:22:51Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:22:51Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s8:/_db/_system/_api/collection/s8/count
2017-05-23T22:23:01Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:23:01Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s8:/_db/_system/_api/collection/s8/count
2017-05-23T22:23:12Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:23:12Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s8:/_db/_system/_api/collection/s8/count
2017-05-23T22:23:22Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:23:22Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s8:/_db/_system/_api/collection/s8/count
2017-05-23T22:23:32Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:23:32Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s8:/_db/_system/_api/collection/s8/count
2017-05-23T22:23:43Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:23:43Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s8:/_db/_system/_api/collection/s8/count
2017-05-23T22:23:53Z [1] ERROR {cluster} cannot create connection to server 'PRMR-91196b2a-0adb-42d5-95b6-5e96d3df32f4' at endpoint 'tcp://cluster{{.Task.Slot}}:8529'
2017-05-23T22:23:53Z [1] ERROR {cluster} ClusterComm::performRequests: got BACKEND_UNAVAILABLE or TIMEOUT from shard:s8:/_db/_system/_api/collection/s8/count
2017-05-23T22:23:53Z [1] WARNING {queries} slow query: 'FOR doc IN @@collection  RETURN doc', took: 117.358452
2017-05-23T22:23:53Z [1] ERROR ArangoError: could not determine number of documents in collection (while optimizing plan)
2017-05-23T22:23:53Z [1] ERROR     at ArangoStatement.execute (/usr/share/arangodb3/js/server/modules/@arangodb/arango-statement.js:81:16)
2017-05-23T22:23:53Z [1] ERROR     at ArangoDatabase._query (/usr/share/arangodb3/js/server/modules/@arangodb/arango-database.js:80:45)
2017-05-23T22:23:53Z [1] ERROR     at SimpleQueryAll.execute (/usr/share/arangodb3/js/server/modules/@arangodb/simple-query.js:96:42)
2017-05-23T22:23:53Z [1] ERROR     at SimpleQueryAll.SimpleQuery.hasNext (/usr/share/arangodb3/js/common/modules/@arangodb/simple-query-common.js:388:8)
2017-05-23T22:23:53Z [1] ERROR     at refillCaches (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/manager.js:266:17)
2017-05-23T22:23:53Z [1] ERROR     at Object.initializeFoxx (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/manager.js:1493:3)
2017-05-23T22:23:53Z [1] ERROR     at Object.foxxes (/usr/share/arangodb3/js/server/bootstrap/foxxes.js:64:47)
2017-05-23T22:23:53Z [1] ERROR     at server/bootstrap/cluster-bootstrap.js:57:54
2017-05-23T22:23:53Z [1] ERROR     at server/bootstrap/cluster-bootstrap.js:61:2
2017-05-23T22:23:54Z [1] ERROR JavaScript exception in file '/usr/share/arangodb3/js/server/modules/@arangodb/foxx/queues/index.js' at 108,7: TypeError: Cannot read property 'save' of undefined
2017-05-23T22:23:54Z [1] ERROR !      throw err;
2017-05-23T22:23:54Z [1] ERROR !      ^
2017-05-23T22:23:54Z [1] FATAL {v8} error during execution of JavaScript file 'server/bootstrap/coordinator.js'
2017-05-23T22:26:32Z [1] INFO ArangoDB 3.2.devel [linux] 64bit, using VPack 0.1.30, ICU 58.1, V8 5.7.0.0, OpenSSL 1.0.1t  3 May 2016
2017-05-23T22:26:32Z [1] INFO using storage engine mmfiles
2017-05-23T22:26:32Z [1] INFO Starting up with role COORDINATOR

Describe the results you expected:

Additional information you deem important (e.g. issue happens only occasionally):

Output of docker version:
Manager host

Client:
 Version:      17.05.0-ce
 API version:  1.29
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:28:23 2017
 OS/Arch:      linux/arm

Server:
 Version:      17.05.0-ce
 API version:  1.29 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:28:23 2017
 OS/Arch:      linux/arm
 Experimental: false

Output of docker info:
Manager host

Containers: 4
 Running: 4
 Paused: 0
 Stopped: 0
Images: 379
Server Version: 17.05.0-ce
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 520
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: active
 NodeID: x81ofqv53z0n66vvjrrs8i38s
 Is Manager: true
 ClusterID: vg6cba4lt2zpdhywscrdyevex
 Managers: 1
 Nodes: 12
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 192.168.1.3
 Manager Addresses:
  192.168.1.3:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9048e5e50717ea4497b757314bad98ea3763c145
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228
init version: 949e6fa
Security Options:
 apparmor
Kernel Version: 3.10.104
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: armv7l
CPUs: 4
Total Memory: 940.9MiB
Name: bambuserver1
ID: 7GHE:CHRG:TDC4:UOTO:3JWM:2ZYU:CHBN:AMIE:W45Y:I5G7:AMSK:ETMY
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Additional environment details (AWS, VirtualBox, physical, etc.):

al-sabr changed the title from "template variables not parsed in docker-compose.yml with docker swarm on docker v17-05ce" to "template variables not parsed in docker-compose.yml with docker swarm on docker v17.05.0-ce" on May 24, 2017
thaJeztah (Member) commented May 24, 2017

This is expected; not all options accept a template. From the section of the documentation you referred to:

Create services using templates

You can use templates for some flags of service create, using the syntax
provided by Go's text/template package.

The supported flags are the following:

  • --hostname
  • --mount
  • --env

So command is currently not among the options that can be templated.
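
For comparison, a template on one of the supported flags does expand per task. A minimal sketch (the service name, replica count, and image here are placeholders, not from the thread):

```shell
# Templates are only expanded for --hostname, --mount, and --env.
# Each task of the service gets a distinct hostname: web-1, web-2, web-3.
docker service create \
  --name web \
  --replicas 3 \
  --hostname "{{.Service.Name}}-{{.Task.Slot}}" \
  nginx:alpine
```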

I'll close this issue, because it's not a bug, but feel free to comment after I closed 👍


al-sabr commented May 24, 2017

I don't agree with closing this ticket...

If template variables are usable with docker service create, they should also be valid inside a docker-compose.yml used with docker stack deploy -c docker-compose.yml, since that is just a wrapper around docker service create.

Reopen this ticket please.

thaJeztah (Member) commented:

@gdeverlant your docker-compose file uses a template in the command. See the linked documentation; it is not supported in docker service create either.


al-sabr commented May 25, 2017

It should be supported, because some services need an automatically incremented number in each node's name. This is a real use case: database clustering systems have to give each node a stable, predictable name.


al-sabr commented May 28, 2017

Any feedback?

thaJeztah (Member) commented:

As mentioned, it's not currently supported. That said, have you considered using --env to template an environment variable's value, and using that variable to set those options?


al-sabr commented May 30, 2017

I don't know how this works with the command and template variables. Does your solution resolve the template variable dynamically for each node?

Do you have an example I can try?

thaJeztah (Member) commented:

You can template an environment variable and use that, e.g.:

docker service create \
  --env "SERVICE_NAME={{.Service.Name}}" \
  --name helloworld \
  alpine /bin/sh -c 'echo $SERVICE_NAME'
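
Note the single quotes around the command: they stop the local shell from expanding $SERVICE_NAME at create time, so the shell inside the container expands it at runtime, after Swarm has substituted the --env template. The quoting behavior itself can be checked without Docker:

```shell
# Single quotes defer expansion to the inner shell, which reads the
# environment variable set for it (here, set inline for illustration).
SERVICE_NAME=helloworld /bin/sh -c 'echo $SERVICE_NAME'
# prints: helloworld
```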

Or, in your case, something like this will probably work (untested):

docker service create \
  --name=coordinator \
  --env "SERVICE_NAME={{.Service.Name}}" \
  arangodb/arangodb /bin/sh -c 'arangod --server.authentication=false --server.endpoint tcp://0.0.0.0:8529 --cluster.my-address tcp://${SERVICE_NAME}:8529 --cluster.my-local-info ${SERVICE_NAME} --cluster.my-role COORDINATOR --cluster.agency-endpoint tcp://agency:8529 --log.file /var/log/arangodb3/arangod.log'
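
Translated back into a docker-compose.yml for docker stack deploy, the same workaround would look roughly like this (an untested sketch based on the suggestion above; note the $$ — Compose interpolates single-$ variables itself at deploy time, so $$ is needed to pass a literal $SERVICE_NAME through to the container's shell):

```yaml
  coordinator:
    image: arangodb/arangodb-arm64:3.2
    environment:
      # Swarm expands the template when it creates each task.
      - SERVICE_NAME={{.Service.Name}}
    # $$ escapes Compose's own variable interpolation; the container's
    # shell expands $SERVICE_NAME at runtime.
    command: /bin/sh -c 'arangod --server.authentication=false --server.endpoint tcp://0.0.0.0:8529 --cluster.my-address tcp://$${SERVICE_NAME}:8529 --cluster.my-local-info $${SERVICE_NAME} --cluster.my-role COORDINATOR --cluster.agency-endpoint tcp://agency:8529 --log.file /var/log/arangodb3/arangod.log'
```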
