
Error: no servers are inside upstream in #438

Closed · mrvini opened this issue May 2, 2016 · 91 comments

mrvini commented May 2, 2016

I updated my proxy image today and tried to restart all my other containers behind the proxy, but all of them failed. Am I doing something wrong? (I did follow the explanation in issue #64, but that didn't help.)

The proxy:

docker run -d --name nginx-proxy \
    -p 80:80 -p 443:443 \
    --restart=always \
    -v /opt/my-certs:/etc/nginx/certs \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy

My dev container (Node.js) is built locally and exposes port 8181:

docker run -d --name www.dev1 \
    --restart=always \
    --link db --link redis \
    -e VIRTUAL_PORT=8181 \
    -e VIRTUAL_PROTO=https \
    -e VIRTUAL_HOST=dev1.mysite.com \
    -v /opt/my-volume/web/dev1/:/opt/my-volume/web/ \
    -v /opt/my-certs:/opt/my-certs:ro \
    -w /opt/my-volume/web/ localhost:5000/www \
    bash -c 'npm start server.js'

Right before I run the dev container, this is the output of nginx -t:

root@fba41f832f35:/app# nginx -t  
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

After I start the dev container, I see the following:

root@fba41f832f35:/app# nginx -t        
2016/05/02 07:15:49 [emerg] 69#69: no servers are inside upstream in /etc/nginx/conf.d/default.conf:34
nginx: [emerg] no servers are inside upstream in /etc/nginx/conf.d/default.conf:34
nginx: configuration file /etc/nginx/nginx.conf test failed

When I check /etc/nginx/conf.d/default.conf, I see an empty upstream:

upstream dev1.mysite.com {
}

Is there anything I am doing wrong? I've been using the same startup script for a good six months, and it worked right up until I pulled the new image. Did anything change? Please help.

dehy commented May 2, 2016

Same problem here.
Docker 1.9.1cs2 on Docker Cloud.

dehy commented May 2, 2016

I had to revert back to a72c7e6

wader commented May 2, 2016

@mrvini can you paste the output of docker network inspect $(docker network ls -q)?

klaszlo commented May 2, 2016

Same error here.
I was using version 0.4.2 of the nginx-docker container, and now I have updated to version 0.7.0.
docker logs shows:

dockergen.1 | 2016/05/02 15:40:51 Generated '/etc/nginx/conf.d/default.conf' from 4 containers
dockergen.1 | 2016/05/02 15:40:51 Running 'nginx -s reload'
dockergen.1 | 2016/05/02 15:40:51 Error running notify command: nginx -s reload, exit status 1
dockergen.1 | 2016/05/02 15:40:51 Watching docker events
dockergen.1 | 2016/05/02 15:40:52 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'

Sorry, the 'docker network' command is not available here (Ubuntu 15.10).

klaszlo commented May 2, 2016

I have two docker inspect outputs; looking at the diff, the only suspicious differences are:

Working version (0.4.2, 3 months ago):

"Config": { ...
...
  "Volumes": {
        "/etc/nginx/certs": {},
        "/var/cache/nginx": {}
    },

Non-working version (current, 0.7.0):

"Config": { ...
...
  "Volumes": {
        "/etc/nginx/certs": {}
    },

Working version:

"Volumes": {
    "/etc/nginx/certs": "/var/lib/docker/vfs/dir/d5235bb01d9facc2c58441bed36f9736da1a4bf5e78f3d2d2ff71bef017c6e82",
    "/tmp/docker.sock": "/run/docker.sock",
    "/var/cache/nginx": "/var/lib/docker/vfs/dir/1789a01ddd62eed650a95c874d1d8e504f1455df08e267ebacf5eb36bb293d7b"
},
"VolumesRW": {
    "/etc/nginx/certs": true,
    "/tmp/docker.sock": true,
    "/var/cache/nginx": true
}

Non-working version (current):

"Volumes": {
    "/etc/nginx/certs": "/var/lib/docker/vfs/dir/07c10059eb20dc6249075c976571d075bc7ac123dd9dec07a8f8651e8c884b39",
    "/tmp/docker.sock": "/run/docker.sock"
},
"VolumesRW": {
    "/etc/nginx/certs": true,
    "/tmp/docker.sock": true
}

klaszlo commented May 2, 2016

I exported both images (docker save -o working.tar).
If needed, I can put them somewhere for further inspection.
The 0.4.2 version is 185 MB; the 0.7.0 version is 252 MB.

Update: version 0.4.2 works like a charm. (I scp'd it from the old server to the new server.)

sudo docker run --restart=always \
-d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy:0.4.2

(The only other difference is the docker.sock:ro mount, which differs between the new readme and the old readme.)

wader commented May 2, 2016

@klaszlo hmm, not sure I follow. 0.4.2 and 0.7.0 are versions of what? The latest for nginx-proxy seems to be 0.3.0.

mrvini (Author) commented May 2, 2016

@wader, thanks for your comment; it made me look a little deeper, and that should probably be the first thing anyone looks at.

My Docker version was 1.7.1, which is old. After upgrading to version 1.11.1, everything works as it should.

As always, thanks for a good product and support. If @klaszlo doesn't need any further help, please close this.

klaszlo commented May 2, 2016

@wader Sorry for the noise, I misread the version string when inspecting the image file
(sudo docker inspect IMAGEID).

I have no idea what the exact version of my older nginx-proxy Docker image is; I only know that I first launched it locally (and therefore pulled it from Docker Hub) on "2016-01-26T22:39:17.462882618Z".

I'm on Ubuntu 15.10, which ships with Docker 1.6.2:
http://packages.ubuntu.com/wily/docker.io

Are you suggesting that only Ubuntu 16.04+ is supported?

wader commented May 2, 2016

@klaszlo sorry, I don't know, but reading the comments on #337 and what @mrvini says, it seems Docker 1.10+ might be needed.

sherter commented May 2, 2016

The image sha256:c378d9d861c5fa2addf293a20e47318fbea8a7d621afadaa0328c434202a7b3e is broken for me, too (Error running notify command: nginx -s reload, exit status 1). The one before that (sha256:d72335ddd6913d5914bebed12b5cf807194416be293f1b732d6ad668691e93b8) works fine. You can run images by digest like this:
jwilder/nginx-proxy@sha256:d72335ddd6913d5914bebed12b5cf807194416be293f1b732d6ad668691e93b8

$ docker --version
Docker version 1.10.3, build 8acee1b

benzht commented May 3, 2016

I am using the two-container solution with docker-gen and have the same problem on all my machines.

wader commented May 3, 2016

@benzht hi, which versions of Docker are you running?

benzht commented May 3, 2016

Sorry, forgot the details:

  • docker versions 1.10.3 and 1.11.0
  • nginx:latest, docker-gen:latest, 'nginx.tmpl':latest
  • no user-defined networks used (because they have not worked so far)
  • containers started with: ...
    -e VIRTUAL_HOST=aaa.bbb.cc,xxx.bbb.ccc
    -e VIRTUAL_PORT=8080
    -e VIRTUAL_PROTO=http
    ...

wader commented May 3, 2016

Thanks, anything interesting in docker network inspect $(docker network ls -q)?

benzht commented May 3, 2016

machine1 and machine2 have the reverse-proxy environment variables set:

[
    {
        "Name": "bridge",
        "Id": "12562cb7079b3b4061e12545ac7f795a2f8954f7f40a16c1525d77be890de2cf",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "2802bd663583505b77370c9088403f2bfee45991a62d1258dfd835659ac5b857": {
                "Name": "machine1",
                "EndpointID": "1f726b852ff0d3e077877697f9162ec48557de77373804f052f80171dde12562",
                "MacAddress": "02:42:ac:11:00:06",
                "IPv4Address": "172.17.0.6/16",
                "IPv6Address": ""
            },
            "633c10d347a82f5d1f0f8af0ab15fa48913735b6f05f307c37c0a7a473214e1a": {
                "Name": "machine2",
                "EndpointID": "c8ae9dbc77862d79f4217755892f96dc294896f4663b0f00fa82a8367c7f9263",
                "MacAddress": "02:42:ac:11:00:05",
                "IPv4Address": "172.17.0.5/16",
                "IPv6Address": ""
            },
            "ab875c71a13a435c9e152c5464dcb567475057405dc9ab6e5c9941d57d854b56": {
                "Name": "pg",
                "EndpointID": "1303f351679a69ab05c9bc9947f28f165d01a1841545b1416a914b4f2e4266a8",
                "MacAddress": "02:42:ac:11:00:04",
                "IPv4Address": "172.17.0.4/16",
                "IPv6Address": ""
            },
            "bcb350d31152ed4cae3ae50226c38650f2b47d91f709664d0e05e36d7e8abe6c": {
                "Name": "nginx",
                "EndpointID": "f0ac2c422780598843209dffdb0b89d7c7ae7baa821d0547bba7b4acc5605773",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    },
    {
        "Name": "host",
        "Id": "2da3ced6d4504489e820f0fb5353cd01adfd7000804a687dba6c1424bd5c17c4",
        "Scope": "local",
        "Driver": "host",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    },
    {
        "Name": "none",
        "Id": "069bd7fedaf4ce732eb5e9ac645d995befd24aa4aa91828116c49555ad3ea9a5",
        "Scope": "local",
        "Driver": "null",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

wader commented May 3, 2016

Weird, I run a setup with nginx-proxy and some containers on the same bridge network and it works. The only difference I can see is that you have "Internal": false.
I'm using:
Docker version 1.10.3, build 20f81dd
Latest nginx-proxy

No port expose changes? Try the exact same docker-gen version (0.7.0, I think). Does the single-container version work?

benzht commented May 3, 2016

I'm not running nginx-proxy itself (and right now I cannot test whether it would work), but I am using a vanilla nginx with docker-gen and the template from nginx-proxy, as described in the documentation. Later this afternoon I will be able to test a vanilla nginx-proxy.

Bre77 commented May 3, 2016

I had this problem with nginx-proxy, so I tried setting up vanilla nginx and docker-gen and had the same problem again. Reverting to an older template (commit 97c6340) made it work again.

kalbasit commented May 3, 2016

It's also not working for me. I've uploaded all of my .service files and all the info you need on this gist.

rparree commented May 4, 2016

Just to confirm: 0.3.0 does not work for me either. Reverting to 0.2.0 works. Same problems (upstream servers not registered, error on refresh).

  • Docker version 1.9.1, build ee06d03/1.9.1
  • Linux hprp 4.4.8-300.fc23.x86_64
  • Fedora 23 (Workstation Edition)

Docker Networking:

[
    {
        "Name": "host",
        "Id": "bb960310aaa58c288cfe385a11588507f595da69064d05392c5a572d6eac085b",
        "Scope": "local",
        "Driver": "host",
        "IPAM": {
            "Driver": "default",
            "Config": []
        },
        "Containers": {},
        "Options": {}
    },
    {
        "Name": "bridge",
        "Id": "de075602f22cfd005a9c336b12eb5bd1425d2cc47c20cf51d2e0e5242e6925ce",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        }
    },
    {
        "Name": "none",
        "Id": "f93eea65f260c3877e4fc3f926bc9eaecab6862c71a37ca9060f495ada7ee29a",
        "Scope": "local",
        "Driver": "null",
        "IPAM": {
            "Driver": "default",
            "Config": []
        },
        "Containers": {},
        "Options": {}
    }
]

malfario commented May 4, 2016

Same issue here on CoreOS stable (899.17.0). Had to revert from 0.3 -> 0.2 because of empty upstream entries:

upstream www.xxxx.net {
}
server {
    server_name www.xxxx.net;
    listen 80 ;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://www.xxxx.net;
    }
}

Docker version info:

Client:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.3
 Git commit:   9894698
 Built:
 OS/Arch:      linux/amd64

Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.3
 Git commit:   9894698
 Built:
 OS/Arch:      linux/amd64

Docker networking info:

[
    {
        "Name": "bridge",
        "Id": "7d2d9fe9c3a113f5460e1a4f3cf55e228c18420eae3b06913d8698db3bbee30a",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Containers": {
            "0608f72051f2625fff2051458aede50c2bb363004e36d504f40cea0282714176": {
                "EndpointID": "a1906d65db9d1028914188a48be0d4e32371cea7da814a2c1f363eab3d6b1a00",
                "MacAddress": "02:42:ac:11:00:0b",
                "IPv4Address": "172.17.0.11/16",
                "IPv6Address": ""
            },
            "13f0d19ef88321cdb447ae9efecc23232c28558dc2bfcceae05813fb7262d3e8": {
                "EndpointID": "b44764903a59eed598a59aebe9f13e73aa5e78569c7e563bdcfb22956a3fe934",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "16ee990232a08a359b004f96d92dd837bfd9eedc2408613a839b204015e6091b": {
                "EndpointID": "0d58e894448b9a27884d2d5b15012233aa8fc1956cf47e9c81cc6b37bcbd11b0",
                "MacAddress": "02:42:ac:11:00:0c",
                "IPv4Address": "172.17.0.12/16",
                "IPv6Address": ""
            },
            "32c16a8b0336d26a6cb14a3fb65dd545e08a9fc28cfd1fa61afe1a716a640b11": {
                "EndpointID": "818afd4a0c8d09bc7cc632173481d981d06c718b80c0753ccafb2361048c3791",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "494826356d693a2fa2b6fdaed6a6507b2271dfb721356c341910a8e63d676380": {
                "EndpointID": "24cb57366e2a86b79e00f59ba7ddcdcafa4fc70b21a057c5f999c9798d3a5227",
                "MacAddress": "02:42:ac:11:00:08",
                "IPv4Address": "172.17.0.8/16",
                "IPv6Address": ""
            },
            "6390314b4c9841bc25e0c3937b0d5795b59aa37652fc415813f17f512653c2d0": {
                "EndpointID": "6711a1c4755d91706aac172d3e73fb799bf05afa007ccb7e81c9228f986e36bb",
                "MacAddress": "02:42:ac:11:00:04",
                "IPv4Address": "172.17.0.4/16",
                "IPv6Address": ""
            },
            "6fbb225dc86637599946f41cbdad6130e0019e3dffe2a711586012b496098552": {
                "EndpointID": "b91532b65a946e5548ba9d4c94d32fc4539376e997cebd7b10826b97d210a50f",
                "MacAddress": "02:42:ac:11:00:07",
                "IPv4Address": "172.17.0.7/16",
                "IPv6Address": ""
            },
            "77e1ca97d4fb3b0e1c88a5adbc21e652848bf94e540a0f23157c382114a4246a": {
                "EndpointID": "178d5d9da05a37fa33371d8476981674b112abf939d172b357f6b045b1d1c4ca",
                "MacAddress": "02:42:ac:11:00:06",
                "IPv4Address": "172.17.0.6/16",
                "IPv6Address": ""
            },
            "7c6610b7057f6e0740e33409a3f0927b954b8cec29ac045565ffabf2d126a862": {
                "EndpointID": "e89ff27a9c17656f742a24fdeafc13ad4211cb3d999ba3c9d36c28264e855b0d",
                "MacAddress": "02:42:ac:11:00:09",
                "IPv4Address": "172.17.0.9/16",
                "IPv6Address": ""
            },
            "8a3f0be58a33613d5cd451830cb1d67909cea9d507a7026bbe8412558c78e10a": {
                "EndpointID": "22d51e699719c200ab5b6f218aa0244f6530ff8023c11c7ab2a4d621d42e8e71",
                "MacAddress": "02:42:ac:11:00:0a",
                "IPv4Address": "172.17.0.10/16",
                "IPv6Address": ""
            },
            "9704851a228de372e2b214c53160cd4d7d0227ec39fd8e24c0282388073dd769": {
                "EndpointID": "52f5039611d43a6a81a4498ae1bf989fe6fe56b9dcd7e1d4a96980818fa5850e",
                "MacAddress": "02:42:ac:11:00:0d",
                "IPv4Address": "172.17.0.13/16",
                "IPv6Address": ""
            },
            "b012433aa5872343955aba58fe7a401cbec9de5db39cb63c77a3fa80d03d786b": {
                "EndpointID": "a8bee2ee0f03ad11f1d3ceab066b2c8954cd7ef8f479d04a4f8886d5d1c00d9b",
                "MacAddress": "02:42:ac:11:00:0e",
                "IPv4Address": "172.17.0.14/16",
                "IPv6Address": ""
            },
            "c34a8c138f62ea0c673ef6978aa81b208002ca2ff435923673b097932a46375f": {
                "EndpointID": "072d897d91604623cce3ab445378b48001517fd52d4bc8ea94d1ac015b95fb1b",
                "MacAddress": "02:42:ac:11:00:05",
                "IPv4Address": "172.17.0.5/16",
                "IPv6Address": ""
            },
            "ed532e0529419539a441f7532c816a50a22c4b5498225e235492120a11898414": {
                "EndpointID": "fe35242022cdf93489804080984c9bba3bf46a488e5e0c213be8c6de5485ee13",
                "MacAddress": "02:42:ac:11:00:0f",
                "IPv4Address": "172.17.0.15/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        }
    },
    {
        "Name": "none",
        "Id": "50b976a565ecd34e7df5ee4654a78df43624911f287fa76d2fc89d6b2963daa4",
        "Scope": "local",
        "Driver": "null",
        "IPAM": {
            "Driver": "default",
            "Config": []
        },
        "Containers": {},
        "Options": {}
    },
    {
        "Name": "host",
        "Id": "0436162c23c5f2ddc73fdac5d453982a7ece8bd0161cf97d3c2b40b8eaf53717",
        "Scope": "local",
        "Driver": "host",
        "IPAM": {
            "Driver": "default",
            "Config": []
        },
        "Containers": {},
        "Options": {}
    }
]

ginkel commented May 4, 2016

Just an (untested) hypothesis: Could it be that only containers attached to the default bridge are affected?

wader commented May 4, 2016

I tried to reproduce this, but no luck. However, I ended up with a command to dump the template context that might be useful. Remember to clear out secrets if you're going to post the output!

docker exec <nginx-proxy-container-id> bash -c "docker-gen <(echo '{{json (dict \".\" $ \"Env\" .Env \"Docker\" .Docker)}}')"

Pipe it through jq . for nicer output.
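
To pull out just the field the template uses to identify its own container, the same dump can be piped into jq (a rough sketch, assuming jq is installed on the host; <nginx-proxy-container-id> is the proxy container as above):

docker exec <nginx-proxy-container-id> bash -c \
  "docker-gen <(echo '{{json (dict \".\" $ \"Env\" .Env \"Docker\" .Docker)}}')" \
  | jq '.Docker.CurrentContainerID'
# should print the proxy's own container ID; null here means the template
# cannot find the proxy container and the generated upstreams will be empty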

ginkel commented May 4, 2016

I did some debugging. $CurrentContainer is undefined in nginx.tmpl.

wader commented May 4, 2016

@ginkel can you dump the context and also check how /proc/self/cgroup looks? That's what docker-gen uses for CurrentContainerID.

wader commented May 4, 2016

I wonder if nginx-proxy/docker-gen#186 could be the cause of some of these problems

ginkel commented May 4, 2016

$ cat /proc/self/cgroup                                                       
9:perf_event:/docker/0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37
8:blkio:/docker/0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37
7:net_cls,net_prio:/docker/0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37
6:freezer:/docker/0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37
5:devices:/docker/0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37
4:memory:/docker/0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37
3:cpu,cpuacct:/docker/0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37
2:cpuset:/docker/0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37
1:name=systemd:/docker/0cda9a6198b82c0d2951f584485a385c990a8101f1dada02baefcf7abd96cb37

I can dump the context, but it contains loads of secrets. What do you need to know?

wader commented May 4, 2016

@ginkel Ah, yeah. I guess the useful stuff would be .Docker.CurrentContainerID and the container IDs. Does one of them match up? If not, what does the container that should match up look like?

@schmunk42
Contributor

I added some info on why empty upstreams can occur: #565 (comment)

vladkras commented May 16, 2017

TL;DR
I had the same problem (no servers are inside upstream in /etc/nginx/conf.d/default.conf).
It was fixed by restarting the nginx and php containers and then running nginx -s reload inside the container (not sure it didn't reload by itself), so now the upstream is not empty anymore:

upstream example.com {
                                ## Can be connect with "your_network" network
                        # your_nginx_1
                        server 172.20.0.2:80;
}
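
For reference, that sequence looks roughly like this in commands (your_nginx_1 is the backend name from the generated config above; the other names are placeholders):

docker restart your_nginx_1 your_php_1         # restart the containers behind the proxy
docker exec <proxy-container> nginx -s reload  # then reload nginx inside the proxy container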

Kugelschieber commented Jun 4, 2017

I can confirm that upstream is empty. As a workaround, I mounted conf.d to a volume and edited default.conf manually.

kalbasit commented Jun 4, 2017

@DeKugelschieber use https://raw.githubusercontent.com/jwilder/nginx-proxy/a72c7e6e20df3738ca365bf6c14598f6a8017500/nginx.tmpl instead of the one on master and you'll be fine. Below is my nginx-gen.service if it helps

[Unit]
Description=Automatically generate nginx configuration for serving docker containers
Requires=docker.service nginx.service
After=docker.service nginx.service

[Service]
ExecStartPre=/bin/sh -c "rm -f /tmp/nginx.tmpl && curl -Lo /tmp/nginx.tmpl https://raw.githubusercontent.com/jwilder/nginx-proxy/a72c7e6e20df3738ca365bf6c14598f6a8017500/nginx.tmpl"
ExecStartPre=/bin/sh -c "docker inspect nginx-gen >/dev/null 2>&1 && docker rm -f nginx-gen || true"
ExecStartPre=/usr/bin/docker create --name nginx-gen --volumes-from nginx -v /tmp/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/docker-gen -notify-sighup nginx -watch -only-exposed -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
ExecStart=/usr/bin/docker start -a nginx-gen
ExecStop=-/usr/bin/docker stop nginx-gen
ExecStopPost=/usr/bin/docker rm -f nginx-gen
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target

@Johannestegner

I had this issue and tried all the tips and tricks everywhere.
What fixed it for me was adding listen 443; to my nginx server block; without this, my config file didn't get any IP registered for the container.
Not sure if this provides any help, but I thought I'd post a comment about it.

@Pimmetje

Just to add my two cents: I modified the template like this:

server {{ $container.IP }}:{{ $container.Env.VIRTUAL_PORT }};

https://github.com/jwilder/nginx-proxy/blob/6bdd184d6abaebfbb6f1d28593a897a96ee020c4/nginx.tmpl#L118

At the moment it requires me to also set VIRTUAL_PORT, but it does work. The full block where I added the line is shown below. I did not try to make it any nicer, but if someone knows a permanent fix, I am all ears.

For now this works as a workaround.

upstream {{ $upstream_name }} {

{{ range $container := $containers }}
        {{ $addrLen := len $container.Addresses }}

        server {{ $container.IP }}:{{ $container.Env.VIRTUAL_PORT }};

        {{ range $knownNetwork := $CurrentContainer.Networks }}
                {{ range $containerNetwork := $container.Networks }}
                        {{ if or (eq $knownNetwork.Name $containerNetwork.Name) (eq $knownNetwork.Name "host") }}
                                ## Can be connect with "{{ $containerNetwork.Name }}" network

                                {{/* If only 1 port exposed, use that */}}
                                {{ if eq $addrLen 1 }}
                                        {{ $address := index $container.Addresses 0 }}
                                        {{ template "upstream" (dict "Container" $container "Address" $address "Network" $containerNetwork) }}
                                {{/* If more than one port exposed, use the one matching VIRTUAL_PORT env var, falling back to standard web port 80 */}}
                                {{ else }}
                                        {{ $port := coalesce $container.Env.VIRTUAL_PORT "80" }}
                                        {{ $address := where $container.Addresses "Port" $port | first }}
                                        {{ template "upstream" (dict "Container" $container "Address" $address "Network" $containerNetwork) }}
                                {{ end }}
                        {{ end }}
                {{ end }}
        {{ end }}
{{ end }}
}

tht commented Aug 9, 2017

I've just hit the same issue when trying to add a backend which only publishes port 443. It ends up with an empty upstream block.

I've used the following workaround:

  • Create a new network (type: bridge)
  • Add the reverse proxy and the backend to this new network
  • Restart the backend (this generates a new configuration file for nginx)

As long as the reverse proxy and the backends share the same network, it seems to work perfectly fine.
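
In plain Docker commands, the workaround above looks roughly like this (network and container names are placeholders):

docker network create --driver bridge proxy-net   # 1. create a new bridge network
docker network connect proxy-net nginx-proxy      # 2. attach the reverse proxy to it
docker network connect proxy-net my-backend       #    ... and the backend as well
docker restart my-backend                         # 3. restart the backend so the nginx config is regenerated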

@revolunet

Same as #479?

It looks like as soon as some container with VIRTUAL_HOST isn't on the same network, it breaks the nginx config and the container?

@vladkras

@revolunet yes, you are right; in all my later cases the proxy container and the nginx container were in different networks. Connecting one to the other's network
docker network connect container_1_network container_2
and then restarting both of them helps. But I'm still not sure whether I have to add nginx to the proxy network or vice versa (as the docs suggest). Both solutions work and fail sometimes.

@schmunk42
Contributor

"Both solutions work and fail sometime."

For those running this in a swarm, make sure to check whether you are still receiving Docker events. If you are not seeing any events, nginx will not restart when containers are created or removed.

william-oicr commented Dec 21, 2017

I had this error as well. This was my docker-compose.yml:

version: '3'
services:
  proxy:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
     - "80:80"
     - "443:443"
    volumes:
     - /var/run/docker.sock:/tmp/docker.sock
     - ./certs:/etc/nginx/certs:ro
    image: proxy:docker
    container_name: proxy
networks:
  default:
    external:
      name: nginx-proxy

I fixed it by adding - ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl to volumes

@mnabialek

I have the same problem on macOS; I've never seen it on Windows 10.

@closedLoop

I had the same issue as above. The docker-compose files in https://blog.ssdnodes.com/blog/tutorial-using-docker-and-nginx-to-host-multiple-websites/ fixed it for me. It appears the problem was that I was defining the environment variables in the wrong format.

pbreah commented Jan 5, 2018

Experiencing this same issue on AWS ECS.

Has anyone fixed this on ECS?

ecs-cli compose service up (the client that brings up the services) doesn't support "networks" and skips them in the docker-compose.yml file. Any solutions for ECS?

By default it uses a bridge network, but it still gets the empty upstream.

@shikasta-net

I have been experiencing the "empty upstream" issue for a long time and spent the last week doing some extensive debugging. In my case the entire problem stems from {{ $CurrentContainer := where $ "ID" .Docker.CurrentContainerID | first }} being empty, exactly as was discussed about halfway up this issue. What's different is that mine sometimes works.

My containers are all started via systemd-docker. Most of the time at startup, nginx-proxy has no concept of its own container and the upstream blocks are empty. Occasionally the stars align, it starts knowing about its container, and everything works. I thought the issue was a service dependency on Docker, the network, or something else that nginx-proxy needed running before the service started, but I have found that if I systemctl stop docker.proxy.service and systemctl start docker.www.service, nginx-proxy has maybe a 20% chance of not knowing about its container. Hopefully someone can direct me to a way to further diagnose what is occasionally preventing the container from detecting itself during creation and thereby help fix this ongoing issue.

Below are the relevant systemd units. I'm running Docker 1.13.1, systemd 229, nginx-proxy:latest (b0bb7ac158f6), letsencrypt-nginx-proxy-companion:latest (7d559ca951b3).

docker.proxy.service

[Unit]
Description=Proxying Container
After=zfs.target docker.service network-online.target
Before=docker.letsencrypt.service
Requires=zfs.target docker.service network-online.target docker.letsencrypt.service

[Service]
TimeoutStartSec=0
Restart=always
RestartSec=10s
Type=notify
NotifyAccess=all
ExecStart=/usr/bin/systemd-docker run --rm --name %n \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  -v /log/proxy:/var/log/nginx \
  -v /proxy/conf/proxy.conf:/etc/nginx/conf.d/proxy.conf:ro \
  -v /proxy/certs:/etc/nginx/certs:ro \
  -v /proxy/vhost.d:/etc/nginx/vhost.d \
  -v /proxy/html:/usr/share/nginx/html \
  -p 443:443 \
  -p 80:80 \
  jwilder/nginx-proxy

[Install]
WantedBy=multi-user.target

docker.letsencrypt.service

[Unit]
Description=Automatic SSL certification Container
After=zfs.target docker.proxy.service
Requires=zfs.target docker.proxy.service

[Service]
TimeoutStartSec=0
Restart=always
RestartSec=10s
Type=notify
NotifyAccess=all
ExecStart=/usr/bin/systemd-docker run --rm --name %n \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  --volumes-from docker.proxy.service \
  -v /proxy/certs:/etc/nginx/certs:rw \
  jrcs/letsencrypt-nginx-proxy-companion

[Install]
WantedBy=multi-user.target

docker.www.service

[Unit]
Description=Place holder page Container
After=zfs.target docker.proxy.service docker.letsencrypt.service
Requires=zfs.target docker.proxy.service

[Service]
TimeoutStartSec=0
Restart=always
RestartSec=10s
Type=notify
NotifyAccess=all
ExecStart=/usr/bin/systemd-docker run --rm --name %n \
  -v /log/website:/var/log/nginx \
  -v /website/content:/usr/share/nginx/html:ro \
  -e "VIRTUAL_HOST=www.example.com" \
  -e "LETSENCRYPT_HOST=www.example.com" \
  -e "LETSENCRYPT_EMAIL=me@example.com" \
  nginx

[Install]
WantedBy=multi-user.target

(NB host names, email and paths have been obfuscated in these samples)

@Calder-Ty

I know this is an old issue, but for me to fix this all I had to do was ensure that my application and Nginx were on the same network. (I use two different compose files, one for Nginx/Docker gen/letsencrypt and one for my web-app). I know that is a fairly dumb thing to forget, but I wanted to put it out there for anyone else who might be reading through this.

@ryanalexanderson

...Also in the realm of silly mistakes, I had a boilerplate of environment variables being used in unrelated docker-compose files on the same machine that unnecessarily defined VIRTUAL_HOST on a different network. They interfered with the real VIRTUAL_HOST in the correct network/docker-compose. The "docker network inspect $(docker network ls -q)" command tipped me off.

jrd commented Jul 8, 2018

Thank you, ryanalexanderson! That was my problem: I forgot to put one container on the right network.

wimh commented Oct 19, 2018

I had the same problem; in my case it was related to systemd-docker. I'll explain it in case someone else hits the same thing. (@shikasta-net?)

By default, systemd-docker will move all of the application's cgroups to systemd. This causes /proc/self/cgroup to look something like this:

10:devices:/system.slice/docker-gen-debug.service
9:blkio:/system.slice/docker-gen-debug.service
8:memory:/system.slice/docker-gen-debug.service
7:freezer:/
6:perf_event:/
5:cpuset:/
4:cpu,cpuacct:/system.slice/docker-gen-debug.service
3:pids:/system.slice/docker-gen-debug.service
2:net_cls,net_prio:/
1:name=systemd:/system.slice/docker-gen-debug.service

But docker-gen uses /proc/self/cgroup to find .Docker.CurrentContainerID, which will obviously fail this way. systemd-docker has a command-line option to select which cgroups to move. According to the documentation, at least name=systemd has to be moved. But as long as at least one cgroup is not moved, that cgroup will be left with Docker and will still contain the container ID. This example moves all cgroups except cpuset:

ExecStart=/usr/local/sbin/systemd-docker --cgroups name=systemd \
    --cgroups=net_cls --cgroups=pids --cgroups=cpu --cgroups=perf_event \
    --cgroups=freezer --cgroups=memory --cgroups=blkio --cgroups=devices \
    run -d --name %n \
    ....
    jwilder/docker-gen \
    ....

I use docker-gen, but this would also apply to nginx-proxy.

Note that this problem only occurred when the container was started directly from systemd. After restarting the container with the docker commands, the container ID could be found. Also, when using docker exec to inspect the container, /proc/self/cgroup looked fine, because the exec'd process is a different process.
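
A quick way to check whether the container can actually see its own ID this way (a rough sketch; nginx-proxy stands in for the proxy or docker-gen container name):

docker inspect --format '{{.Id}}' nginx-proxy   # the full container ID as Docker knows it
docker exec nginx-proxy cat /proc/1/cgroup      # the cgroups of the container's main process (PID 1)
# At least one of the PID 1 cgroup lines should end with that same ID; otherwise
# docker-gen cannot resolve .Docker.CurrentContainerID and the generated upstream
# blocks stay empty. PID 1 is checked deliberately, because an exec'd shell's own
# /proc/self/cgroup can look fine even when the main process's cgroups were moved.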

mastef commented Dec 14, 2018

In my case this happened because I started a container with docker-compose up that had VIRTUAL_HOST=xxx defined in its env vars. However, since docker-compose creates a new network, this container wasn't reachable by the jwilder/nginx-proxy container (which was started separately).

The proxy couldn't fetch the IP address for this container, generated an empty upstream for this VIRTUAL_HOST domain, and then failed with the error message no servers are inside upstream.

Shutting down the new container with docker-compose down and restarting the proxy brought everything back to life.
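
An alternative that has worked for others in this thread is to attach the proxy to the network that docker-compose created, instead of tearing the stack down (a rough sketch, with myproject_default, nginx-proxy and myproject_app_1 as placeholder names):

docker network ls                                      # find the network docker-compose created, e.g. myproject_default
docker network connect myproject_default nginx-proxy   # attach the proxy to that network
docker restart myproject_app_1                         # restart the VIRTUAL_HOST container so the config is regenerated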

Xachman commented Apr 25, 2019

I had to use network_mode: "bridge" in my docker-compose.yml to fix this error.

schoblaska commented Dec 26, 2020

I fixed this by stopping the container and removing the volume that contained the config (docker volume rm nginx_conf; use docker volume ls | grep nginx to find the name of the volume on your machine). After that, I was able to start the container normally and everything worked again.

tkw1536 (Collaborator) commented Apr 10, 2022

Solved by recreating containers. If the problem persists please reopen.

tkw1536 closed this as completed Apr 10, 2022