
Error starting containers in 1.7.1 - could not find bridge docker0: no such network interface #14738

Closed
BenHall opened this issue Jul 19, 2015 · 30 comments
Labels: area/networking, kind/bug
Milestone: 1.8.0

@BenHall
Contributor

BenHall commented Jul 19, 2015

Hello,

Since upgrading to 1.7.1 I'm getting two new errors when I launch containers. The containers are being launched via the API; no code changed between the releases, and these errors didn't occur in 1.7.0.

Sometimes the containers launch successfully; other times they fail with the errors below. The docker0 interface does exist.

Any suggestions?

Ben

Failed { [Error: HTTP code is 404 which indicates error: no such container - Cannot start container afd07694493a806fde9b30f01640de9b626777dbdd8a8351081359639caa43f7: adding interface veth014cfbc to bridge docker0 failed: could not find bridge docker0: no such network interface
]
  reason: 'no such container',
  statusCode: 404,
  json: 'Cannot start container afd07694493a806fde9b30f01640de9b626777dbdd8a8351081359639caa43f7: adding interface veth014cfbc to bridge docker0 failed: could not find bridge docker0: no such network interface\n' }
Failed { [Error: HTTP code is 404 which indicates error: no such container - Cannot start container afd07694493a806fde9b30f01640de9b626777dbdd8a8351081359639caa43f7: adding interface veth7fb1fbc to bridge docker0 failed: no such device
]
  reason: 'no such container',
  statusCode: 404,
  json: 'Cannot start container afd07694493a806fde9b30f01640de9b626777dbdd8a8351081359639caa43f7: adding interface veth7fb1fbc to bridge docker0 failed: no such device\n' }

The network settings for the container look like this:

    "NetworkSettings": {
        "Bridge": "",
        "EndpointID": "",
        "Gateway": "",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "HairpinMode": false,
        "IPAddress": "",
        "IPPrefixLen": 0,
        "IPv6Gateway": "",
        "LinkLocalIPv6Address": "",
        "LinkLocalIPv6PrefixLen": 0,
        "MacAddress": "",
        "NetworkID": "",
        "PortMapping": null,
        "Ports": null,
        "SandboxKey": "",
        "SecondaryIPAddresses": null,
        "SecondaryIPv6Addresses": null
    },

Machine information:

# docker info
Containers: 1981
Images: 279
Storage Driver: aufs
 Root Dir: /home/docker/data/aufs
 Backing Filesystem: extfs
 Dirs: 4245
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.19.0-18-generic
Operating System: Ubuntu 15.04
CPUs: 8
Total Memory: 94.41 GiB
Registry: https://index.docker.io/v1/
# docker version
Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 786b29d
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d
OS/Arch (server): linux/amd64
# uname -a
Linux 3.19.0-18-generic #18-Ubuntu SMP Tue May 19 18:31:35 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
@BenHall
Contributor Author

BenHall commented Jul 19, 2015

These are the results of starting a stopped container via the CLI:

# docker start docker-warden
Error response from daemon: Cannot start container docker-warden: adding interface veth0d07a10 to bridge docker0 failed: could not find bridge docker0: no such network interface
Error: failed to start containers: [docker-warden]
# docker start docker-warden
Error response from daemon: Cannot start container docker-warden: adding interface veth744d52c to bridge docker0 failed: no such device
Error: failed to start containers: [docker-warden]
# docker start docker-warden
Error response from daemon: Cannot start container docker-warden: could not set link down for container interface veth2fcc604: no such device
Error: failed to start containers: [docker-warden]
# docker start docker-warden
docker-warden
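
As the transcript above shows, retrying eventually succeeds. Purely as a stopgap while the root cause is investigated (not a fix), a small retry loop can paper over the intermittent failure; the attempt count here is arbitrary:

    # Hypothetical workaround only: retry "docker start" a few times until it succeeds
    for attempt in 1 2 3 4 5; do
        docker start docker-warden && break
        sleep 1
    done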

@ggtools

ggtools commented Jul 20, 2015

Same problem here on two servers

BUG REPORT INFORMATION

Description of problem:

Randomly, Docker cannot start a container with an error message similar to this one:

Cannot start container <containerid>: adding interface veth<xxx> to bridge docker0 failed: could not find bridge docker0: no such network interface

Retrying the start enough times works around the issue until the next occurrence.

docker version:

Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 786b29d
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d
OS/Arch (server): linux/amd64

docker info (server A):

Containers: 38
Images: 2193
Storage Driver: aufs
 Root Dir: /data/docker/daemon/aufs
 Backing Filesystem: extfs
 Dirs: 2269
 Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-57-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 8
Total Memory: 31.26 GiB
Name: sd-80693
ID: OX2K:MA46:ZUW2:BTP6:6OX5:K3HE:43TQ:Q7WI:JHWQ:7D6O:QLJF:YQ5C

docker info (server B):

Containers: 19
Images: 883
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 962
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.16.0-43-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 4
Total Memory: 15.63 GiB
Name: db-002.labouisse.com
ID: CTLK:BOG7:FXHI:J5NQ:K6VZ:OY2X:GTVW:JPXP:WYK7:63NN:Z4LY:FGEH

uname -a (server A):

Linux XXX 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

uname -a (server B):

Linux YYY 3.16.0-43-generic #58~14.04.1-Ubuntu SMP Mon Jun 22 10:21:20 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Environment details (AWS, VirtualBox, physical, etc.):

Physical server in both cases.

How reproducible:

Randomly; it seems to happen more often as the number of running containers grows.

Steps to Reproduce:

  1. Start many containers
  2. Start more containers
  3. By now it should have failed; if not, don't hesitate to repeat step 2 (see the sketch below).
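
A rough sketch of those reproduction steps (the image, names, and count are arbitrary placeholders; the point is simply to launch many containers in quick succession):

    # Launch a batch of containers quickly, then repeat until a start fails
    for i in $(seq 1 50); do
        docker run -d --name "repro-$i" busybox sleep 3600 &
    done
    wait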

Actual Results:

At some point a container cannot be started, failing with the error message: Cannot start container xxx: adding interface vethyyy to bridge docker0 failed: could not find bridge docker0: no such network interface

Expected Results:

The container should start.

Additional info:

This didn't happen on 1.7.0. It also happens on 1.8.0-dev:

> docker version
Client:
 Version:      1.8.0-dev
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   8c7cd78
 Built:        Tue Jul 14 23:47:18 UTC 2015
 OS/Arch:      linux/amd64
 Experimental: true

Server:
 Version:      1.8.0-dev
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   8c7cd78
 Built:        Tue Jul 14 23:47:18 UTC 2015
 OS/Arch:      linux/amd64
 Experimental: true

@mrjana
Contributor

mrjana commented Jul 20, 2015

@BenHall @ggtools I tried the same test on my single-CPU VirtualBox Ubuntu VM on my laptop

Linux dev-1 3.16.0-23-generic #31-Ubuntu SMP Tue Oct 21 17:56:17 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

and on a 2-CPU DigitalOcean Ubuntu VM

Linux ub2cpu 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

and started many containers on both, but could not reproduce this problem. What exact configuration (CPUs, etc.) are you running?

It is also surprising that this kind of issue is popping up, because the daemon would not have started at all if the docker0 bridge were missing and could not be created.

@BenHall
Contributor Author

BenHall commented Jul 20, 2015

Hello,

As requested: this is a physical box; here are the CPU / network details. Looking at the Weave ticket (weaveworks/weave#1188) and my Docker use case, it might be related to load / the number of containers running.

# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 44
Model name:            Intel(R) Xeon(R) CPU           E5606  @ 2.13GHz
Stepping:              2
CPU MHz:               1200.000
CPU max MHz:           2133.0000
CPU min MHz:           1200.0000
BogoMIPS:              4266.84
Virtualisation:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              8192K
NUMA node0 CPU(s):     0-3
NUMA node1 CPU(s):     4-7
# lspci | egrep -i --color 'network|ethernet'
02:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
02:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
05:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
05:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)

@mrjana
Contributor

mrjana commented Jul 20, 2015

Thanks @BenHall for the additional info. @ggtools, can you also provide similar information? It may help us resolve the issue quickly.

@ggtools

ggtools commented Jul 20, 2015

Server A

# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    2
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 60
Stepping:              3
CPU MHz:               800.000
BogoMIPS:              6784.74
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              8192K
NUMA node0 CPU(s):     0-7
# lspci | egrep -i 'network|ethernet'
01:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe
01:00.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe

Server B

# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 42
Stepping:              7
CPU MHz:               3200.023
BogoMIPS:              6185.26
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              8192K
NUMA node0 CPU(s):     0-3
# lspci | egrep -i 'network|ethernet'
02:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
03:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection

@rojaro

rojaro commented Jul 21, 2015

I just had the same problem with 1.7.1. After downgrading to 1.7.0 the problem disappeared. Looking at the changes between 1.7.0 and 1.7.1 (0baf609...786b29d), I believe commit 34815f3 is likely to blame, as it is the only bridge-related code that changed between the releases.

@bprodoehl

I'm also observing the same problem with 1.7.1. docker0 definitely existed, and I was creating containers through the API. The containers created eventually die of their own accord, and are removed through the API. docker0 had 17 adapters according to brctl, and ifconfig confirmed those adapters were still around, but there were only 8 running containers. After a restart of the docker daemon, and with no running containers, docker0 still had 7 veth devices in it. I rebooted, and started the same containers that were running originally, and brctl showed 8 adapters. So it looks like we're also leaking veth devices. Here is my system info:

# docker version
Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 786b29d
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d
OS/Arch (server): linux/amd64
# docker info
Containers: 18
Images: 1323
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 1404
 Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-57-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 2
Total Memory: 1.947 GiB
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    2
Core(s) per socket:    1
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 62
Stepping:              4
CPU MHz:               2800.078
BogoMIPS:              5600.15
Hypervisor vendor:     Xen
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              25600K
NUMA node0 CPU(s):     0,1
# uname -a
Linux XXX 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Here are my create options:

var create_options = { HostConfig: { PublishAllPorts: true,
                                       Privileged: true,
                                       Dns: ['8.8.8.8', '8.8.4.4'],
                                       Binds: ['/var/log/whatever:/var/log/whatever'],
                                       LogConfig: { 'Type': 'none'},
                                       ReadonlyRootfs: false },
                         Env: [ 'TOKEN='+token] };
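
For reference, roughly the same configuration expressed as a docker run command (the image name and token value are placeholders; ReadonlyRootfs: false is the default and is simply omitted):

    docker run -d -P --privileged \
        --dns 8.8.8.8 --dns 8.8.4.4 \
        -v /var/log/whatever:/var/log/whatever \
        --log-driver=none \
        -e TOKEN=<token> <image>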

@mrjana
Contributor

mrjana commented Jul 21, 2015

@bprodoehl When you said you restarted the docker daemon, how did you restart it? Did you kill it, or did you just use systemctl or service commands?

@bprodoehl

@mrjana service docker restart.

@calavera added the area/networking and kind/bug labels Jul 22, 2015
@BenHall
Contributor Author

BenHall commented Jul 23, 2015

I also appear to be leaking a lot of interfaces - https://gist.github.com/BenHall/4a4e42575dd29d7b669b

The box only has 9 containers running.
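
A quick way to gauge the leak is to compare host-side veth interfaces against running containers; with the default docker0 bridge you would expect roughly one veth per running container:

    ip -o link | grep -c veth    # host-side veth endpoints
    docker ps -q | wc -l         # running containers
    brctl show docker0           # interfaces currently attached to the bridge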

@pospispa
Copy link
Contributor

I'm experiencing the same problem.
I tried setting the network mode to bridge, and this workaround seems to work for me.

@mavenugo
Contributor

@BenHall @ggtools @pospispa We added a few fixes in 1.7.1 to solve the CentOS/RHEL 6.6 issues reported under #14024, by replacing some of the unsupported netlink calls with ioctl. Since we are unable to reproduce the issue, and since the failure is inconsistent and basic (the existing docker0 bridge interface is sometimes not returned by the netlink call), we feel it could be a kernel issue that got exposed by these changes.

I added a quick fix in the 1.7.1 branch to confirm the above theory. Would you be willing to test a docker binary (based on 1.7.1) which contains a possible fix (https://gist.githubusercontent.com/mavenugo/b68e24be97eeaf9d0eef/raw/ba48905331ed367589d00e91beb1ff817ab73d69/gistify818322.java) for this issue?
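
As a side note, whether docker0 is visible through the different kernel interfaces can be checked directly on the host; ip queries the kernel over netlink, while brctl and ifconfig go through the older ioctl path (this is just a diagnostic, unrelated to the test binary):

    ip link show docker0          # netlink-based query
    brctl show | grep docker0     # ioctl-based bridge listing
    ifconfig docker0              # legacy ioctl query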

@BenHall
Contributor Author

BenHall commented Jul 27, 2015

@mavenugo sure, where's the build located?

The issue was occurring more and more frequently until we rebooted the server.

@mavenugo
Contributor

@BenHall Thanks. Uploaded it to Box: https://app.box.com/s/74nbptdb58ff00krilwjxqkvrcg2pek1
Please give it a spin.

@27Bslash6

I had the same issue, and after replacing my docker 1.7.1 with the above executable, I can now bring up new containers reliably without rebooting the server.

Will continue to monitor.

# docker info
Containers: 15
Images: 259
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 289
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.16.0-0.bpo.4-amd64
Operating System: Debian GNU/Linux 7 (wheezy)
CPUs: 2
Total Memory: 1.958 GiB
Name: ######
ID: T3EC:GIZT:AJWN:NEVX:DUBS:HGHQ:FE74:HMUZ:HT4I:AANJ:IRWN:3V6X
WARNING: No memory limit support
WARNING: No swap limit support

# uname -a
Linux bean.raywalker.it 3.16.0-0.bpo.4-amd64 #1 SMP Debian 3.16.7-ckt11-1~bpo70+1 (2015-06-08) x86_64 GNU/Linux

@BenHall
Contributor Author

BenHall commented Jul 28, 2015

I've deployed it and will let you know. Will this fix be in 1.8?

@BenHall
Contributor Author

BenHall commented Jul 28, 2015

Still having an issue, but it looks like a different error. I create the containers as a separate action, which didn't error; this error occurred when I attempted to start the container.

Cannot start container 9ea378320f5245cd3ad83fe4606dd1c65c89a6c2ced3e6b602ff218aebb2a7e8: could not set link up for host interface vethbf17ff1: no such device
$ docker --version
Docker version 1.7.1, build 786b29d-dirty

$ docker info
Containers: 1380
Images: 272
Storage Driver: aufs
 Root Dir: /home/docker/data/aufs
 Backing Filesystem: extfs
 Dirs: 3038
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.19.0-18-generic
Operating System: Ubuntu 15.04
CPUs: 8
Total Memory: 94.41 GiB
Registry: https://index.docker.io/v1/

$ docker inspect 9ea378320f5245cd3ad83fe4606dd1c65c89a6c2ced3e6b602ff218aebb2a7e8
"State": {
        "Running": false,
        "Paused": false,
        "Restarting": false,
        "OOMKilled": false,
        "Dead": false,
        "Pid": 0,
        "ExitCode": 128,
        "Error": "could not set link up for host interface vethbf17ff1: no such device",
        "StartedAt": "0001-01-01T00:00:00Z",
        "FinishedAt": "0001-01-01T00:00:00Z"
    },
    "Image": "6f5a8577e11827e50799fc77d1b0ad2597bdcf60c5fedaaeac0d2de2e8c41a2f",
    "NetworkSettings": {
        "Bridge": "",
        "EndpointID": "",
        "Gateway": "",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "HairpinMode": false,
        "IPAddress": "",
        "IPPrefixLen": 0,
        "IPv6Gateway": "",
        "LinkLocalIPv6Address": "",
        "LinkLocalIPv6PrefixLen": 0,
        "MacAddress": "",
        "NetworkID": "",
        "PortMapping": null,
        "Ports": null,
        "SandboxKey": "",
        "SecondaryIPAddresses": null,
        "SecondaryIPv6Addresses": null
    },




@mavenugo
Contributor

@BenHall Thanks for the confirmation. This is a different issue, and it is strange: moby/libnetwork#350 was added specifically to address the txqlen issue, which seems to be failing in your case.

@mrjana do you have any idea?

@ggtools

ggtools commented Jul 28, 2015

Works for me with 1.7.1, build 786b29d-dirty

@mavenugo
Contributor

@ggtools Thanks for the confirmation. As I mentioned, the fix (https://gist.githubusercontent.com/mavenugo/b68e24be97eeaf9d0eef/raw/ba48905331ed367589d00e91beb1ff817ab73d69/gistify818322.java) is nothing more than trying the netlink API first to create and program the bridge and, in case of failure, falling back to the ioctl call.
This confirms a possible kernel issue when both netlink and ioctl calls are used to manage the interface/bridge.
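
For intuition only, the same netlink-first / ioctl-fallback strategy can be illustrated with userland tools (ip drives the kernel over netlink, brctl over ioctl); this is an illustration of the approach, not the libnetwork code itself:

    # Create the bridge via netlink first; fall back to ioctl if that fails
    ip link add name docker0 type bridge 2>/dev/null || brctl addbr docker0
    ip link set docker0 up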

@ggtools

ggtools commented Jul 28, 2015

@mavenugo yes, I noticed. And as you may have noticed, if this is a kernel bug it affects both 3.13.0 and 3.16.0, at least the Ubuntu flavors.

@joestubbs

Just for added confirmation - I was seeing this same error intermittently when trying to launch about 10 containers quickly (in under 500 ms) through the API on 1.7.1. So far, build 786b29d-dirty has fixed it for me. I even upped the executions by an order of magnitude and so far so good.

@mavenugo
Contributor

@joestubbs thanks for the additional confirmation. We will try and get this in for 1.8.0.

@mavenugo
Contributor

@BenHall @mountkin helped fix a possible leak issue (moby/libnetwork#419). That fix plus mine might help most of us here (including your issue). I would like to get these issues resolved ASAP and make them part of the 1.8 RC, which you can try when you have time and give feedback on. WDYT?

@BenHall
Contributor Author

BenHall commented Jul 29, 2015

@mavenugo Sure, sounds good. Can you link me to the build which you would like me to deploy...

@mavenugo
Contributor

@BenHall If it helps, I can provide another private image with my fix and moby/libnetwork#419 in place, which you can try (after cleaning up the existing leaked veths). That would help a great deal. Can you make yourself available in the #docker-network IRC channel so that we can debug this live?

@calavera added this to the 1.8.0 milestone Jul 29, 2015
mavenugo added a commit to mavenugo/libnetwork that referenced this issue Jul 30, 2015
As seen in moby/moby#14738 there is
general instability in the later kernels under race conditions when ioctl
calls are used in parallel with netlink calls for various operations.
(We are yet to narrow down to the exact root-cause on the kernel).

For those older kernels that don't support some of the netlink APIs,
we can fall back to using ioctl calls. Hence bringing back the original
code that used netlink (moby#349).

Also, there was an existing bug in bridge creation using netlink which
was setting the bridge MAC during bridge creation. That operation is not
supported in the netlink library (and doesn't throw an error either).
Included a fix for that condition by setting the bridge MAC after
creating the bridge.

Signed-off-by: Madhu Venugopal <madhu@docker.com>
@calavera
Contributor

Fixed in #15185.

@mavenugo
Contributor

@BenHall Both the veth-leak and "docker0: no such interface" issues are resolved and will be made available with the next 1.8 RC image. Please give it a try when available.

fermayo added a commit to tutumcloud/support-docs that referenced this issue Aug 18, 2015
crawford pushed a commit to crawford/docker that referenced this issue Sep 16, 2015
fermayo added a commit to tutumcloud/support-docs that referenced this issue Sep 17, 2015