
Unable to remove a stopped container: device or resource busy #22260

Closed
pheuter opened this issue Apr 22, 2016 · 204 comments
pheuter commented Apr 22, 2016

Apologies if this is a duplicate issue; there seem to be several outstanding issues around a very similar error message, but under different conditions. I initially added a comment on #21969 and was told to open a separate ticket, so here it is!


BUG REPORT INFORMATION

Output of docker version:

Client:
 Version:      1.11.0
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   4dc5990
 Built:        Wed Apr 13 18:34:23 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.11.0
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   4dc5990
 Built:        Wed Apr 13 18:34:23 2016
 OS/Arch:      linux/amd64

Output of docker info:

Containers: 2
 Running: 2
 Paused: 0
 Stopped: 0
Images: 51
Server Version: 1.11.0
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 81
 Dirperm1 Supported: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge null host
Kernel Version: 3.13.0-74-generic
Operating System: Ubuntu 14.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 3.676 GiB
Name: ip-10-1-49-110
ID: 5GAP:SPRQ:UZS2:L5FP:Y4EL:RR54:R43L:JSST:ZGKB:6PBH:RQPO:PMQ5
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support

Additional environment details (AWS, VirtualBox, physical, etc.):

Running on Ubuntu 14.04.3 LTS HVM in AWS on an m3.medium instance with an EBS root volume.

Steps to reproduce the issue:

  1. $ docker run --restart on-failure --log-driver syslog --log-opt syslog-address=udp://localhost:514 -d -p 80:80 -e SOME_APP_ENV_VAR myimage
  2. Container keeps shutting down and restarting due to a bug in the runtime, exiting with an error
  3. Manually run docker stop container
  4. Container is successfully stopped
  5. Trying to rm the container then throws the error: Error response from daemon: Driver aufs failed to remove root filesystem 88189a16be60761a2c04a455206650048e784d750533ce2858bcabe2f528c92e: rename /var/lib/docker/aufs/diff/a48629f102d282572bb5df964eeec7951057b50f21df7abe162f8de386e76dc0 /var/lib/docker/aufs/diff/a48629f102d282572bb5df964eeec7951057b50f21df7abe162f8de386e76dc0-removing: device or resource busy
  6. Restart the docker engine: $ sudo service docker restart
  7. $ docker ps -a shows that the container no longer exists.
@dominikschulz

Same here. Exact same OS, also running on AWS (different instance types) with aufs.

After stopping the container, retrying docker rm several times and/or waiting a few seconds usually leads to "container not found" eventually. The issue has existed in our stack at least since Docker 1.10.
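The "retry several times and/or wait a few seconds" approach can be wrapped in a tiny helper; a minimal sketch, where the attempt count and one-second delay are arbitrary choices and the docker rm usage line is only an example:

```shell
#!/bin/sh
# Retry a command a few times with a short pause, since the busy state
# often clears on its own after a few seconds.
retry() {
  attempts="$1"; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}
# Example usage: retry 5 docker rm <container-id>
```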

@allencloud
Contributor

Suffered from this issue for quite a long time.

@danielfoss

Receiving this as well with Docker 1.10. I would very occasionally get something similar with 1.8 and 1.9, but it would clear up on its own after a short time. With 1.10 it seems to be permanent until I can restart the service or VM. I saw that it may be fixed in 1.11 and am anxiously awaiting the official update so I can find out.

@cpuguy83
Member

"Device or resource busy" is a generic error message.
Please read your error messages and make sure it's exactly the error message above (i.e., rename /var/lib/docker/aufs/diff/...).

"Me too!" comments do not help.

@danielfoss There are many fixes in 1.11.0 that would resolve some device or resource busy issues on multiple storage drivers when trying to remove the container.
1.11.1 fixes only a specific case (mounting /var/run into a container).

@cezarsa
Contributor

cezarsa commented Aug 17, 2016

I'm also seeing this problem on some machines, and from looking at the code I think the original error is being obscured here: https://github.com/docker/docker/blob/master/daemon/graphdriver/aufs/aufs.go#L275-L278

My guess is that the Rename error is happening due to an unsuccessful call to unmount. However, since the error in unmount is logged using Debugf, we won't see it unless the daemon is started in debug mode. I'll see if I can spin up some servers with debug mode enabled and catch this error.
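For anyone who wants to surface those Debugf messages, the daemon can be run in debug mode; one documented way is the debug key in /etc/docker/daemon.json (restart the daemon after changing it):

```json
{
  "debug": true
}
```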

@genezys

genezys commented Aug 23, 2016

I tried to set my docker daemon in debug mode and got the following logs when reproducing the error:

Aug 23 10:49:58 vincent dockerd[14083]: time="2016-08-23T10:49:58.191330085+02:00" level=debug msg="Calling DELETE /v1.21/containers/fa781466a8117d690077d85cc06af025da1c9c9b13302b1efed65c21788d5a75?link=False&force=False&v=False"
Aug 23 10:49:58 vincent dockerd[14083]: time="2016-08-23T10:49:58.191478608+02:00" level=error msg="Error removing mounted layer fa781466a8117d690077d85cc06af025da1c9c9b13302b1efed65c21788d5a75: rename /var/lib/docker/aufs/mnt/007c204b5aa1708f628d9518bb83d51176446e0c3743587f72b9f6cde3b9ce24 /var/lib/docker/aufs/mnt/007c204b5aa1708f628d9518bb83d51176446e0c3743587f72b9f6cde3b9ce24-removing: device or resource busy"
Aug 23 10:49:58 vincent dockerd[14083]: time="2016-08-23T10:49:58.191519719+02:00" level=error msg="Handler for DELETE /v1.21/containers/fa781466a8117d690077d85cc06af025da1c9c9b13302b1efed65c21788d5a75 returned error: Driver aufs failed to remove root filesystem fa781466a8117d690077d85cc06af025da1c9c9b13302b1efed65c21788d5a75: rename /var/lib/docker/aufs/mnt/007c204b5aa1708f628d9518bb83d51176446e0c3743587f72b9f6cde3b9ce24 /var/lib/docker/aufs/mnt/007c204b5aa1708f628d9518bb83d51176446e0c3743587f72b9f6cde3b9ce24-removing: device or resource busy"

I could find the message Error removing mounted layer in https://github.com/docker/docker/blob/f6ff9acc63a0e8203a36e2e357059089923c2a49/layer/layer_store.go#L527 but I do not know Docker well enough to tell if it is really related.

Version info:

Client:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   23cf638
 Built:        Thu Aug 18 05:02:53 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   23cf638
 Built:        Thu Aug 18 05:02:53 2016
 OS/Arch:      linux/amd64

@simkim

simkim commented Aug 23, 2016

I had the same problem using docker-compose rm:

Driver aufs failed to remove root filesystem 88189a16be60761a2c04a455206650048e784d750533ce2858bcabe2f528c92e

What I did to fix the problem without restarting docker:

cat /sys/fs/cgroup/devices/docker/88189a16be60761a2c04a455206650048e784d750533ce2858bcabe2f528c92e/tasks

This gives you the PIDs of the processes still running in the devices cgroup subsystem (what is mounted and busy), located in the hierarchy under /docker/<containerid>.

I was able to kill them:
kill $(cat /sys/fs/cgroup/devices/docker/88189a16be60761a2c04a455206650048e784d750533ce2858bcabe2f528c92e/tasks)

After their death, the container was gone (successfully removed).
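The steps above can be sketched as a small helper. The /sys/fs/cgroup/devices/docker/<id>/tasks layout is an assumption (cgroup v1 with the cgroupfs driver, as in the reports here) and varies by distro and cgroup version:

```shell
#!/bin/sh
# List the PIDs still attached to a container's devices cgroup, as in
# the workaround described above; prints nothing if the tasks file is gone.
list_cgroup_tasks() {
  tasks_file="$1"
  [ -f "$tasks_file" ] && cat "$tasks_file"
}
# Usage against a real container (path is an assumption, see above):
#   list_cgroup_tasks /sys/fs/cgroup/devices/docker/<container-id>/tasks | xargs kill
```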

Version

Client:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built: Thu Aug 18 05:02:53 2016
OS/Arch: linux/amd64

Server:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built: Thu Aug 18 05:02:53 2016
OS/Arch: linux/amd64

@genezys

genezys commented Aug 24, 2016

There seem to be two different problems here, as I am unable to fix my issue using @simkim's solution.

# docker rm b1ed3bf7dd6e
Error response from daemon: Driver aufs failed to remove root filesystem b1ed3bf7dd6e5d0298088682516ec8796d93227e4b21b769b36e720a4cfcb353: rename /var/lib/docker/aufs/mnt/acf9b10e85b8ad53e05849d641a32e646739d4cfa49c1752ba93468dee03b0cf /var/lib/docker/aufs/mnt/acf9b10e85b8ad53e05849d641a32e646739d4cfa49c1752ba93468dee03b0cf-removing: device or resource busy
# ls /sys/fs/cgroup/devices/docker/b1ed3bf7dd6e5d0298088682516ec8796d93227e4b21b769b36e720a4cfcb353
ls: cannot access /sys/fs/cgroup/devices/docker/b1ed3bf7dd6e5d0298088682516ec8796d93227e4b21b769b36e720a4cfcb353: No such file or directory
# mount | grep acf9b10e85b8ad53e05849d641a32e646739d4cfa49c1752ba93468dee03b0cf

In my case, the cgroup associated with my container seems to be correctly deleted. The filesystem is also unmounted.

The only solution for me is still to restart the Docker daemon.

@simkim

simkim commented Aug 24, 2016

Today, same problem as @genezys:

  • docker-compose app with 4 containers (rails, worker, redis, postgresql)
  • docker-compose rm leads to a device busy error on the 4 containers, with the cgroup gone
  • fuser -m on one filesystem shows a bunch of processes:
    • the pid of dockerd with the m flag (mmap'ed file or shared library)
    • other pids
  • The other pids are the pids of another docker-compose app with 4 containers (django, rqworker, redis, postgresql). How is that possible?
  • docker-compose rm on the second app leads to the same error
  • but now the first fuser -m shows only the dockerd process with the m flag, for all 8 containers

@cpuguy83
Member

This appears to have gotten worse in 1.12... I have (some) idea of what may have caused this, but not quite sure of the solution (short of a revert).
One thing I have noticed is in kernel 3.16 and higher, we do not get the busy error from the kernel anymore.

@simkim

simkim commented Aug 24, 2016

Yes, I upgraded from 1.11 to 1.12 yesterday, and now I've hit this problem twice in two days; I never had it before on this host.

@simkim

simkim commented Aug 24, 2016

@genezys and myself are on debian 8, 3.16.7-ckt25-2+deb8u3

@simkim

simkim commented Aug 25, 2016

When @genezys and I run "docker-compose stop && docker-compose rm -f --all && docker-compose up -d", since docker 1.12:

  • For a long time after restarting docker: 0 failures
  • Every day, the first time in the morning when arriving at work: 100% failure on rm

I tried running all cron tasks during the day, in case something was done during the night, but that didn't trigger the bug.

@simkim

simkim commented Aug 25, 2016

Same information with more details; we can provide more information as requested, as it happens every morning.

Stop and remove

Stopping tasappomatic_worker_1 ... done
Stopping tasappomatic_app_1 ... done
Stopping tasappomatic_redis_1 ... done
Stopping tasappomatic_db_1 ... done
WARNING: --all flag is obsolete. This is now the default behavior of `docker-compose rm`
Going to remove tasappomatic_worker_1, tasappomatic_app_1, tasappomatic_redis_1, tasappomatic_db_1
Removing tasappomatic_worker_1 ... error
Removing tasappomatic_app_1 ... error
Removing tasappomatic_redis_1 ... error
Removing tasappomatic_db_1 ... error

ERROR: for tasappomatic_app_1  Driver aufs failed to remove root filesystem a1aa9d42e425c16718def9e654dc700ff275d180434e32156230f4d1900cc417: rename /var/lib/docker/aufs/mnt/c243cc7329891de9584159b6ba8717850489b4010dfcc8b782c3c09b9f26f665 /var/lib/docker/aufs/mnt/c243cc7329891de9584159b6ba8717850489b4010dfcc8b782c3c09b9f26f665-removing: device or resource busy

ERROR: for tasappomatic_redis_1  Driver aufs failed to remove root filesystem b736349766266140e91780e3dbbcaf75edb9ad35902cbc7a6c8c5dcb2dfefe28: rename /var/lib/docker/aufs/mnt/b474a7c91ad77920dfb00dc3a0ab72bc22964ae3018e971d0d51e6ebe8566aeb /var/lib/docker/aufs/mnt/b474a7c91ad77920dfb00dc3a0ab72bc22964ae3018e971d0d51e6ebe8566aeb-removing: device or resource busy

ERROR: for tasappomatic_db_1  Driver aufs failed to remove root filesystem 1cc473718bd19d6df3239e84c74cd7322306486aa1d2252f30472216820fe96e: rename /var/lib/docker/aufs/mnt/d4162a6ef7a9e9e65bd460d13fcce8adf5f9552475b6366f14a19ebd3650952a /var/lib/docker/aufs/mnt/d4162a6ef7a9e9e65bd460d13fcce8adf5f9552475b6366f14a19ebd3650952a-removing: device or resource busy

ERROR: for tasappomatic_worker_1  Driver aufs failed to remove root filesystem eeadc938d6fb3857a02a990587a2dd791d0f0db62dc7a74e17d2c48c76bc2102: rename /var/lib/docker/aufs/mnt/adecfa9d22618665eba7aa4d92dd3ed1243f4287bd19c89617d297056f00453a /var/lib/docker/aufs/mnt/adecfa9d22618665eba7aa4d92dd3ed1243f4287bd19c89617d297056f00453a-removing: device or resource busy
Starting tasappomatic_db_1
Starting tasappomatic_redis_1

ERROR: for redis  Cannot start service redis: Container is marked for removal and cannot be started.

ERROR: for db  Cannot start service db: Container is marked for removal and cannot be started.
ERROR: Encountered errors while bringing up the project.

Inspecting mount

fuser -m /var/lib/docker/aufs/mnt/c243cc7329891de9584159b6ba8717850489b4010dfcc8b782c3c09b9f26f665
/var/lib/docker/aufs/mnt/c243cc7329891de9584159b6ba8717850489b4010dfcc8b782c3c09b9f26f665:  5620  5624  5658  6425  6434 14602m

Same set of processes for the 4 containers

Inspecting process

5620 5624 6434 another postgresql container ()
5658 worker from another container
6425 django from another container
14602m dockerd

systemd,1
  └─dockerd,14602 -H fd://
      └─docker-containe,14611 -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --runtime docker-runc
          └─docker-containe,5541 2486fd7f494940619b54fa9b4cedc52c8175988c5ae3bb1dca382f0aaee4f72a /var/run/docker/libcontainerd/2486fd7f494940619b54fa9b4cedc52c8175988c5ae3bb1dca382f0aaee4f72a docker-runc
              └─postgres,5565
                  └─postgres,5620
systemd,1
  └─dockerd,14602 -H fd://
      └─docker-containe,14611 -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --runtime docker-runc
          └─docker-containe,5541 2486fd7f494940619b54fa9b4cedc52c8175988c5ae3bb1dca382f0aaee4f72a /var/run/docker/libcontainerd/2486fd7f494940619b54fa9b4cedc52c8175988c5ae3bb1dca382f0aaee4f72a docker-runc
              └─postgres,5565
                  └─postgres,5624
systemd,1
  └─dockerd,14602 -H fd://
      └─docker-containe,14611 -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --runtime docker-runc
          └─docker-containe,5642 0364f4ace6e4d1746f8c3e31f872438a592ac07295dd232d92bf64cf729d7589 /var/run/docker/libcontainerd/0364f4ace6e4d1746f8c3e31f872438a592ac07295dd232d92bf64cf729d7589 docker-runc
              └─pootle,5658 /usr/local/bin/pootle rqworker
systemd,1
  └─dockerd,14602 -H fd://
      └─docker-containe,14611 -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --runtime docker-runc
          └─docker-containe,5700 bd3fb1c8c36ec408bcf53c8501f95871950683c024919047f5423640e377326d /var/run/docker/libcontainerd/bd3fb1c8c36ec408bcf53c8501f95871950683c024919047f5423640e377326d docker-runc
              └─run-app.sh,5716 /run-app.sh
                  └─pootle,6425 /usr/local/bin/pootle runserver --insecure --noreload 0.0.0.0:8000
systemd,1
  └─dockerd,14602 -H fd://
      └─docker-containe,14611 -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --runtime docker-runc
          └─docker-containe,5541 2486fd7f494940619b54fa9b4cedc52c8175988c5ae3bb1dca382f0aaee4f72a /var/run/docker/libcontainerd/2486fd7f494940619b54fa9b4cedc52c8175988c5ae3bb1dca382f0aaee4f72a docker-runc
              └─postgres,5565
                  └─postgres,6434

@scher200

Has anyone a better solution than restarting the docker service (version 1.12)?

@genezys

genezys commented Sep 29, 2016

A workaround was proposed in #25718 to set MountFlags=private in the docker.service configuration file of systemd. See #25718 (comment) and my following comment.

So far, this has solved the problem for me.
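For reference, the workaround amounts to a one-line systemd drop-in; the file name below is an example (run systemctl daemon-reload and restart docker afterwards):

```ini
# /etc/systemd/system/docker.service.d/mountflags.conf  (example file name)
[Service]
MountFlags=private
```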

@anusha-ragunathan
Contributor

@genezys : Note the side effect of this workaround that I've explained in #25718 (comment)

@gurpreetbajwa

gurpreetbajwa commented Oct 15, 2016

I was getting something like this:

Error response from daemon: Driver aufs failed to remove root filesystem 6b583188bfa1bf7ecf2137b31478c1301e3ee2d5c98c9970e5811a3dd103016c: rename /var/lib/docker/aufs/mnt/6b583188bfa1bf7ecf2137b31478c1301e3ee2d5c98c9970e5811a3dd103016c /var/lib/docker/aufs/mnt/6b583188bfa1bf7ecf2137b31478c1301e3ee2d5c98c9970e5811a3dd103016c-removing: device or resource busy

I simply searched for "6b583188bfa1bf7ecf2137b31478c1301e3ee2d5c98c9970e5811a3dd103016c" and found it was located in multiple folders under docker/.
I deleted all those files and then tried removing the container again with sudo docker rm <containerId>.
And it worked.

Hope it helps!
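The "search for the ID under docker/" step above can be done with find; a small sketch with the root and ID as explicit parameters (inspect the results before deleting anything):

```shell
#!/bin/sh
# List leftover directories matching a layer/container ID under a docker
# root directory, depth-first so children print before parents.
find_id_dirs() {
  root="$1"; id="$2"
  find "$root" -depth -name "*${id}*" 2>/dev/null
}
# Usage: find_id_dirs /var/lib/docker 6b583188bfa1...
```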

@k-bx

k-bx commented Oct 20, 2016

The thing is, I can't remove that file, and lsof doesn't show any user of it. I suspect this kernel bug, so I just did sudo apt-get install linux-image-generic-lts-xenial on my 14.04, hoping it'll help.

@oopschen

I encountered the same problem, and after some googling it seems the cadvisor container locks the file.
After removing the cadvisor container, I can remove the files under [dockerroot]/containers/xxxxxx.

@thaJeztah
Member

@oopschen yes, that's a known issue; cAdvisor uses various bind-mounts, including /var/lib/docker, which causes mounts to leak, resulting in this problem.

@oopschen

@thaJeztah Is there any solution or alternative for cadvisor? Thanks.

@thaJeztah
Member

@oopschen some hints are given in docker/docs#412, but what alternatives there are depends on what you need cAdvisor for. Discussing alternatives may be a good topic for forums.docker.com.

@jeff-kilbride

jeff-kilbride commented Dec 18, 2016

Just got this error for the first time on OS X Sierra using docker-compose:

ERROR: for pay-local  Driver aufs failed to remove root filesystem
0f7a073e087e0a5458d28fd13d6fc840bfd2ccc28ff6fc2bd6a6bc7a2671a27f: rename
/var/lib/docker/aufs/mnt/a3faba12b32403aaf055a26f123f5002c52f2afde1bca28e9a1c459a18a22835
/var/lib/docker/aufs/mnt/a3faba12b32403aaf055a26f123f5002c52f2afde1bca28e9a1c459a18a22835-removing: 
structure needs cleaning

I had never seen it before the latest update last night.

$ docker-compose version
docker-compose version 1.9.0, build 2585387
docker-py version: 1.10.6
CPython version: 2.7.12
OpenSSL version: OpenSSL 1.0.2j  26 Sep 2016

$ docker version
Client:
 Version:      1.13.0-rc3
 API version:  1.25
 Go version:   go1.7.3
 Git commit:   4d92237
 Built:        Tue Dec  6 01:15:44 2016
 OS/Arch:      darwin/amd64

Server:
 Version:      1.13.0-rc3
 API version:  1.25 (minimum version 1.12)
 Go version:   go1.7.3
 Git commit:   4d92237
 Built:        Tue Dec  6 01:15:44 2016
 OS/Arch:      linux/amd64
 Experimental: true

I tried docker rm -fv a couple of times, but always received the same error.

$ docker ps -a
CONTAINER ID        IMAGE                              COMMAND             CREATED             STATUS              PORTS               NAMES
0f7a073e087e        pay-local                          "node app.js"       2 minutes ago       Dead                                    pay-local

In the amount of time it's taken me to type this out, the offending container is now gone.

$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

I don't know if it's fixed itself, or if there's still a problem lurking...

EDIT: Just started and stopped the same set of containers using docker-compose several times with no errors, so... ?

@thaJeztah
Member

@jeff-kilbride structure needs cleaning is a different message and may refer to the underlying filesystem; it could be specific to Docker for Mac.

@thaJeztah
Member

Docker 1.13 is an old version and reached end-of-life in March last year. Current versions of Docker should have fixes for this (but make sure your kernel and distro are up to date as well).

@bamb00

bamb00 commented May 22, 2018

@thaJeztah Can you point me to the URL that has the end-of-life information? Docker 1.13.1 supports overlay2 by default; why would it be end-of-life?

@thaJeztah
Member

@bamb00 https://docs.docker.com/install/#time-based-release-schedule

Time-based release schedule

Starting with Docker 17.03, Docker uses a time-based release schedule.

  • Docker CE Edge releases generally happen monthly.
  • Docker CE Stable releases generally happen quarterly, with patch releases as needed.

Updates, and patches

  • A given Docker CE Stable release receives patches and updates for one month after the next Docker CE Stable release.
  • A given Docker CE Edge release does not receive any patches or updates after a subsequent Docker CE Edge or Stable release.

@bamb00

bamb00 commented May 30, 2018

@thaJeztah I thought this was a kernel issue and not a docker issue. I'm running CentOS 7.4 - 3.10.0-862.3.2.el7.x86_64.

@thaJeztah
Member

It's a combination; on CentOS and RHEL, certain kernel features have been backported but aren't enabled by default. Current versions of docker take advantage of those features (in addition to many other improvements that prevent the issue).

@xmj

xmj commented May 30, 2018

Has anyone been able to verify that this does not occur in EL7.4 or up?

@Vanuan

Vanuan commented May 30, 2018

@xmj Haven't seen this issue since upgrading to 7.4 in November 2017. At the time I was using Docker 17.06. Use 18.03 to be extra sure.

@bamb00

bamb00 commented Jun 1, 2018

With docker 1.13.1 and CentOS 7.5 (), do I have to explicitly set /proc/sys/fs/may_detach_mounts to 1? I'm unable to upgrade docker from 1.13.1 for the time being.

@publicocean0

publicocean0 commented Jun 6, 2018

I have docker >18 and I have the same problem:
rm: cannot remove '/var/lib/docker/containers/b29f1c32d0fe007feb0ed0ff3c6005a4815af4a6359232e706865762cfe1df73/mounts/shm': Device or resource busy
rm: cannot remove '/var/lib/docker/overlay2/ddea08b3871e6d658e3591cc71d40db9bddd4f2ae7d1c9488ac768530ff162d8/merged': Device or resource busy
docker is stopped

@bamb00

bamb00 commented Jun 6, 2018

@publicocean0 Did you try these commands?

   cat /proc/mounts | grep docker
   sudo umount /path

@LiverWurst

@publicocean0 Did you try these commands?

cat /proc/mounts | grep docker
sudo umount /path

...then clear your /etc/sysconfig/docker-storage file.

@bamb00

bamb00 commented Jun 18, 2018

Hi @cpuguy83

I'm also getting the error in docker 1.12.6 running kernel 3.10.0-693.21.1.el7.x86_64

   failed: [172.21.56.145] (item=/var/lib/docker) => {"changed": false, "item": "/var/lib/docker", "msg": "rmtree failed: [Errno 16] Device or resource busy: '/var/lib/docker/devicemapper/mnt/31d464385880ecb0972b36040ce912d3018fc91ba2b4f1f4cbf730baad7fa99c'          

Unfortunately, I cannot afford to upgrade away from 1.12.6 for the time being. Is there a workaround, like upgrading the kernel and/or using 1.12.6-cs13?

Thanks in advance.

@LiverWurst

LiverWurst commented Jun 18, 2018 via email

@bamb00

bamb00 commented Jun 18, 2018

This needs to be resolved on a production server.

@LiverWurst

LiverWurst commented Jun 18, 2018 via email

@cpuguy83
Member

cpuguy83 commented Jun 19, 2018 via email

@bamb00

bamb00 commented Jun 20, 2018

@cpuguy83

You are referring to setting the 'mount --make-rshared /var/lib/docker/devicemapper' command in /etc/systemd/system/docker.service.d/.

I wasn't sure if I need to create an arbitrary .conf file in /etc/systemd/system/docker.service.d to run the post-start mount command.

@thaJeztah
Member

I wasn't sure if I need to create any arbitrary .conf file in /etc/systemd/system/docker.service.d to run the Poststart mount command.

Yes; to make changes to a systemd unit, it's always recommended to create a "drop-in" ("override") file and never modify the original unit file. Modifying the original file means it won't be updated when a newer version becomes available (i.e., when updating the version of docker you're running).
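Concretely, such a drop-in could look like the following sketch; the file name is an example, the mount path should match your storage driver, and ExecStartPost is the systemd directive for a post-start command (run systemctl daemon-reload and restart docker afterwards):

```ini
# /etc/systemd/system/docker.service.d/mount-rshared.conf  (example file name)
[Service]
ExecStartPost=/usr/bin/mount --make-rshared /var/lib/docker/devicemapper
```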

@bamb00

bamb00 commented Jun 26, 2018

@LiverWurst Can you explain what you mean by "remove the fedora version"? I'm running CentOS 7.3.

Thanks.

@LiverWurst

LiverWurst commented Jun 26, 2018 via email

@RakeshNagarajan

https://ekuric.wordpress.com/2015/10/09/docker-complains-about-cannot-remove-device-or-resource-busy/

This blog helped me

@dygos2

dygos2 commented Oct 24, 2018

Restarting the VM should work!

@452

452 commented Sep 13, 2019

docker rm -f embedded-java
Error response from daemon: container 8b91b15cb7939c05fb16cd26d13ee67bd33ca04af3a574193cee95f21e27ad2b: driver "aufs" failed to remove root filesystem: could not remove diff path for id 4119de17501b169eb0e4901dae4bc68e388d92a92f371ee53db9b93ec6970b2d: lstat /var/snap/docker/common/var-lib-docker/aufs/diff/4119de17501b169eb0e4901dae4bc68e388d92a92f371ee53db9b93ec6970b2d-removing/home/yocto/build/tmp/work/x86_64-linux/coreutils-native/8.30-r0/build/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/c
onfdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/con
fdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3: file name too long

Solution (rebooting did not help; only rm via sudo did):

sudo su
rm -rf /var/snap/docker/common/var-lib-docker/aufs/diff/4119de17501b169eb0e4901dae4bc68e388d92a92f371ee53db9b93ec6970b2d-removing
docker rm -fv embedded-java

@ahuigo

ahuigo commented Nov 8, 2019

If you get an error like:

Unable to remove filesystem: /var/lib/docker/container/11667ef16239.../

The solution (no need to run service docker restart to restart docker):

# 1. find which process(pid) occupy the fs system
$ find /proc/*/mounts  |xargs -n1 grep -l -E '^shm.*/docker/.*/11667ef16239' | cut -d"/" -f3
1302   # /proc/1302/mounts

# 2. kill this process
$ sudo kill -9 1302
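The cut -d"/" -f3 step in the pipeline above just pulls the PID out of the matching /proc/<pid>/mounts path; a minimal illustration:

```shell
#!/bin/sh
# Splitting "/proc/1302/mounts" on "/" gives fields "", "proc", "1302",
# "mounts"; field 3 is the PID that still holds the mount.
path="/proc/1302/mounts"
pid=$(echo "$path" | cut -d"/" -f3)
echo "$pid"   # prints 1302
```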

@dagelf
Contributor

dagelf commented Aug 15, 2021

This is now one of the top Google search results. When docker hangs, in my experience it's usually because it loses track of a netns or overlayfs mount... this works for me:

sudo su
service docker stop &
sleep 10; killall -9 dockerd;
rm /var/run/docker.* 
for a in `mount|egrep '(docker|netns)'|awk '{print $3}'`; do umount $a; done; 
service docker stop
killall -9 dockerd;
service docker start

If it still chokes up... dockerd -D

If you have autostarting containers breaking it, then disable them while docker is off:

sed s@always@no@ -i /var/lib/docker/containers/*/hostconfig.json

Disclaimer: I'm not sure where else "always" may occur in the hostconfig.json file, but in my containers I only see it under the RestartPolicy section.
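A slightly more targeted variant of that sed, sketched as a filter: anchoring the substitution to the RestartPolicy key leaves any other "always" strings alone, assuming hostconfig.json is the usual single-line JSON that Docker writes:

```shell
#!/bin/sh
# Rewrite only the RestartPolicy name from "always" to "no" in a
# hostconfig.json stream; other occurrences of "always" are untouched.
disable_restart() {
  sed 's/"RestartPolicy":{"Name":"always"/"RestartPolicy":{"Name":"no"/'
}
# Usage (with docker stopped), per container file:
#   disable_restart < hostconfig.json > hostconfig.json.new && mv hostconfig.json.new hostconfig.json
```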
