Docker: "Error response from daemon: service endpoint with name ... already exists"

Hi, I get this strange error when I try to launch a Docker container with a specific name:

docker: Error response from daemon: service endpoint with name qc.T8 already exists. 

However, there is no container with this name:

 > docker ps -a
 CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES

 > sudo docker info
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 3
 Server Version: 1.12.3
 Storage Driver: aufs
  Root Dir: /ahdee/docker/aufs
  Backing Filesystem: extfs
  Dirs: 28
  Dirperm1 Supported: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: null bridge host overlay
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Security Options: apparmor
 Kernel Version: 3.13.0-101-generic
 Operating System: Ubuntu 14.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 64
 Total Memory: 480.3 GiB

Is there any way to work around this? Thank you, A.

+25
6 answers

TL;DR: restart the Docker daemon, or reboot your machine (if you are running it on a Mac, for example).

Edit: the more recent answers below explain this better than mine: a network endpoint gets stuck on the daemon. I am updating mine only because it sits at the top of the list and people may not scroll down.

  1. Restarting the docker daemon / docker service / docker-machine is the easiest answer.

  2. The better answer (via Shalabha Negi):

 docker network inspect <network name>
 docker network disconnect <network name> <container id/container name>

This is also faster in practice if you can find the network, since restarting the docker machine / daemon / service is, in my experience, slow. If this works for you, please scroll down and upvote that answer.


So the problem is probably your network adapter (the virtual Docker one, not a physical one): take a quick look at this: https://github.com/moby/moby/issues/23302 .

Preventing this from happening again is harder. There seems to be a Docker bug where a container that exits with an abnormal (non-zero) status code can leave its network endpoint registered. You then cannot start a new container with that endpoint name.
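The linked issue boils down to the daemon's network still listing an endpoint for a container that no longer exists. Here is a minimal sketch of how to spot such a stale endpoint; the JSON below is a hypothetical, trimmed sample of what `docker network inspect bridge` might return, so on a real host you would pipe the live command's output instead of the heredoc:

```shell
# Hypothetical, trimmed sample of `docker network inspect bridge` output.
# On a live daemon, replace the heredoc with the real command.
INSPECT_JSON=$(cat <<'EOF'
[
    {
        "Name": "bridge",
        "Containers": {
            "abc123": {
                "Name": "qc.T8",
                "IPv4Address": "172.17.0.2/16"
            }
        }
    }
]
EOF
)

# Print every "Name" field: the first is the network itself, the rest are
# the endpoints still attached to it.
echo "$INSPECT_JSON" | grep '"Name"' | sed 's/.*"Name": "\([^"]*\)".*/\1/'
```

Any endpoint name printed here that does not show up in `docker ps -a` is a candidate for `docker network disconnect --force`.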

+17

Just in case someone needs it: as @Jons explained, this was a weird network issue, so I solved it by forcing the disconnect:

 docker network disconnect --force bridge qc.T8 

A

+22
 docker network inspect <network name>
 docker network disconnect <network name> <container id/container name>

You can also try the following:

 docker network prune
 docker system prune

These commands help clear out zombie containers, volumes and networks. If none of these commands work, run

 sudo service docker restart 

and your problem should be solved.

+8

I created this script a while ago; I think it should help people working with Swarm. It relies on docker-machine.

https://gist.github.com/lcamilo15/7aaaebe71852444ea8f1da5c4c9c84b7

 declare -a NODE_NAMES=("node_01" "node_02")
 declare -a CONTAINER_NAMES=("container_a" "container_b")
 declare -a NETWORK_NAMES=("network_1" "network_2")

 for NODE in "${NODE_NAMES[@]}"; do
   eval "$(docker-machine env "$NODE")"
   for CONTAINER_NAME in "${CONTAINER_NAMES[@]}"; do
     for NETWORK_NAME in "${NETWORK_NAMES[@]}"; do
       echo "Disconnecting $CONTAINER_NAME from $NETWORK_NAME"
       docker network disconnect -f "$NETWORK_NAME" "$CONTAINER_NAME"
     done
   done
 done
+1

This may happen because abruptly removing a container can leave the network endpoint (the container name) registered.

Try stopping the container before removing it: docker stop <container-name> , then docker rm <container-name> .

Then start it again with docker run --name <same-container-name> <image> .
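To make the stop / remove / re-run sequence above concrete, here is a dry-run sketch. The `docker` shell function is a stub that only echoes what would be executed, so it runs without a daemon; the container name and image are hypothetical. Delete the stub to run the commands for real:

```shell
# Stub that echoes instead of invoking the real CLI; delete this function
# to execute the commands against a live Docker daemon.
docker() { echo "would run: docker $*"; }

NAME="qc.T8"          # hypothetical container name from the question
IMAGE="ubuntu:14.04"  # hypothetical image

docker stop "$NAME"                    # stop first, so the endpoint is released
docker rm "$NAME"                      # then remove the container
docker run -d --name "$NAME" "$IMAGE"  # now the name can be reused
```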

0
 docker network rm <network name> 

worked for me. Note that this removes the whole network, not just the stale endpoint, so use it only on networks you created yourself.

0
