Error using private repo - dial tcp port 53 i/o timeout #1569

Closed
ernestm opened this issue Apr 25, 2017 · 6 comments
Comments

ernestm commented Apr 25, 2017

Expected behavior

docker login and pull work as normal

Actual behavior

Error on docker login:
Error response from daemon: Get https://.jfrog.io/v1/users/: dial tcp: lookup .jfrog.io on 192.168.65.1:53: read udp 192.168.65.2:51225->192.168.65.1:53: i/o timeout

(then, after putting the creds into config.json by hand, since login doesn't work)

Error on docker pull:
Error response from daemon: Get https://.jfrog.io/v1/_ping: dial tcp: lookup .jfrog.io on 192.168.65.1:53: read udp 192.168.65.2:48725->192.168.65.1:53: i/o timeout

Information

Diagnostic ID: 420E5E97-6084-4DAB-837F-D0D168FB72C4

I'm running Version 17.05.0-ce-rc1-mac8 (16582) edge on OS X 10.11.6 (15G1421). I have restarted Docker, restarted my laptop, and tried using a different network. This started happening to me today, but I haven't tried to pull from this repo since before DockerCon (and at least 2 Docker version upgrades on my laptop). I am using straight Docker for Mac, no docker-machine.

Having read the various similar bugs on this issue (docker/kitematic#718, moby/moby#24344, moby/moby#13337), I have tried using both Google DNS and OpenDNS as resolvers, to no avail. I can resolve the hostname on the host:

ernestmueller$ nslookup <private artifactory repo>.jfrog.io
Server:		208.67.222.222
Address:	208.67.222.222#53

Non-authoritative answer:
<private artifactory repo>.jfrog.io	canonical name = <private>.jfrog.io.
<private>.jfrog.io	canonical name = prod-use1-alb-jfrog-io-shared.elb.jfrog.net.
prod-use1-alb-jfrog-io-shared.elb.jfrog.net	canonical name = alb-jfrog-io-shared-357556443.us-east-1.elb.amazonaws.com.
Name:	alb-jfrog-io-shared-357556443.us-east-1.elb.amazonaws.com
Address: 52.0.46.24
Name:	alb-jfrog-io-shared-357556443.us-east-1.elb.amazonaws.com
Address: 52.202.180.189

I can pull alpine, and can pull from Docker Hub and Amazon ECR. One of my colleagues can pull from this private repo using the same credentials.

The only workaround I could get to work was to hardcode one of those IPs in my /etc/hosts file - when I do that, I can log in and pull. But of course doing that for a CNAME that points to an ELB with multiple IPs makes baby Jesus cry in the long term.
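
For reference, the hack is just one line in /etc/hosts, using one of the A records from the nslookup above (so it will go stale whenever the ELB rotates addresses):

52.0.46.24    <private artifactory repo>.jfrog.io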

Steps to reproduce the behavior

  1. docker login .jfrog.io
  2. docker pull /my/container:latest
djs55 (Contributor) commented Apr 26, 2017

Thanks for the report and the diagnostics upload. Comparing the DNS results in the logs from the VM versus the host I think there are 2 problems:

  • the resource records are presented in the wrong order: some resolvers want A records referenced from a CNAME to occur after the CNAME in the list
  • there are unnecessary duplicates of the resource records in the list, which may be harmless, but I notice the UDP packet size is technically over the 512-byte limit
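
If you want to compare the two answers yourself, a quick way (a sketch; dig comes from the bind-tools package on alpine) is to run the same query on the host and then through the forwarder from inside a container:

# on the macOS host, straight at the configured upstream resolver
$ dig +noall +answer <private artifactory repo>.jfrog.io

# inside a container, via the 192.168.65.1 forwarder
$ docker run --rm -it alpine sh
/ # apk add --no-cache bind-tools
/ # dig +noall +answer <private artifactory repo>.jfrog.io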

ernestm (Author) commented Apr 26, 2017

Cool, thanks for the explanation. Is this something happening in the Docker/VM/OS X stack, or is it something the registry provider (JFrog in this case) can alleviate by changing the order of their DNS entries, in which case I should report it to them?

djs55 (Contributor) commented Apr 26, 2017

I think I have a fix for this. If you have the time to experiment, could you try the following:

  • download the attached vpnkit.zip, unzip and check the sha1sum:
$ unzip vpnkit.zip 
Archive:  vpnkit.zip
  inflating: vpnkit                  
$ sha1sum vpnkit
ad2a5d9c42c1c19b4c5275f2bd5cd42413556dac  vpnkit
  • quit Docker for Mac
  • take a backup of the old vpnkit binary:
$ mv /Applications/Docker.app/Contents/Resources/bin/vpnkit /Applications/Docker.app/Contents/Resources/bin/vpnkit.backup
  • replace the vpnkit binary:
$ cp vpnkit /Applications/Docker.app/Contents/Resources/bin/vpnkit 
  • restart Docker for Mac

This binary hasn't been signed so you may be prompted to confirm whether it should be allowed to listen on network ports. The binary has not been thoroughly tested -- it's only suitable for testing.
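
If anything misbehaves with the test build, rolling back is just the reverse of the backup step above (then restart Docker for Mac again):

$ mv /Applications/Docker.app/Contents/Resources/bin/vpnkit.backup /Applications/Docker.app/Contents/Resources/bin/vpnkit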

When I query a DNS name that maps to a chain of CNAMEs from within a container, it now looks more regular, e.g.:

$ docker run -it alpine sh
/ # apk update
/ # apk add bind-tools
/ # dig <DNS name>
...
;; ANSWER SECTION:
a. 37 IN CNAME  b.
b. 37 IN CNAME   c.
c. 187 IN CNAME d.
d. 37 IN A <ip>
d. 37 IN A <ip>

If you have the time to try it, let me know the results. If it still fails, please upload a fresh diagnostic.

Thanks!

djs55 (Contributor) commented Apr 26, 2017

@ernestm it looks like a bug in Docker for Mac's DNS forwarder. Many DNS client resolvers are actually pretty tolerant and would cope with this, but I think the Go one is quite strict, and hence docker login fails. As far as I can see JFrog's DNS responses are fine -- they're getting garbled on the way through to the VM.
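
A quick way to exercise the same lookup path without docker login (a sketch, using busybox's nslookup in the stock alpine image) is to resolve the registry name from inside a container, since that goes through the same 192.168.65.1 forwarder:

$ docker run --rm alpine nslookup <private artifactory repo>.jfrog.io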

docker-robott (Collaborator) commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with a /remove-lifecycle stale comment.
Stale issues will be closed after an additional 30d of inactivity.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle stale

docker-robott (Collaborator) commented

Closed issues are locked after 30 days of inactivity.
This helps our team focus on active issues.

If you have found a problem that seems similar to this, please open a new issue.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle locked

docker locked and limited conversation to collaborators Jun 23, 2020