David M. Karr (83) [Avatar] Offline
#1
So I'm following the instructions in chapter 2 to get the toy kubia image running in a pod. I already had a Docker Hub login because of some earlier experiments with pure Docker images.

As described in chapter 2 of the book, I built the toy "kubia" image and pushed it to Docker Hub. I verified this by logging into Docker Hub and seeing the image there.

I'm doing this on CentOS 7.


I then ran the following to create the replication controller and the pod running my image:

kubectl run kubia --image=davidmichaelkarr/kubia --port=8080 --generator=run/v1


I waited a while for the status to change, but it never finished downloading the image. When I describe the pod, I see something like this:

  Normal   Scheduled              24m                 default-scheduler  Successfully assigned kubia-25th5 to minikube
  Normal   SuccessfulMountVolume  24m                 kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-x5nl4"
  Normal   Pulling                22m (x4 over 24m)   kubelet, minikube  pulling image "davidmichaelkarr/kubia"
  Warning  Failed                 22m (x4 over 24m)   kubelet, minikube  Failed to pull image "davidmichaelkarr/kubia": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)


So I then constructed the following command:

curl -v -u 'davidmichaelkarr:**' 'https://registry-1.docker.io/v2/'

This uses the same password I use for Docker Hub (they should be the same, right?).
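(For what it's worth, the -u option just base64-encodes user:password into a Basic Authorization header; here's a quick sketch of what curl sends, with a made-up password:)

```shell
# curl -u 'user:pass' builds its Authorization header like this
# (the password below is made up, not my real one):
auth=$(printf '%s' 'davidmichaelkarr:secret' | base64)
echo "Authorization: Basic $auth"

# decoding the value recovers the original user:password pair
printf '%s' "$auth" | base64 -d
echo
```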

This gives me the following:

* About to connect() to proxy *** port 8080 (#0)
*   Trying **.**.**.**...
* Connected to *** (**.**.**.**) port 8080 (#0)
* Establish HTTP proxy tunnel to registry-1.docker.io:443
* Server auth using Basic with user 'davidmichaelkarr'
> CONNECT registry-1.docker.io:443 HTTP/1.1
> Host: registry-1.docker.io:443
> User-Agent: curl/7.29.0
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 200 Connection established
<
* Proxy replied OK to CONNECT request
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
*       subject: CN=*.docker.io
*       start date: Aug 02 00:00:00 2017 GMT
*       expire date: Sep 02 12:00:00 2018 GMT
*       common name: *.docker.io
*       issuer: CN=Amazon,OU=Server CA 1B,O=Amazon,C=US
* Server auth using Basic with user 'davidmichaelkarr'
> GET /v2/ HTTP/1.1
> Authorization: Basic ***
> User-Agent: curl/7.29.0
> Host: registry-1.docker.io
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Content-Type: application/json; charset=utf-8
< Docker-Distribution-Api-Version: registry/2.0
< Www-Authenticate: Bearer realm="https://auth.docker.io/token",service="registry.docker.io"
< Date: Wed, 24 Jan 2018 18:34:39 GMT
< Content-Length: 87
< Strict-Transport-Security: max-age=31536000
<
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}
* Connection #0 to host *** left intact


I didn't know what else to try, so I asked about this on StackOverflow: https://stackoverflow.com/questions/48429591/kubectl-cant-connect-to-docker-registry-to-download-image

The answer I got talks about the need to set up a secret and register it with kubectl. I haven't tried this yet. Is this just something that was missed in the book, or is there a reason it might not be required?
Marko Lukša (67) [Avatar] Offline
#2
A few things about pulling images:
- you only need imagePullSecrets when pulling from a private repo (this is explained in the book, most likely in the chapter about secrets);
- log into hub.docker.com and make sure your repo is marked as public, so you don't need to deal with imagePullSecrets at this point;
- I'm not sure whether Docker Hub uses basic HTTP auth; I tried the curl command and it doesn't work for me.
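If you do end up pulling from a private repo later, the flow is roughly: create the secret with kubectl create secret docker-registry (the name "regcred" below is just an example I'm making up) and reference it from the pod spec. A minimal sketch:

```yaml
# Sketch only: pod spec referencing an image pull secret named "regcred",
# created beforehand with something like:
#   kubectl create secret docker-registry regcred \
#     --docker-server=https://index.docker.io/v1/ \
#     --docker-username=<user> --docker-password=<password>
apiVersion: v1
kind: Pod
metadata:
  name: kubia-private
spec:
  containers:
  - name: kubia
    image: davidmichaelkarr/kubia
    ports:
    - containerPort: 8080
  imagePullSecrets:
  - name: regcred
```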

But the problem in your case is that the Docker daemon running in the minikube VM isn't using your internal proxy, and thus can't reach the registry at all. It's not an authentication problem.

You can confirm this by running the curl command from inside the minikube VM (run
minikube ssh
to log into it).

To fix the problem, make sure you run minikube with the following options:
--docker-env http_proxy=http://yourproxy:port --docker-env https_proxy=http://yourproxy:port --docker-env no_proxy=192.168.99.0/24

David M. Karr (83) [Avatar] Offline
#3
Hmm, this seemed promising, but it didn't make any difference. If I run that command and then do "minikube ssh", what should I see in that environment that would tell me whether these variables were set properly? I wouldn't expect to see anything in the plain "env" output, because those settings are intended to be Docker-specific.
Marko Lukša (67) [Avatar] Offline
#4
Not sure how you can see the environment configured for the Docker daemon, but here's what I tried:

I ran minikube like this (pointed it to 8.8.8.8 instead of an actual proxy):

$ minikube start --docker-env http_proxy=http://8.8.8.8:88 --docker-env https_proxy=http://8.8.8.8:88 --docker-env no_proxy=192.168.99.0/24


Then, if I examine the minikube logs, I see that it's trying to use the configured proxy:

$ minikube logs | grep 8.8.8.8
Jan 25 20:49:41 minikube localkube[3536]: E0125 20:49:41.129124    3536 remote_image.go:108] PullImage "gcr.io/google_containers/heapster-amd64:v1.5.0-beta.0" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v1/_ping: http: error connecting to proxy http://8.8.8.8:88: dial tcp 8.8.8.8:88: i/o timeout


So, I suggest you try minikube logs to see if there's any trace of whether the proxy is being used or not. In any case, you should see the error preventing the image from being pulled.

Oh, maybe you need to delete the VM first with minikube delete... but I doubt that...
David M. Karr (83) [Avatar] Offline
#5
LOL. Your last "doubtful" comment was the key. When you told me I needed the "--docker-env" settings, I thought that "minikube stop" would do everything that was required to start over. I just did "minikube delete" and started over, and now the image was successfully downloaded and the container started. However, I also looked at "minikube logs" and grepped for "proxy", and it never found anything that looked like a connection that was going through a proxy.
Marko Lukša (67) [Avatar] Offline
#6
Yeah, if you just run minikube start, it uses the existing VM, so any flags you specify may or may not be honored. I've learned that any time minikube doesn't behave like it should, running minikube delete might be the solution.

I probably should have mentioned that in the book (and what to do if you need to use a proxy). Noted for 2nd edition.

Thank you for reporting this.

David M. Karr (83) [Avatar] Offline
#7
Hmm, looks like there are more issues related to this. Following the steps in the book, I then created the kubia-http LoadBalancer. When I tried to do "minikube service kubia-http" to get the host:port, I got this:

Error opening service: Could not find finalized endpoint being pointed to by kubia-http: Error validating service: Error getting service kubia-http: Get https://192.168.99.100:8443/api/v1/namespaces/default/services/kubia-http: Proxy Authentication Required


So this looks like it's seeing my https_proxy setting, but not my no_proxy setting, or perhaps the format I used isn't working. I found the documentation for specifying IP addresses in no_proxy ambiguous, especially for address ranges. I thought it would support CIDR, so I set it to "192.168.0.0/16", but that didn't appear to work here.
Marko Lukša (67) [Avatar] Offline
#8
You probably need to set no_proxy in your local OS (the one you're running the minikube service command in).
David M. Karr (83) [Avatar] Offline
#9
That was already set. Doesn't make any difference.

Note that I found the following related issue for minikube, which I've commented on: https://github.com/kubernetes/minikube/issues/2453 .
Marko Lukša (67) [Avatar] Offline
#10
Hmm, isn't the Proxy Authentication Required message coming from your proxy? That would imply that the request to 192.168.99.100 is going to the proxy instead of to the minikube VM directly.

Have you tried using the actual IP address instead of the CIDR?

I've just run the following, and it still wants to go through the proxy:

http_proxy=http://8.8.8.8 no_proxy=192.168.0.0/16 minikube service kubia


But, using the IP works:

http_proxy=http://8.8.8.8 no_proxy=192.168.99.100 minikube service kubia
Opening kubernetes service default/kubia in default browser...

Marko Lukša (67) [Avatar] Offline
#11
Ah, yes, using CIDR in no_proxy isn't supported in Go's net/http library, which minikube uses. See https://github.com/golang/go/issues/16704
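To make that matching rule concrete, here's a rough shell simulation (not Go's actual code, and the function name is mine): each no_proxy entry matches only as an exact host or as a domain suffix, so a CIDR entry never matches a bare IP.

```shell
# Rough simulation of how Go's net/http matched no_proxy entries at the
# time: exact host match or domain-suffix match, nothing else.
matches_no_proxy() {
  host=$1
  entries=$(printf '%s' "$2" | tr ',' ' ')
  for entry in $entries; do
    case $host in
      "$entry" | *".$entry") echo yes; return 0 ;;
    esac
  done
  echo no
  return 1
}

matches_no_proxy 192.168.99.100 "192.168.0.0/16"   # prints "no": CIDR never matches
matches_no_proxy 192.168.99.100 "192.168.99.100"   # prints "yes": exact IP matches
```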
591139 (1) [Avatar] Offline
#12
Hi,

I am reading the book and am now on Chapter 2. As suggested, I have followed the steps and created a Dockerfile and an app.js file.

Whenever I try to build the image, it just hangs and does not proceed.

Below is the Dockerfile content:

FROM node:7
ADD app.js /app.js
ENTRYPOINT ["node", "app.js"]

Below is the app.js file content:

const http = require('http');
const os = require('os');
console.log("Kubia server starting...");
var handler = function(request, response) {
  console.log("Received request from " + request.connection.remoteAddress);
  response.writeHead(200);
  response.end("You've hit " + os.hostname() + "\n");
};
var www = http.createServer(handler);
www.listen(8080);

Jigars-MacBook-Pro:~ jigars$ docker build -t kubia:latest -f Dockerfile .

It doesn't go any further than this. I'm not sure what the issue is. I don't see the image when I run docker images:

docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
busybox             latest              e1ddd7948a1c        2 weeks ago         1.16MB
node                7                   d9aed20b68a4        12 months ago       660MB

Let me know what I am missing. Awaiting your reply.
tempusfugit (143) [Avatar] Offline
#13
No issue here:

$ docker build -t kubia:latest -f Dockerfile .
Sending build context to Docker daemon  7.168kB
Step 1/3 : FROM node:8
 ---> 55791187f71c
Step 2/3 : ADD app.js /app.js
 ---> Using cache
 ---> 2823e77b9a87
Step 3/3 : ENTRYPOINT ["node","app.js"]
 ---> Using cache
 ---> ef9855d6f05f
Successfully built ef9855d6f05f
Successfully tagged kubia:latest
$ docker --version
Docker version 18.06.1-ce, build e68fc7a
$ docker-compose --version
docker-compose version 1.22.0, build f46880f
$ docker-machine --version
docker-machine version 0.15.0, build b48dc28d
$ 

FROM node:8
ADD app.js /app.js
ENTRYPOINT ["node","app.js"]
  • macOS High Sierra Version 10.13.5 (17F77)
  • Docker Version 18.06.1-ce-mac73 (26764) "Stable"
  • Get started with Docker for Mac

    Note that once you resolve this issue the recommended minikube installation is via

    $ brew cask install minikube

    $ brew search minikube
    ==> Casks
    minikube ✔
    $


    And while it may be more "authentic" to push the image to the public registry only to then pull it into minikube, you can save yourself some work (and downloads) by building the image directly inside the minikube environment, as described in Kubernetes: Hello Minikube - Create a Docker container image.

    $ open --background -a Docker
    $ minikube start --vm-driver=hyperkit
      Starting local Kubernetes v1.10.0 cluster...
      Starting VM...
      Getting VM IP address...
      Moving files into cluster...
      Setting up certs...
      Connecting to cluster...
      Setting up kubeconfig...
      Starting cluster components...
      Kubectl is now configured to use the cluster.
      Loading cached images from config file.
    $ kubectl config use-context minikube
      Switched to context "minikube".
    $ kubectl cluster-info
      Kubernetes master is running at https://192.168.64.2:8443
      KubeDNS is running at https://192.168.64.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    
      To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    $ eval $(minikube docker-env)
    $ docker build -t kubia:v1 .
      Sending build context to Docker daemon  7.168kB
      Step 1/3 : FROM node:8
       ---> 55791187f71c
      Step 2/3 : ADD app.js /app.js
       ---> Using cache
       ---> 5d84158e73dc
      Step 3/3 : ENTRYPOINT ["node","app.js"]
       ---> Using cache
       ---> e67e23248451
      Successfully built e67e23248451
      Successfully tagged kubia:v1
    $ docker images
      REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
      kubia                                      v1                  e67e23248451        12 days ago         673MB
      node                                       8                   55791187f71c        2 weeks ago         673MB
      busybox                                    latest              e1ddd7948a1c        4 weeks ago         1.16MB
      k8s.gcr.io/kube-proxy-amd64                v1.10.0             bfc21aadc7d3        5 months ago        97MB
      k8s.gcr.io/kube-apiserver-amd64            v1.10.0             af20925d51a3        5 months ago        225MB
      k8s.gcr.io/kube-controller-manager-amd64   v1.10.0             ad86dbed1555        5 months ago        148MB
      k8s.gcr.io/kube-scheduler-amd64            v1.10.0             704ba848e69a        5 months ago        50.4MB
      k8s.gcr.io/etcd-amd64                      3.1.12              52920ad46f5b        5 months ago        193MB
      k8s.gcr.io/kube-addon-manager              v8.6                9c16409588eb        6 months ago        78.4MB
      k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64     1.14.8              c2ce1ffb51ed        8 months ago        41MB
      k8s.gcr.io/k8s-dns-sidecar-amd64           1.14.8              6f7f2dc7fab5        8 months ago        42.2MB
      k8s.gcr.io/k8s-dns-kube-dns-amd64          1.14.8              80cc5ea4b547        8 months ago        50.5MB
      k8s.gcr.io/pause-amd64                     3.1                 da86e6ba6ca1        8 months ago        742kB
      k8s.gcr.io/kubernetes-dashboard-amd64      v1.8.1              e94d2f21bc0c        8 months ago        121MB
      gcr.io/k8s-minikube/storage-provisioner    v1.8.1              4689081edb10        9 months ago        80.8MB
    $ kubectl run kubia --image=kubia:v1 --port=8080 --image-pull-policy=Never --generator=run/v1
      replicationcontroller/kubia created
    $ kubectl get rc
      NAME      DESIRED   CURRENT   READY     AGE
      kubia     1         1         1         5s
    $ kubectl get po
      NAME          READY     STATUS    RESTARTS   AGE
      kubia-fsn4m   1/1       Running   0          9s
    $ kubectl expose rc kubia --type=LoadBalancer --name kubia-http
      service/kubia-http exposed
    $ kubectl get svc
      NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
      kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP          9d
      kubia-http   LoadBalancer   10.97.142.145   <pending>     8080:31969/TCP   5s
    $ minikube service kubia-http
      Opening kubernetes service default/kubia-http in default browser...
    $ curl http://192.168.64.2:31969/
      You've hit kubia-fsn4m
    $ kubectl delete rc kubia
      replicationcontroller "kubia" deleted
    $ kubectl get po
      NAME          READY     STATUS        RESTARTS   AGE
      kubia-fsn4m   1/1       Terminating   0          3m
    $ kubectl get po
      No resources found.
    $ eval $(minikube docker-env --unset)
    $ minikube stop
      Stopping local Kubernetes cluster...
      Machine stopped.
    $ osascript -e 'quit app "Docker"'
    $