
I have followed this tutorial in order to run a Kubernetes cluster locally in a Docker container. When I run kubectl get nodes, I get:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

I have noticed that some of the containers started by the kubelet, such as the apiserver, have exited. This is the output of docker ps -a:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
778bc9a9a93c gcr.io/google_containers/hyperkube-amd64:v1.2.2 "/hyperkube apiserver" 3 seconds ago Exited (255) 2 seconds ago k8s_apiserver.78ec1de_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_de6ff8f9
12dd99c83c34 gcr.io/google_containers/hyperkube-amd64:v1.2.2 "/setup-files.sh IP:1" 3 seconds ago Exited (7) 2 seconds ago k8s_setup.e5aa3216_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_3283400b
ef7383fa9203 gcr.io/google_containers/hyperkube-amd64:v1.2.2 "/setup-files.sh IP:1" 4 seconds ago Exited (7) 4 seconds ago k8s_setup.e5aa3216_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_87beca1b
b3896f4896b1 gcr.io/google_containers/hyperkube-amd64:v1.2.2 "/hyperkube scheduler" 5 seconds ago Up 4 seconds k8s_scheduler.fc12fcbe_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_16584c07
e9b1bc5aeeaa gcr.io/google_containers/hyperkube-amd64:v1.2.2 "/hyperkube apiserver" 5 seconds ago Exited (255) 4 seconds ago k8s_apiserver.78ec1de_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_87e1ad70
c81dbe181afa gcr.io/google_containers/hyperkube-amd64:v1.2.2 "/hyperkube controlle" 5 seconds ago Up 4 seconds k8s_controller-manager.70414b65_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_1e30d242
63dfa0fb0881 gcr.io/google_containers/etcd:2.2.1 "/usr/local/bin/etcd " 5 seconds ago Up 4 seconds k8s_etcd.7e452b0b_k8s-etcd-sw-ansible01_default_1df6a8b4d6e129d5ed8840e370203c11_94a862fa
6bb963ef351d gcr.io/google_containers/hyperkube-amd64:v1.2.2 "/hyperkube proxy --m" 5 seconds ago Up 4 seconds k8s_kube-proxy.9a9f4853_k8s-proxy-sw-ansible01_default_5e5303a9d49035e9fad52bfc4c88edc8_6098241c
311e2788de45 gcr.io/google_containers/pause:2.0 "/pause" 5 seconds ago Up 4 seconds k8s_POD.6059dfa2_k8s-master-sw-ansible01_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_79e4e3e8
3b3cf3ada645 gcr.io/google_containers/pause:2.0 "/pause" 5 seconds ago Up 4 seconds k8s_POD.6059dfa2_k8s-etcd-sw-ansible01_default_1df6a8b4d6e129d5ed8840e370203c11_9eb869b9
aa7efd2154fb gcr.io/google_containers/pause:2.0 "/pause" 5 seconds ago Up 5 seconds k8s_POD.6059dfa2_k8s-proxy-sw-ansible01_default_5e5303a9d49035e9fad52bfc4c88edc8_b66baa5f
c380b4a9004e gcr.io/google_containers/hyperkube-amd64:v1.2.2 "/hyperkube kubelet -" 12 seconds ago Up 12 seconds kubelet

Info

  • Docker version: 1.10.3

  • Kubernetes version: 1.2.2

  • OS: Ubuntu 14.04

Docker run command

docker run \
  --volume=/:/rootfs:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:rw \
  --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
  --volume=/var/run:/var/run:rw \
  --net=host \
  --pid=host \
  --privileged=true \
  --name=kubelet \
  -d \
  gcr.io/google_containers/hyperkube-amd64:v1.2.2 \
  /hyperkube kubelet \
    --containerized \
    --hostname-override="172.20.34.112" \
    --address="0.0.0.0" \
    --api-servers=http://localhost:8080 \
    --config=/etc/kubernetes/manifests \
    --cluster-dns=10.0.0.10 \
    --cluster-domain=cluster.local \
    --allow-privileged=true \
    --v=2

kubelet container logs

I0422 11:04:45.158370 541 plugins.go:56] Registering credential provider: .dockercfg
I0422 11:05:25.199632 541 plugins.go:291] Loaded volume plugin "kubernetes.io/aws-ebs"
I0422 11:05:25.199788 541 plugins.go:291] Loaded volume plugin "kubernetes.io/empty-dir"
I0422 11:05:25.199863 541 plugins.go:291] Loaded volume plugin "kubernetes.io/gce-pd"
I0422 11:05:25.199903 541 plugins.go:291] Loaded volume plugin "kubernetes.io/git-repo"
I0422 11:05:25.199948 541 plugins.go:291] Loaded volume plugin "kubernetes.io/host-path"
I0422 11:05:25.199982 541 plugins.go:291] Loaded volume plugin "kubernetes.io/nfs"
I0422 11:05:25.200023 541 plugins.go:291] Loaded volume plugin "kubernetes.io/secret"
I0422 11:05:25.200059 541 plugins.go:291] Loaded volume plugin "kubernetes.io/iscsi"
I0422 11:05:25.200115 541 plugins.go:291] Loaded volume plugin "kubernetes.io/glusterfs"
I0422 11:05:25.200170 541 plugins.go:291] Loaded volume plugin "kubernetes.io/persistent-claim"
I0422 11:05:25.200205 541 plugins.go:291] Loaded volume plugin "kubernetes.io/rbd"
I0422 11:05:25.200249 541 plugins.go:291] Loaded volume plugin "kubernetes.io/cinder"
I0422 11:05:25.200289 541 plugins.go:291] Loaded volume plugin "kubernetes.io/cephfs"
I0422 11:05:25.200340 541 plugins.go:291] Loaded volume plugin "kubernetes.io/downward-api"
I0422 11:05:25.200382 541 plugins.go:291] Loaded volume plugin "kubernetes.io/fc"
I0422 11:05:25.200430 541 plugins.go:291] Loaded volume plugin "kubernetes.io/flocker"
I0422 11:05:25.200471 541 plugins.go:291] Loaded volume plugin "kubernetes.io/azure-file"
I0422 11:05:25.200519 541 plugins.go:291] Loaded volume plugin "kubernetes.io/configmap"
I0422 11:05:25.200601 541 server.go:645] Started kubelet
E0422 11:05:25.200796 541 kubelet.go:956] Image garbage collection failed: unable to find data for container /
I0422 11:05:25.200843 541 server.go:126] Starting to listen read-only on 0.0.0.0:10255
I0422 11:05:25.201531 541 server.go:109] Starting to listen on 0.0.0.0:10250
E0422 11:05:25.201684 541 event.go:202] Unable to write event: 'Post http://localhost:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: connection refused' (may retry after sleeping)
I0422 11:05:25.206656 541 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0422 11:05:25.206714 541 manager.go:123] Starting to sync pod status with apiserver
I0422 11:05:25.206888 541 kubelet.go:2356] Starting kubelet main sync loop.
I0422 11:05:25.207036 541 kubelet.go:2365] skipping pod synchronization - [container runtime is down]
I0422 11:05:25.333829 541 factory.go:233] Registering Docker factory
I0422 11:05:25.336920 541 factory.go:97] Registering Raw factory
I0422 11:05:25.392065 541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
I0422 11:05:25.392148 541 kubelet.go:1134] Attempting to register node 172.20.34.112
I0422 11:05:25.398401 541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:25.492441 541 manager.go:1003] Started watching for new ooms in manager
I0422 11:05:25.493365 541 oomparser.go:182] oomparser using systemd
I0422 11:05:25.495129 541 manager.go:256] Starting recovery of all containers
I0422 11:05:25.583462 541 manager.go:261] Recovery completed
I0422 11:05:25.622022 541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
I0422 11:05:25.622065 541 kubelet.go:1134] Attempting to register node 172.20.34.112
I0422 11:05:25.622485 541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:26.038631 541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
I0422 11:05:26.038753 541 kubelet.go:1134] Attempting to register node 172.20.34.112
I0422 11:05:26.039300 541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:26.852863 541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
I0422 11:05:26.852892 541 kubelet.go:1134] Attempting to register node 172.20.34.112
I0422 11:05:26.853320 541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:28.468911 541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
I0422 11:05:28.468937 541 kubelet.go:1134] Attempting to register node 172.20.34.112
I0422 11:05:28.469355 541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:30.207357 541 kubelet.go:2388] SyncLoop (ADD, "file"): "k8s-etcd-172.20.34.112_default(1df6a8b4d6e129d5ed8840e370203c11), k8s-proxy-172.20.34.112_default(5e5303a9d49035e9fad52bfc4c88edc8), k8s-master-172.20.34.112_default(4c6ab43ac4ee970e1f563d76ab3d3ec9)"
E0422 11:05:30.207416 541 kubelet.go:2307] error getting node: node '172.20.34.112' is not in cache
E0422 11:05:30.207465 541 kubelet.go:2307] error getting node: node '172.20.34.112' is not in cache
E0422 11:05:30.207505 541 kubelet.go:2307] error getting node: node '172.20.34.112' is not in cache
E0422 11:05:30.209316 541 kubelet.go:1764] Failed creating a mirror pod for "k8s-proxy-172.20.34.112_default(5e5303a9d49035e9fad52bfc4c88edc8)": Post http://localhost:8080/api/v1/namespaces/default/pods: dial tcp 127.0.0.1:8080: connection refused
E0422 11:05:30.209332 541 kubelet.go:1764] Failed creating a mirror pod for "k8s-etcd-172.20.34.112_default(1df6a8b4d6e129d5ed8840e370203c11)": Post http://localhost:8080/api/v1/namespaces/default/pods: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:30.209396 541 manager.go:1688] Need to restart pod infra container for "k8s-proxy-172.20.34.112_default(5e5303a9d49035e9fad52bfc4c88edc8)" because it is not found
W0422 11:05:30.209828 541 manager.go:408] Failed to update status for pod "_()": Get http://localhost:8080/api/v1/namespaces/default/pods/k8s-etcd-172.20.34.112: dial tcp 127.0.0.1:8080: connection refused
E0422 11:05:30.209899 541 kubelet.go:1764] Failed creating a mirror pod for "k8s-master-172.20.34.112_default(4c6ab43ac4ee970e1f563d76ab3d3ec9)": Post http://localhost:8080/api/v1/namespaces/default/pods: dial tcp 127.0.0.1:8080: connection refused
W0422 11:05:30.212690 541 manager.go:408] Failed to update status for pod "_()": Get http://localhost:8080/api/v1/namespaces/default/pods/k8s-proxy-172.20.34.112: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:30.214297 541 manager.go:1688] Need to restart pod infra container for "k8s-master-172.20.34.112_default(4c6ab43ac4ee970e1f563d76ab3d3ec9)" because it is not found
W0422 11:05:30.214935 541 manager.go:408] Failed to update status for pod "_()": Get http://localhost:8080/api/v1/namespaces/default/pods/k8s-master-172.20.34.112: dial tcp 127.0.0.1:8080: connection refused
I0422 11:05:30.220596 541 manager.go:1688] Need to restart pod infra container for "k8s-etcd-172.20.34.112_default(1df6a8b4d6e129d5ed8840e370203c11)" because it is not found
I0422 11:05:31.693419 541 kubelet.go:2754] Recording NodeHasSufficientDisk event message for node 172.20.34.112
I0422 11:05:31.693456 541 kubelet.go:1134] Attempting to register node 172.20.34.112
I0422 11:05:31.694191 541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused

apiserver container (exited) logs

I0425 13:18:55.516154 1 genericapiserver.go:82] Adding storage destination for group batch
W0425 13:18:55.516177 1 server.go:383] No RSA key provided, service account token authentication disabled
F0425 13:18:55.516185 1 server.go:410] Invalid Authentication Config: open /srv/kubernetes/basic_auth.csv: no such file or directory

2 Answers


I've reproduced your issue before, and I've also successfully run the kubelet container a couple of times.

Here is the exact command I am running when it succeeds:

export K8S_VERSION=v1.2.2
docker run \
  --volume=/:/rootfs:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:rw \
  --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
  --volume=/var/run:/var/run:rw \
  --net=host \
  --pid=host \
  --privileged=true \
  --name=kubelet \
  -d \
  gcr.io/google_containers/hyperkube-amd64:${K8S_VERSION} \
  /hyperkube kubelet \
    --containerized \
    --hostname-override="127.0.0.1" \
    --address="0.0.0.0" \
    --api-servers=http://localhost:8080 \
    --config=/etc/kubernetes/manifests \
    --allow-privileged=true --v=2

I removed these two settings from the tutorial's suggested command because DNS wasn't needed in my case: --cluster-dns=10.0.0.10 --cluster-domain=cluster.local

Also, before starting the kubelet container, I started an SSH port forward into the docker-machine VM in the background, using this command:

docker-machine ssh `docker-machine active` -f -N -L "8080:localhost:8080"

I also did not make any changes to SSL certificates.

I am able to run the kubelet container with both K8S_VERSION=v1.2.2 and K8S_VERSION=v1.2.3.

On a successful run, I observe that all of the containers are "Up"; none have "Exited":

$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
42e6d973f624 gcr.io/google_containers/hyperkube-amd64:v1.2.2 "/hyperkube apiserver" About an hour ago Up About an hour k8s_apiserver.78ec1de_k8s-master-127.0.0.1_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_5d260d3c
135c020f14b4 gcr.io/google_containers/hyperkube-amd64:v1.2.2 "/hyperkube controlle" About an hour ago Up About an hour k8s_controller-manager.70414b65_k8s-master-127.0.0.1_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_9b338f27
873656c913fd gcr.io/google_containers/hyperkube-amd64:v1.2.2 "/setup-files.sh IP:1" About an hour ago Up About an hour k8s_setup.e5aa3216_k8s-master-127.0.0.1_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_ff89fc7c
8b12f5f20e8f gcr.io/google_containers/hyperkube-amd64:v1.2.2 "/hyperkube scheduler" About an hour ago Up About an hour k8s_scheduler.fc12fcbe_k8s-master-127.0.0.1_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_ea90af75
93d9b2387b2e gcr.io/google_containers/etcd:2.2.1 "/usr/local/bin/etcd " About an hour ago Up About an hour k8s_etcd.7e452b0b_k8s-etcd-127.0.0.1_default_1df6a8b4d6e129d5ed8840e370203c11_d66f84f0
f6e45af93ee9 gcr.io/google_containers/hyperkube-amd64:v1.2.2 "/hyperkube proxy --m" About an hour ago Up About an hour k8s_kube-proxy.9a9f4853_k8s-proxy-127.0.0.1_default_5e5303a9d49035e9fad52bfc4c88edc8_b0084efc
f6748442f2d1 gcr.io/google_containers/pause:2.0 "/pause" About an hour ago Up About an hour k8s_POD.6059dfa2_k8s-master-127.0.0.1_default_4c6ab43ac4ee970e1f563d76ab3d3ec9_f4758f9b
d515c10910c4 gcr.io/google_containers/pause:2.0 "/pause" About an hour ago Up About an hour k8s_POD.6059dfa2_k8s-etcd-127.0.0.1_default_1df6a8b4d6e129d5ed8840e370203c11_3248c1d6
958f4865df9f gcr.io/google_containers/pause:2.0 "/pause" About an hour ago Up About an hour k8s_POD.6059dfa2_k8s-proxy-127.0.0.1_default_5e5303a9d49035e9fad52bfc4c88edc8_3850b11e
2611ee951476 gcr.io/google_containers/hyperkube-amd64:v1.2.2 "/hyperkube kubelet -" About an hour ago Up About an hour kubelet

On a successful run, I also see log output similar to yours when I run docker logs kubelet. In particular, I see:

Unable to register 127.0.0.1 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused

But, eventually, it works:

$ kubectl -s http://localhost:8080 cluster-info
Kubernetes master is running at http://localhost:8080
$ kubectl get nodes
NAME             STATUS     AGE
127.0.0.1        Ready      1h
192.168.99.100   NotReady   1h
localhost        NotReady   1h

Other tips:

  • You might need to wait a little bit for the API server to start up. For example, this guy uses a while loop:

    until kubectl -s http://localhost:8080 cluster-info &> /dev/null; do sleep 1; done

  • On Mac OS X, I've noticed the Docker VM can become unstable whenever my wireless network changes, or when I suspend/resume my laptop. I can usually resolve such issues with a docker-machine restart.

  • When experimenting with the kubelet, I often want to stop the kubelet container and stop/remove all the containers in my Docker VM. I do that by running:

    docker stop kubelet && docker rm -f $(docker ps -aq)
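The bare wait loop above will hang forever if the apiserver never comes up. A sketch of the same idea with a timeout (the wait_for helper name and the 120-second budget are my own choices, not from the tutorial):

```shell
#!/bin/sh
# Poll a command until it succeeds or a timeout expires.
# Usage: wait_for <timeout-seconds> <command> [args...]
wait_for() {
  timeout=$1; shift
  start=$(date +%s)
  until "$@" > /dev/null 2>&1; do
    now=$(date +%s)
    if [ $((now - start)) -ge "$timeout" ]; then
      echo "timed out after ${timeout}s waiting for: $*" >&2
      return 1
    fi
    sleep 1
  done
}

# Example: block for up to two minutes until the API server answers.
# wait_for 120 kubectl -s http://localhost:8080 cluster-info
```

Unlike the one-liner, this returns a non-zero exit status on timeout, so it composes with `&&`/`||` in scripts.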

Info about my setup (OS X El Capitan 10.11.2):

$ docker --version
Docker version 1.10.3, build 20f81dd
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"}

[I'm not a kubernetes expert - just following my nose here].

kubelet's failure is a downstream symptom of port 8080 being refused, which you noted at the beginning of your question. It's not where you should focus.

Note the following line in the logs you showed us:

I0422 11:05:28.469355 541 kubelet.go:1137] Unable to register 172.20.34.112 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused 

So, the kubelet is trying to contact the apiserver and getting connection refused. That's not surprising, given that, as you noted, the apiserver container has exited.
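Since the failing containers have already exited, running docker logs on each of them is the quickest way to see why. A small convenience of my own (not from any tutorial) for pulling the IDs of exited containers out of a docker ps -a listing:

```shell
# Print the container ID (always the first field) of every row in
# `docker ps -a` output whose STATUS column says "Exited (...)".
exited_ids() {
  awk '/Exited \(/ { print $1 }'
}

# Typical use: dump the last few log lines of each exited container.
# docker ps -a | exited_ids | xargs -n1 docker logs --tail=20
```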

The log lines you show for the apiserver have it complaining about missing authentication files: no RSA key for service-account tokens, then a fatal error because /srv/kubernetes/basic_auth.csv doesn't exist. The certificates are normally in /var/run/kubernetes (noted here), which falls within the /var/run volume that's set up in the docker command for running Kubernetes in your tutorial. I'd look closely at that volume specification to see if you've made any mistakes, and check whether the certificates are actually in there as expected.

There are a few pointers at https://github.com/kubernetes/kubernetes/issues/11000 which might be useful for figuring out what's going wrong with your certs, including a script from devurandom for creating the certs if that's what's needed.
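If the files really are missing, a throwaway set can be generated by hand. This is only an illustrative sketch with openssl: the directory, filenames, CN, and the admin/admin basic-auth entry are my own placeholders, and the script linked in that issue remains the authoritative recipe.

```shell
#!/bin/sh
# Generate a self-signed serving cert/key, an RSA key for signing
# service-account tokens, and a basic-auth file, in a local directory
# you can then copy (or bind-mount) to where the apiserver expects them.
CERT_DIR=./kube-certs
mkdir -p "$CERT_DIR"

# Self-signed serving certificate, valid for one year.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$CERT_DIR/apiserver.key" \
  -out "$CERT_DIR/apiserver.crt" \
  -days 365 -subj "/CN=kubernetes-master"

# RSA key used to sign service-account tokens; this is what silences the
# "No RSA key provided" warning in the apiserver log.
openssl genrsa -out "$CERT_DIR/serviceaccount.key" 2048

# The apiserver also aborts without basic_auth.csv
# (format: password,user,uid); create a throwaway entry.
echo 'admin,admin,admin' > "$CERT_DIR/basic_auth.csv"
```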
