460704 (5) [Avatar] Offline
I run the example with:

k run kubia --image=luksa/kubia --port=8080 --generator=run/v1
k expose rc kubia --type=LoadBalancer --name kubia-http

It seems the kubia example cannot be run on a kubeadm cluster. The service's EXTERNAL-IP stays pending and never goes away:

kubia-http LoadBalancer <pending> 8080:32309/TCP 32m

Or am I missing something?
Marko Lukša (67) [Avatar] Offline
Yes, there's no support for LoadBalancer services on clusters created with kubeadm. I believe there should be a callout somewhere in the chapter saying that LoadBalancer services don't work in Minikube. I'll mention kubeadm clusters there as well, since I'm just revising the chapters one last time before the book goes into print.

Thanks for reporting.
460704 (5) [Avatar] Offline
Thanks for the answer.
Marko Lukša (67) [Avatar] Offline
I forgot to mention you can access the service through the node port, which is 32309 in your case.

The service should be available at http://<cluster_node_ip>:32309, where cluster_node_ip is the IP of any of your cluster nodes.
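For example, one way to find a node IP and hit the node port (the `kubia-http` service name is from the commands above; the placeholder must be replaced with an actual node address):

```shell
# List the cluster nodes; the INTERNAL-IP column shows each node's IP
kubectl get nodes -o wide

# Confirm the node port assigned to the service (the number after the colon)
kubectl get svc kubia-http

# Request the app through any node's IP and the node port (32309 in your case)
curl http://<cluster_node_ip>:32309
```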
460704 (5) [Avatar] Offline
If I wanted to run the kubia example on my own cluster of VMs, what do I have to install on those VMs? What do you recommend? Of course I need load balancing. Actually I need the whole shebang, production ready. Kubespray maybe? Are there any Ansible scripts out there, so that I can automate the installation?
Marko Lukša (67) [Avatar] Offline
Are you talking about the cluster being able to expose LoadBalancer services? Or what to use to install Kubernetes?

For LoadBalancer services to work in a custom cluster, you'd need to implement a controller, which provisions the load balancer in whichever way you want (you may be able to find existing solutions for this).

As for installing Kubernetes, use kubeadm or kops.
460704 (5) [Avatar] Offline
I have installed Kubernetes with kubeadm, so obviously what I need is to get "LoadBalancer services to work in a custom cluster", but I'm not sure I understand what I have to do.
Marko Lukša (67) [Avatar] Offline
Don't worry about the load balancer stuff. It's a relatively unimportant detail. When using a kubeadm- or Minikube-created cluster, you can access your application through any node's IP and the service's node port (it's the port shown after the colon in the list of services you get with kubectl get services).

The reason the book asks you to create a LoadBalancer service is purely because the example was originally meant to be run on Google Container Engine, and exposing it through a LoadBalancer is by far the simplest method there. On local clusters, using the worker node IP and node port of the service is the simplest.
460704 (5) [Avatar] Offline
I'll obey and not worry about load balancing (for now), because there are tons of things that I have to do beforehand. Thank you for your kind replies and I wish you good luck with this excellent book.