Setting up a personal Kube cluster
Feb 22, 2017
11 minute read

In this post I’m going to walk through setting up a single-node Kubernetes cluster for personal use. I’ll start from scratch on a newly-provisioned host and end up with a Kubernetes “cluster” with Ingress capabilities. The cluster this process makes is very similar to the one I run at home (see “Extra considerations for a physical cluster” below).

Why would you want this?

A single-node “cluster” (I’m gonna dispense with the scare-quotes from here on) sounds like a heck of a lot of setup when you could just hack together some systemd units and call it a day. Maybe that’s all you need! Some of the reasons I like having this setup are that it’s:

  • a place to experiment with Kubernetes outside of work
  • a way to deploy personal projects to “real” TLS-encrypted URLs with minimal overhead
  • the starting point for a lab to experiment with distributed systems
  • an environment (outside of work) where I can try out automated infrastructure patterns

Requirements

To build a cluster, you’ll need:

  • a server running Linux: This post’s examples are from a Digital Ocean droplet with 4GB of RAM running Ubuntu 16.04.
$ doctl compute droplet list
Name                    Public IPv4     Memory  VCPUs   Image                   Status
ubuntu-2gb-nyc3-01      45.55.235.146   4096    2       Ubuntu 16.04.1 x64      active

At home, I’m running the same OS on a physical server. You don’t have to use a totally-fresh server, but it’ll make things easier - we will be taking over ports 80 and 443.

  • a domain name: You’ll get the most benefit if you can set a wildcard DNS record pointing to your server. I’ll be using burnitdown.biz as the domain for my cluster throughout these steps, where I’ve set *.burnitdown.biz to point to my droplet.
$ dig kube.burnitdown.biz +short
45.55.235.146
$ dig somerandomsubdomain.burnitdown.biz +short
45.55.235.146

If you can’t set wildcard records you’ll get through these steps fine with only kube.burnitdown.biz, evepraisal.burnitdown.biz, and irc.burnitdown.biz pointing to your server.

  • an email address: You’ll use this as the contact for your TLS certificates from Let’s Encrypt.

Setup

I’ll be using kubeadm, a tool from the Kubernetes project for bootstrapping clusters on arbitrary Linux boxes (as opposed to tools like kops targeted at specific clouds). For the kubeadm portions of this article I’ll be leaning heavily on its getting started guide.

These instructions will show how to get the relevant packages and Kubernetes manifests directly from their sources. I have, however, collected the manifests used into a GitHub repo in case you want to see them all in one place.

1) Install Kubernetes packages: the kubeadm guide’s step 1 includes details for how to set up a Google package mirror that includes all the packages we need:

$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
OK
$ cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
> deb http://apt.kubernetes.io/ kubernetes-xenial main
> EOF
$ apt-get update
# ...
$ apt-get install -y docker.io
# ...
$ apt-get install -y kubelet kubeadm kubectl kubernetes-cni
# ...

2) Generate cluster configs: the kubeadm init command generates the cluster’s various configuration files and starts up its services. Before we jump in and run it without options, it’s important to understand some implications of its defaults:

  • --service-cidr=10.96.0.0/12 You should make sure this IP range doesn’t overlap with anything routable from your host - otherwise you might end up either making your service network unreachable or masking other machines you need to reach! 10.96.0.0/12 is less likely to matter than the default for the pod network - stay tuned.
$ ip route
default via 45.55.192.1 dev eth0 onlink
10.17.0.0/16 dev eth0  proto kernel  scope link  src 10.17.0.5
45.55.192.0/18 dev eth0  proto kernel  scope link  src 45.55.235.146
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1 linkdown

My droplet’s network doesn’t overlap with 10.96.0.0/12 - its internal interface is on 10.17.0.0/16, and it additionally listens on a (non-overlapping) public IP. I’ll leave this setting at the default.

  • --api-external-dns-names This option lets us specify additional names for the certificate kubeadm init generates for the kube-apiserver service. By default, it tries to detect the hostname and IP addresses and include them as subject alternative names. That is probably fine on my droplet because kubeadm can discover the public IP, but things are trickier on my home cluster (see “Extra considerations for a physical cluster” below). I’ll pass --api-external-dns-names=kube.burnitdown.biz for consistency with that setup.

That’s all I’m worried about here. If your setup is meaningfully different from mine, the reference is handy for learning about other options and their implications.
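The overlap check for the --service-cidr question above can also be done programmatically. Here’s a quick sketch using Python’s standard ipaddress module - the networks are copied from my ip route output; substitute your own:

```python
import ipaddress

# kubeadm's default service CIDR
service_cidr = ipaddress.ip_network("10.96.0.0/12")

# networks already routable from the host, taken from `ip route` above
host_networks = [
    ipaddress.ip_network("10.17.0.0/16"),    # droplet-internal interface
    ipaddress.ip_network("45.55.192.0/18"),  # public network
    ipaddress.ip_network("172.17.0.0/16"),   # docker0 bridge
]

for net in host_networks:
    status = "conflict" if service_cidr.overlaps(net) else "ok"
    print(f"{status}: {service_cidr} vs {net}")
```

On my droplet all three come back ok. The same check works for the pod network CIDR we’ll pick in step 3.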

My run of kubeadm init looks like:

$ kubeadm init --api-external-dns-names=kube.burnitdown.biz
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[init] Using Kubernetes version: v1.5.3
[tokens] Generated token: "5bd33a.90b60427802097f5"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 38.063960 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 4.507374 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 14.005653 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=5bd33a.90b60427802097f5 45.55.235.146

# note: yes this is the real token, but no, this cluster doesn't exist anymore

kubeadm has a bunch of features for multi-node clusters (like the join token in the output above!). One of those “features” is that, by default, Kubernetes won’t schedule non-system pods on the master node. Since this cluster is only going to be one node, I’ll remove the node taint that causes it to shirk duty:

$ kubectl taint nodes --all dedicated-
node "ubuntu-2gb-nyc3-01" tainted

3) Set up a pod network: kubeadm has no default pod network - we must choose from the CNI options. There’s a handy list of the options on the Kubernetes website, and a lot of reasons why we might pick one over another (check out Julia Evans’s awesome post). I like Calico, and this is my blog, so that’s what we’ll use for this cluster. Project Calico provides handy kubeadm-targeted documentation. We could just blindly apply the manifest they provide, but the defaults have implications.

The pod network that their manifest creates uses the CIDR 192.168.0.0/16. For the droplet, this is totally fine, but on my home network, that’s the network my WiFi router uses! Edit the manifest to use a range that works for you if the default one doesn’t.
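In the manifest version I used, the pool CIDR is set as an environment variable on the calico-node container. The stanza to edit looks roughly like this - double-check your downloaded copy, since the layout varies between Calico versions:

```yaml
# excerpt from calico.yaml (not the full manifest): the calico-node
# container's environment controls the pod network pool CIDR
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"  # change this if it overlaps your network
```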

On the droplet, this applies like so:

$ wget -q http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml
$ kubectl apply -f calico.yaml
configmap "calico-config" created
daemonset "calico-etcd" created
service "calico-etcd" created
daemonset "calico-node" created
deployment "calico-policy-controller" created
job "configure-calico" created

4) Breathe! That about wraps up the kubeadm portion! If all has gone well, you’ve got a single-node cluster running the basic set of Kubernetes controllers. Mine seems to be doing well:

$ kubectl get pods --namespace kube-system
NAME                                         READY     STATUS    RESTARTS   AGE
calico-etcd-l41vm                            1/1       Running   0          1m
calico-node-208wc                            2/2       Running   0          1m
calico-policy-controller-917753764-5l3dh     1/1       Running   0          1m
dummy-2088944543-x56wp                       1/1       Running   0          3m
etcd-ubuntu-2gb-nyc3-01                      1/1       Running   0          2m
kube-apiserver-ubuntu-2gb-nyc3-01            1/1       Running   0          3m
kube-controller-manager-ubuntu-2gb-nyc3-01   1/1       Running   0          3m
kube-discovery-1769846148-rgm2z              1/1       Running   0          3m
kube-dns-2924299975-s3h9l                    4/4       Running   0          2m
kube-proxy-vtnvl                             1/1       Running   0          2m
kube-scheduler-ubuntu-2gb-nyc3-01            1/1       Running   0          2m

Super rad! Now, on to setting up an Ingress Controller so the cluster can easily expose services to the Internet.

5) Install nginx ingress: in order to expose web services on ports 80 and 443 with publicly resolvable domain names, we need some sort of load balancer that is aware of services running on the cluster and able to route between them. Conveniently, that’s exactly what Ingress Controllers do. I’m going to use the nginx-based ingress controller, but if your host is running somewhere with “real” load balancers (like AWS ELBs) it might make sense to check out other options.

The Ingress project has documentation for how to set up on a kubeadm cluster - I’ll follow that:

$ wget -q https://raw.githubusercontent.com/kubernetes/ingress/master/examples/deployment/nginx/kubeadm/nginx-ingress-controller.yaml
$ kubectl apply -f nginx-ingress-controller.yaml
deployment "default-http-backend" created
service "default-http-backend" created
deployment "nginx-ingress-controller" created

6) Install kube-lego: this nifty add-on to nginx-ingress-controller watches for new Ingresses and fetches Let’s Encrypt certs for the relevant domains! It makes having proper TLS super easy! The project provides manifests we can apply here, but you will need to put your own email address in kube-lego-configmap.yaml before you apply it.

$ wget -q -O kube-lego-namespace.yaml https://raw.githubusercontent.com/jetstack/kube-lego/master/examples/nginx/lego/00-namespace.yaml
$ wget -q -O kube-lego-configmap.yaml https://raw.githubusercontent.com/jetstack/kube-lego/master/examples/nginx/lego/configmap.yaml
$ wget -q -O kube-lego-deployment.yaml https://raw.githubusercontent.com/jetstack/kube-lego/master/examples/nginx/lego/deployment.yaml

# put my email address in the right place...
$ vim kube-lego-configmap.yaml

# ...then apply the manifests
$ kubectl apply -f kube-lego-namespace.yaml
namespace "kube-lego" created
$ kubectl apply -f kube-lego-configmap.yaml
configmap "kube-lego" created
$ kubectl apply -f kube-lego-deployment.yaml
deployment "kube-lego" created
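For reference, after editing, my kube-lego-configmap.yaml looked roughly like this - the key names are from the example manifest at the time (check your copy), and the email address here is a placeholder:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-lego
  namespace: kube-lego
data:
  # the contact email Let's Encrypt will associate with your certs
  lego.email: "you@example.com"
  # the production ACME endpoint; there's also a staging one for testing
  lego.url: "https://acme-v01.api.letsencrypt.org/directory"
```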

7) Run some services! We should be all set! To put the cluster through its paces, I’ve set up a few example applications.

The first one is Glowing Bear, a web frontend to the popular IRC client WeeChat. It’s entirely made of static assets, so my manifest just downloads them and serves them from an nginx pod.
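What makes the cert magic work is the manifest’s Ingress resource. A trimmed-down sketch of its shape (the service name here is illustrative rather than copied from the real manifest; the kubernetes.io/tls-acme annotation is what kube-lego watches for):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: glowing-bear
  annotations:
    # tells kube-lego to request a Let's Encrypt cert for the hosts below
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - irc.burnitdown.biz
    secretName: glowing-bear-tls  # kube-lego stores the fetched cert here
  rules:
  - host: irc.burnitdown.biz
    http:
      paths:
      - path: /
        backend:
          serviceName: glowing-bear  # illustrative service name
          servicePort: 80
```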

$ wget -q https://raw.githubusercontent.com/sophaskins/setting-up-a-personal-kube-cluster/master/glowing-bear.yaml
$ kubectl apply -f glowing-bear.yaml

# wait a couple of seconds for kube-lego to grab the cert

# check it out at https://irc.burnitdown.biz
$ curl https://irc.burnitdown.biz
<!DOCTYPE html>
<html ng-app="weechat" ng-cloak>
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=Edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
    <meta name="apple-mobile-web-app-capable" content="yes">
    <meta name="mobile-web-app-capable" content="yes">
# ...

In a matter of seconds, I deployed a web service, put it behind an (admittedly somewhat lowbrow) load balancer, and automatically fetched a (browser-valid!) TLS certificate. The future is now!

We can run more than just static JS apps - my second example is a Kubernetes deployment of the super-useful EVE Online tool Evepraisal. It’s a Python Flask app that parses lists of items, looks up their prices in the popular EVE market systems, and tells you how much the list is worth. I put together a Docker wrapper and Kubernetes manifests for it:

$ wget -q https://raw.githubusercontent.com/sophaskins/setting-up-a-personal-kube-cluster/master/evepraisal.yaml
$ kubectl apply -f evepraisal.yaml

# wait a couple of seconds for kube-lego to grab the cert
# check it out at https://evepraisal.burnitdown.biz

Wicked rad! Now I’ve verified that I can serve multiple different websites (with different certificates!) from the same IP. Pretty cool!
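For the curious, the Deployment portion of manifests like these takes roughly this shape in Kubernetes 1.5 - the image name and port are illustrative placeholders, not the real values from my repo:

```yaml
apiVersion: extensions/v1beta1  # Deployments lived here before apps/v1
kind: Deployment
metadata:
  name: evepraisal
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: evepraisal  # a Service selects pods by this label
    spec:
      containers:
      - name: evepraisal
        image: example/evepraisal:latest  # illustrative image name
        ports:
        - containerPort: 8080             # illustrative port
```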

Extra considerations for a physical cluster

I run pretty much this exact setup at home on a small PC. The main differences come from my PC not having an internet-routable IP address - it connects via NAT to my cable modem. I’ve port-forwarded several ports (including 80 and 443) from my external IP to my server, so for the most part things work transparently. One place they don’t is kubeadm’s discovery of the IPs to generate certificates for - I like to use kubectl from hosts other than the server itself.

This gets at why I made sure to pass --api-external-dns-names=kube.burnitdown.biz to kubeadm earlier. This option adds extra names (as subject alternative names) to the TLS certificate generated for kube-apiserver - it needs to be valid for whatever name I’m accessing it under via kubectl. Since kube.burnitdown.biz is a valid name for the server (even if it’s sitting behind NAT, like in my home cluster), kubectl will be happy with it.
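Concretely, to use kubectl from another machine, I copy /etc/kubernetes/admin.conf off the server and point its server field at that name. The relevant excerpt of the resulting kubeconfig looks roughly like this (6443 is kubeadm’s default secure port - check the server line in your own admin.conf):

```yaml
# excerpt of a kubeconfig adapted from the server's admin.conf
clusters:
- cluster:
    certificate-authority-data: <base64 CA data, copied from admin.conf>
    server: https://kube.burnitdown.biz:6443
  name: kubernetes
```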

Also, as I mentioned in step 3 “Set up a pod network”, the default pod CIDR Calico uses is 192.168.0.0/16, which very much interferes with my home network. Changing it to a non-overlapping RFC 1918 subnet worked great!

Potential pitfalls

Some issues I ran into while making this post (and that maybe you’ll hit too!) are:

  • Host resolv.conf making kube-dns confused - I wrote another blog post about this! The gist is, if your /etc/resolv.conf on the host includes 127.0.0.1, you might have trouble.
  • Calico’s etcd doesn’t clean up after itself - if you’re iterating a lot on cluster configuration and kubeadm options, you might use kubeadm reset to blow away your current configuration and start again. If you do, note that the Calico configuration I use runs its own etcd on Kubernetes, which stores its data in the host’s /var/etcd. While kubeadm reset cleans up after the Kubernetes etcd cluster (which stores data in the host’s /var/lib/etcd), it won’t clean up the Calico one - you have to do that yourself.
  • The “sock shop” example app uses a lot of RAM: the kubeadm getting started guide suggests a sample application to run on your cluster, the Sock Shop (a basic ecommerce site that sells socks). While this is a pretty neat example app with a lot of depth, it consumes a lot of RAM. I was initially going to use it in step 7 “Run some services!” but it ran my droplet out of memory! I ran it on my (much beefier) home machine instead - simply starting it up seems to take 4.5GB of RAM.
