[Kubernetes Data Platform][Part 2.2]: Creating highly available Kubernetes cluster with k0s
In this article, I will guide you through the steps to install a Highly Available (HA) Kubernetes cluster on Multipass virtual machines using k0s.
k0s is distributed as a single binary with zero host OS dependencies besides the kernel. It runs on any Linux distribution without additional software packages or configuration, and security or performance fixes land directly in the k0s distribution, which makes it straightforward to keep clusters up to date and secure.
The architecture is as follows: one k0s-loadbalancer VM running HAProxy in front of three controller nodes (k0s-master-1 through k0s-master-3), with three worker nodes (k0s-worker-1 through k0s-worker-3) behind them.
DEPLOYMENT STEPS
- Initialize Ubuntu Virtual Machines using Multipass (details in multipass_create_instances.sh). This creates 7 VMs (k0s-loadbalancer, k0s-master-1, k0s-master-2, k0s-master-3, k0s-worker-1, k0s-worker-2, k0s-worker-3) with randomly assigned IPs.
- Install k0sctl. k0sctl is a bootstrapping and management tool for k0s clusters.
- Edit the k0sctl.yaml file. Replace the placeholder IPs with the actual IPs from step 1.
- Install and configure HAProxy on the k0s-loadbalancer VM. This will be the cluster's load balancer.
- Install k0s using k0sctl.
- Install basic Kubernetes components.
- Install testing tools.
- Destroy the cluster.
HANDS-ON STEP
Reference Repository: https://github.com/viethqb/data-platform-notes/tree/main/kubernetes/k0s
IMPORTANT: Convention for Bash Scripts
Throughout this series, the following convention applies to Bash scripts:
Commands in the form “> command” should be executed on the local laptop.
Commands in the form “root@k0s-worker-1 command” should be executed on the k0s-worker-1 VM as the root user.
1. Initialize Ubuntu Virtual Machines using Multipass
multipass_create_instances.sh
#!/usr/bin/env bash
if ! command -v multipass &> /dev/null
then
  echo "multipass could not be found"
  echo "Check https://github.com/canonical/multipass on how to install it"
  exit 1
fi
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
# Defaults: 3 master VMs, 3 worker VMs, 1 load balancer VM
NUMBER_OF_MASTER=${1:-3}
NUMBER_OF_WORKER=${2:-3}
echo "Create cloud-init to import ssh key..."
# https://github.com/canonical/multipass/issues/965#issuecomment-591284180
cat <<EOF > "${DIR}"/multipass-cloud-init.yml
---
users:
  - name: k0s
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /usr/bin/bash
    ssh_authorized_keys:
      - $( cat "$(ls -1 ~/.ssh/id_*.pub | head -1)" )
# open-iscsi and nfs-common are prerequisites for Longhorn, installed later
packages:
  - open-iscsi
  - nfs-common
runcmd:
  # Pin public DNS resolvers in the netplan config, then apply it
  - cp /etc/netplan/50-cloud-init.yaml /etc/netplan/50-cloud-init.yaml.bak
  - sed -i -e '13i\ \ \ \ \ \ \ \ \ \ \ \ nameservers:' /etc/netplan/50-cloud-init.yaml
  - sed -i -e '14i\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ addresses: [8.8.8.8, 8.8.4.4]' /etc/netplan/50-cloud-init.yaml
  - netplan apply
  - resolvectl status | grep 'DNS Servers' -A2
  - apt-get update -y && DEBIAN_FRONTEND=noninteractive apt-get upgrade -y
  - apt-get -y autoremove
EOF
multipass launch jammy \
  --name k0s-loadbalancer \
  --cpus 1 \
  --memory 2048M \
  --disk 20G \
  --cloud-init "${DIR}"/multipass-cloud-init.yml
for ((i = 1 ; i <= "${NUMBER_OF_MASTER}" ; i++)); do
  echo "[${i}/${NUMBER_OF_MASTER}] Creating instance k0s-master-${i} with multipass..."
  multipass launch jammy \
    --name k0s-master-"${i}" \
    --cpus 1 \
    --memory 2048M \
    --disk 20G \
    --cloud-init "${DIR}"/multipass-cloud-init.yml
done
for ((i = 1 ; i <= "${NUMBER_OF_WORKER}" ; i++)); do
  echo "[${i}/${NUMBER_OF_WORKER}] Creating instance k0s-worker-${i} with multipass..."
  multipass launch jammy \
    --name k0s-worker-"${i}" \
    --cpus 1 \
    --memory 2048M \
    --disk 20G \
    --cloud-init "${DIR}"/multipass-cloud-init.yml
done
multipass list
Initialize Virtual Machines
> cd ~/Documents
> git clone https://github.com/viethqb/data-platform-notes.git
> cd data-platform-notes/kubernetes/k0s
> bash ./multipass_create_instances.sh
Therefore, we have 7 VMs:
k0s-loadbalancer: 10.59.145.165
k0s-master-1: 10.59.145.239
k0s-master-2: 10.59.145.79
k0s-master-3: 10.59.145.92
k0s-worker-1: 10.59.145.225
k0s-worker-2: 10.59.145.19
k0s-worker-3: 10.59.145.106
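These IPs are from my run; Multipass assigns addresses randomly, so yours will differ. You can re-check them at any time, and also wait for cloud-init to finish its package upgrades inside a VM before moving on:
> multipass list
> multipass info k0s-master-1 | grep IPv4
> multipass exec k0s-master-1 -- cloud-init status --wait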
2. Install k0sctl
> go install github.com/k0sproject/k0sctl@latest
> k0sctl version
# version: v0.17.8
# commit: unknown
3. Edit the k0sctl.yaml
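If you prefer to start from a generated skeleton rather than copying the file below, k0sctl can print one; you still need to fill in the host addresses, roles, and the load balancer IP to match your VMs:
> k0sctl init > k0sctl.yaml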
k0sctl.yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
    - ssh:
        address: <k0s-master-1-ip>
        user: k0s
        port: 22
        keyPath: ~/.ssh/id_rsa
      role: controller
    - ssh:
        address: <k0s-master-2-ip>
        user: k0s
        port: 22
        keyPath: ~/.ssh/id_rsa
      role: controller
    - ssh:
        address: <k0s-master-3-ip>
        user: k0s
        port: 22
        keyPath: ~/.ssh/id_rsa
      role: controller
    - ssh:
        address: <k0s-worker-1-ip>
        user: k0s
        port: 22
        keyPath: ~/.ssh/id_rsa
      role: worker
    - ssh:
        address: <k0s-worker-2-ip>
        user: k0s
        port: 22
        keyPath: ~/.ssh/id_rsa
      role: worker
    - ssh:
        address: <k0s-worker-3-ip>
        user: k0s
        port: 22
        keyPath: ~/.ssh/id_rsa
      role: worker
  k0s:
    version: v1.30.0+k0s.0
    config:
      spec:
        api:
          externalAddress: <k0s-loadbalancer-ip>
          sans:
            - <k0s-loadbalancer-ip>
4. Install and configure HAProxy on the k0s-loadbalancer VM
> ssh k0s@10.59.145.165
k0s@k0s-loadbalancer:~$ sudo -i
root@k0s-loadbalancer:~# apt update && apt install haproxy -y
root@k0s-loadbalancer:~# vi /etc/haproxy/haproxy.cfg
Add the following lines to the end of /etc/haproxy/haproxy.cfg. They expose the Kubernetes API (6443), the konnectivity agent tunnel (8132), and the controller join API (9443), each load-balanced across the three controllers, plus an HAProxy stats page on port 9999:
frontend kubeAPI
    bind *:6443
    mode tcp
    option tcplog
    default_backend kubeAPI_backend
frontend konnectivity
    bind *:8132
    mode tcp
    option tcplog
    default_backend konnectivity_backend
frontend controllerJoinAPI
    bind *:9443
    mode tcp
    option tcplog
    default_backend controllerJoinAPI_backend
backend kubeAPI_backend
    mode tcp
    option tcp-check
    balance roundrobin
    server k0s-controller1 <k0s-master-1-ip>:6443 check fall 3 rise 2
    server k0s-controller2 <k0s-master-2-ip>:6443 check fall 3 rise 2
    server k0s-controller3 <k0s-master-3-ip>:6443 check fall 3 rise 2
backend konnectivity_backend
    mode tcp
    option tcp-check
    balance roundrobin
    server k0s-controller1 <k0s-master-1-ip>:8132 check fall 3 rise 2
    server k0s-controller2 <k0s-master-2-ip>:8132 check fall 3 rise 2
    server k0s-controller3 <k0s-master-3-ip>:8132 check fall 3 rise 2
backend controllerJoinAPI_backend
    mode tcp
    option tcp-check
    balance roundrobin
    server k0s-controller1 <k0s-master-1-ip>:9443 check fall 3 rise 2
    server k0s-controller2 <k0s-master-2-ip>:9443 check fall 3 rise 2
    server k0s-controller3 <k0s-master-3-ip>:9443 check fall 3 rise 2
listen stats
    bind *:9999
    mode http
    stats enable
    stats uri /
Restart HAProxy
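Before restarting, it is worth validating the file with HAProxy's built-in syntax check:
root@k0s-loadbalancer:~# haproxy -c -f /etc/haproxy/haproxy.cfg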
root@k0s-loadbalancer:~# systemctl enable haproxy
root@k0s-loadbalancer:~# systemctl restart haproxy
root@k0s-loadbalancer:~# systemctl status haproxy.service
The stats page at http://<k0s-loadbalancer-ip>:9999/ should now be reachable; the controller backends will show as DOWN until k0s is installed in the next step.
5. Install k0s using k0sctl
SSH into each VM once so that its host key is accepted, since k0sctl connects over SSH and must do so without prompts:
> ssh k0s@<k0s-master-1-ip>
k0s@k0s-master-1:~$ exit
> ssh k0s@<k0s-master-2-ip>
k0s@k0s-master-2:~$ exit
> ssh k0s@<k0s-master-3-ip>
k0s@k0s-master-3:~$ exit
> ssh k0s@<k0s-worker-1-ip>
k0s@k0s-worker-1:~$ exit
> ssh k0s@<k0s-worker-2-ip>
k0s@k0s-worker-2:~$ exit
> ssh k0s@<k0s-worker-3-ip>
k0s@k0s-worker-3:~$ exit
> k0sctl apply -c k0sctl.yaml
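To double-check the control plane from inside a controller, k0s ships a status subcommand (run it on any master):
> ssh k0s@<k0s-master-1-ip>
k0s@k0s-master-1:~$ sudo k0s status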
> k0sctl kubeconfig > kubeconfig
> kubectl --kubeconfig kubeconfig get no -owide
> kubectl --kubeconfig kubeconfig top no
6. Install Basic Kubernetes Components
Install Nginx Ingress Controller
> export KUBECONFIG=./kubeconfig
> kubectl get nodes
# Install ingress nginx
> helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
> helm repo update
> helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --set controller.hostNetwork=true,controller.service.type="",controller.kind=DaemonSet --namespace ingress-nginx --version 4.10.1 --create-namespace --debug
> kubectl -n ingress-nginx get all
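Because the chart flags above run the controller as a DaemonSet with hostNetwork enabled, every worker node answers directly on port 80. A quick way to confirm this is to curl any worker IP; nginx responding with a 404 for an unknown host means the controller is up (the IP below is k0s-worker-1 from my run; yours will differ):
> curl -i http://10.59.145.225
# HTTP/1.1 404 Not Found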
> kubectl get IngressClass
Install Longhorn StorageClass
> export KUBECONFIG=./kubeconfig
# Install Longhorn sc
> helm repo add longhorn https://charts.longhorn.io
> helm repo update
> helm upgrade --install longhorn longhorn/longhorn --set persistence.defaultClassReplicaCount=1 --namespace longhorn-system --create-namespace --version 1.6.1 --debug
> kubectl -n longhorn-system get po
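Once the pods are running, you can confirm that Longhorn actually provisions volumes with a throwaway PVC (the name longhorn-smoke-test is arbitrary; with the default longhorn StorageClass the claim should reach Bound without a consuming pod):
> cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-smoke-test
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
EOF
> kubectl get pvc longhorn-smoke-test
> kubectl delete pvc longhorn-smoke-test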
> kubectl get sc
7. Install testing tools
minio-values.yaml
auth:
  rootUser: "admin"
  rootPassword: "password"
ingress:
  enabled: true
  ingressClassName: "nginx"
  hostname: minio.lakehouse.local
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 1024m
defaultBuckets: "lakehouse, airflow, risingwave, kafka"
persistence:
  size: 5Gi
Install Minio on Kubernetes
# Install Minio
> helm repo add bitnami https://charts.bitnami.com/bitnami
> helm repo update
> helm upgrade --install minio -n minio -f minio-values.yaml bitnami/minio --create-namespace --debug
> kubectl -n minio get all -owide
> kubectl -n minio get pvc
> kubectl -n minio get ing
Update local hosts file
> sudo vim /etc/hosts
# Add the following line to the end of /etc/hosts
# 10.59.145.225 minio.lakehouse.local
Access Minio at http://minio.lakehouse.local in your web browser with user admin and password password. The IP here is k0s-worker-1 from step 1; any worker IP works, since the ingress controller runs on every worker with hostNetwork.
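You can also verify the deployment headlessly via Minio's liveness endpoint, which requires no credentials:
> curl -i http://minio.lakehouse.local/minio/health/live
# HTTP/1.1 200 OK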
8. Destroy the cluster
> multipass list
> multipass delete --all --purge