[Kubernetes Data Platform][Part 2.4]: Creating highly available Kubernetes cluster with k3s

Viet_1846

Aug 3, 2024

This guide walks through deploying a highly available (HA) Kubernetes cluster using k3s on virtual machine-like containers. To achieve high availability, we leverage kube-vip, which provides both the cluster’s virtual IP address and load balancing for the control plane. This eliminates the need for additional external hardware or software, streamlining deployment and management.

k3s is a lightweight, production-grade Kubernetes distribution designed for resource-constrained environments like remote locations, IoT appliances, or unattended deployments. Packaged as a single binary under 70MB, k3s simplifies installation, operation, and automatic updates for Kubernetes clusters. It supports both ARM64 and ARMv7 architectures.

The architecture is as follows:

(Figure: K3s HA cluster architecture)

INSTALLATION STEPS

1. Creating Virtual Machine-like Containers:

  • We’ll use docker-compose and sysbox to create seven VM-like containers: a loadbalancer container that holds the virtual IP address (172.25.1.2), Master Nodes (IPs: 172.25.1.3, 172.25.1.4, 172.25.1.5), and Worker Nodes (IPs: 172.25.1.6, 172.25.1.7, 172.25.1.8).
  • The full container definitions are provided in the docker-compose.yaml file below.

2. Preparing the First Master Node (Master-1):

  • This step involves installing the first k3s server on the Master-1 container.
  • Configure kube-vip (VIP: 172.25.1.2).

3. Configuring Remaining Master Nodes (Master-2 & Master-3):

  • Install the k3s server on both Master-2 and Master-3 containers.

4. Preparing Worker Nodes (Worker-1, Worker-2, Worker-3):

  • Install the k3s agent on each Worker container.

5. Installing Basic Kubernetes Components:

  • Nginx Ingress Controller
  • Storage Provisioner
  • Metrics Server

6. Installing Testing Tools:

  • Tools to verify cluster functionality will be installed.

7. Destroying the Cluster

  • Instructions on how to destroy the deployed cluster will be provided.

HANDS-ON STEP

Reference Repository: https://github.com/viethqb/data-platform-notes/tree/main/kubernetes/k3s

IMPORTANT: Convention for Bash Scripts

Throughout this series, the following convention applies to Bash scripts:

Commands in the form “> command” should be executed on the local laptop.

Commands in the form “root@<hostname> command” should be executed on that VM/container as the root user.
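For example (hypothetical commands, purely to illustrate the two prefixes):

> docker ps                          # run on the local laptop
root@master-1:~# kubectl get nodes   # run inside the master-1 container as root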

1. Creating Virtual Machine-like Containers

docker-compose.yaml

services:
  loadbalancer:
    hostname: loadbalancer
    container_name: loadbalancer
    build:
      context: .
      dockerfile: Dockerfile
    runtime: sysbox-runc
    networks:
      k3s_network:
        ipv4_address: 172.25.1.2
  master-1:
    hostname: master-1
    container_name: master-1
    build:
      context: .
      dockerfile: Dockerfile
    runtime: sysbox-runc
    networks:
      k3s_network:
        ipv4_address: 172.25.1.3
  master-2:
    hostname: master-2
    container_name: master-2
    build:
      context: .
      dockerfile: Dockerfile
    runtime: sysbox-runc
    networks:
      k3s_network:
        ipv4_address: 172.25.1.4
  master-3:
    hostname: master-3
    container_name: master-3
    build:
      context: .
      dockerfile: Dockerfile
    runtime: sysbox-runc
    networks:
      k3s_network:
        ipv4_address: 172.25.1.5
  worker-1:
    hostname: worker-1
    container_name: worker-1
    build:
      context: .
      dockerfile: Dockerfile
    runtime: sysbox-runc
    networks:
      k3s_network:
        ipv4_address: 172.25.1.6
  worker-2:
    hostname: worker-2
    container_name: worker-2
    build:
      context: .
      dockerfile: Dockerfile
    runtime: sysbox-runc
    networks:
      k3s_network:
        ipv4_address: 172.25.1.7
  worker-3:
    hostname: worker-3
    container_name: worker-3
    build:
      context: .
      dockerfile: Dockerfile
    runtime: sysbox-runc
    networks:
      k3s_network:
        ipv4_address: 172.25.1.8
networks:
  k3s_network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.25.1.0/24
          gateway: 172.25.1.1

Dockerfile

FROM ghcr.io/nestybox/ubuntu-jammy-systemd:latest

# Install Sshd
RUN apt-get update \
&& apt-get install --no-install-recommends -y openssh-server uuid-runtime \
apt-transport-https \
bash-completion vim less man jq bc \
lsof tree psmisc htop lshw sysstat dstat \
iproute2 iputils-ping iptables dnsutils traceroute \
netcat curl wget nmap socat netcat-openbsd rsync \
p7zip-full \
git tig \
binutils acl pv \
strace tcpdump \
open-iscsi nfs-common \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir /home/admin/.ssh \
&& chown admin:admin /home/admin/.ssh

# Enable sudo without password
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers

# Enable ssh password authentication
RUN sed -i 's/^PasswordAuthentication .*/PasswordAuthentication yes/' /etc/ssh/sshd_config
RUN echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config

# Set the root password to "admin"
RUN echo "admin\nadmin" | passwd root

EXPOSE 22

# Set systemd as entrypoint.
ENTRYPOINT ["/sbin/init", "--log-level=err"]

Creating Virtual Machine-like Containers

> cd ~/Documents
> git clone https://github.com/viethqb/data-platform-notes.git
> cd data-platform-notes/kubernetes/k3s

> docker-compose up -d
> docker ps -q | xargs -n 1 docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} {{ .Name }}' | sed 's/ \// /'
# SSH login: user admin (or root), password admin

> ssh-copy-id root@172.25.1.3
> ssh-copy-id root@172.25.1.4
> ssh-copy-id root@172.25.1.5
> ssh-copy-id root@172.25.1.6
> ssh-copy-id root@172.25.1.7
> ssh-copy-id root@172.25.1.8

> ssh root@172.25.1.3 'echo $(uuidgen) > /etc/machine-id'
> ssh root@172.25.1.4 'echo $(uuidgen) > /etc/machine-id'
> ssh root@172.25.1.5 'echo $(uuidgen) > /etc/machine-id'
> ssh root@172.25.1.6 'echo $(uuidgen) > /etc/machine-id'
> ssh root@172.25.1.7 'echo $(uuidgen) > /etc/machine-id'
> ssh root@172.25.1.8 'echo $(uuidgen) > /etc/machine-id'
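k3s derives each node’s identity from /etc/machine-id, and containers cloned from the same image would otherwise share one; the commands above regenerate it per node. A quick uniqueness check from the laptop (a one-liner sketch, assuming your shell expands the brace range):

> for ip in 172.25.1.{3..8}; do ssh root@$ip 'echo "$(hostname): $(cat /etc/machine-id)"'; done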

2. Preparing the First Master Node (Master-1)

Install the first k3s server with embedded etcd. --cluster-init bootstraps the etcd cluster, --disable traefik --disable servicelb drops the bundled ingress controller and service load balancer (we install ingress-nginx and kube-vip instead), and --tls-san=172.25.1.2 adds the virtual IP to the API server certificate so clients can connect through the VIP.

root@master-1:~# curl -sfL https://get.k3s.io | K3S_TOKEN=DC87A250BCBA499994CF808CEADD1BCC sh -s - server \
--cluster-init \
--disable traefik --disable servicelb \
--tls-san=172.25.1.2

# Install kube-vip
root@master-1:~# kubectl apply -f https://kube-vip.io/manifests/rbac.yaml
root@master-1:~# export VIP=172.25.1.2
root@master-1:~# ip a
# Find the network interface corresponding to IP 172.25.1.3
root@master-1:~# export INTERFACE=eth0
root@master-1:~# crictl -r "unix:///run/k3s/containerd/containerd.sock" pull ghcr.io/kube-vip/kube-vip:latest
root@master-1:~# CONTAINERD_ADDRESS=/run/k3s/containerd/containerd.sock ctr -n k8s.io run \
--rm \
--net-host \
ghcr.io/kube-vip/kube-vip:latest vip /kube-vip manifest daemonset --arp --interface $INTERFACE --address $VIP --controlplane --leaderElection --taint --services --inCluster | tee /var/lib/rancher/k3s/server/manifests/kube-vip.yaml

root@master-1:~# kubectl get ds -n kube-system kube-vip-ds
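Before joining more servers, it is worth confirming that kube-vip has actually attached the virtual IP. On the current leader (master-1 at this point) the VIP should appear on the interface, and the API server should answer through it; /healthz is served to unauthenticated clients, so an "ok" response proves the path works (a sketch, assuming eth0 is your interface):

root@master-1:~# ip addr show eth0 | grep 172.25.1.2
root@master-1:~# curl -sk https://172.25.1.2:6443/healthz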

3. Configuring Remaining Master Nodes (Master-2 & Master-3)

Each additional server joins the existing cluster through Master-1’s API endpoint, reusing the same token and the same --tls-san:

root@master-2:~# curl -sfL https://get.k3s.io | K3S_TOKEN=DC87A250BCBA499994CF808CEADD1BCC sh -s - server \
--server https://172.25.1.3:6443 \
--disable traefik --disable servicelb \
--tls-san=172.25.1.2


root@master-3:~# curl -sfL https://get.k3s.io | K3S_TOKEN=DC87A250BCBA499994CF808CEADD1BCC sh -s - server \
--server https://172.25.1.3:6443 \
--disable traefik --disable servicelb \
--tls-san=172.25.1.2
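Once both servers have joined, all three masters should report Ready with control-plane and etcd roles (the exact role labels can vary slightly between k3s versions):

root@master-1:~# kubectl get nodes -o wide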

4. Adding Worker Nodes (Worker-1, Worker-2, Worker-3)

root@worker-1:~# curl -sfL https://get.k3s.io | K3S_TOKEN=DC87A250BCBA499994CF808CEADD1BCC sh -s - agent --server https://172.25.1.2:6443
root@worker-2:~# curl -sfL https://get.k3s.io | K3S_TOKEN=DC87A250BCBA499994CF808CEADD1BCC sh -s - agent --server https://172.25.1.2:6443
root@worker-3:~# curl -sfL https://get.k3s.io | K3S_TOKEN=DC87A250BCBA499994CF808CEADD1BCC sh -s - agent --server https://172.25.1.2:6443
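All six nodes should now be registered; agents show no role by default. A quick check from the first master:

root@master-1:~# kubectl get nodes
# Expect master-1..3 with control-plane,etcd,master roles and worker-1..3 with <none>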

5. Installing Basic Kubernetes Components

Since k3s ships with a default storage class (local-path-provisioner) and the metrics server pre-installed, we only need to install the Nginx Ingress Controller.

Install the Nginx Ingress Controller

#Install Helm cli
root@master-1:~# curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
root@master-1:~# bash ./get_helm.sh

# Install ingress nginx
root@master-1:~# helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
root@master-1:~# helm repo update
root@master-1:~# export KUBECONFIG=/var/lib/rancher/k3s/server/cred/admin.kubeconfig
root@master-1:~# helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --set controller.hostNetwork=true,controller.service.type="",controller.kind=DaemonSet --namespace ingress-nginx --version 4.10.1 --create-namespace --debug
root@master-1:~# kubectl -n ingress-nginx get all
root@master-1:~# kubectl get IngressClass
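Because the controller runs as a hostNetwork DaemonSet, every node should now answer on ports 80/443. As a hedged smoke test from the laptop (any node IP works; 172.25.1.6 is Worker-1), a 404 from the nginx default backend is the expected success signal, since no Ingress resources exist yet:

> curl -s -o /dev/null -w "%{http_code}\n" http://172.25.1.6/
# 404 here means the ingress controller is up and serving its default backend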

6. Installing Testing Tools

minio-values.yaml


auth:
  rootUser: "admin"
  rootPassword: "password"
ingress:
  enabled: true
  ingressClassName: "nginx"
  hostname: minio.lakehouse.local
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 1024m
defaultBuckets: "lakehouse, airflow, risingwave, kafka"
persistence:
  size: 50Gi

Install Minio on Kubernetes


# Install Minio
> helm repo add bitnami https://charts.bitnami.com/bitnami
> helm repo update
> helm upgrade --install minio -n minio -f minio-values.yaml bitnami/minio --create-namespace --debug

> kubectl -n minio get all -owide
> kubectl -n minio get pvc
> kubectl -n minio get ing

# Update the local hosts file: point the Ingress hostname at any node IP
> sudo vim /etc/hosts
# add: 172.25.1.6 minio.lakehouse.local


# Access Minio at http://minio.lakehouse.local in your web browser.
# user/pass: admin/password
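Beyond the browser, the default buckets can be verified from the laptop with the MinIO client. This assumes mc is installed locally and reuses the credentials and hostname from minio-values.yaml:

> mc alias set lakehouse http://minio.lakehouse.local admin password
> mc ls lakehouse
# Expect the defaultBuckets: lakehouse, airflow, risingwave, kafka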

Conclusion

We’ve concluded our exploration of deploying Kubernetes in production environments. Throughout this series, we’ve delved into various aspects including:

  • Deployment methods: kubeadm, k3s, k0s, and rke2.
  • Load balancing: HAProxy and kube-vip.
  • Virtualization: Vagrant + VirtualBox, Multipass, and Docker + sysbox runtime.

It’s worth noting that in real-world settings, you’ll often receive pre-configured virtual machines from system administrators or DevOps teams.

We’ve also covered essential Kubernetes components such as:

  • Nginx Ingress Controller: For exposing HTTP services.
  • Storage classes: Longhorn and local-path-provisioner.
  • Metric Server: For monitoring cluster metrics.
  • Minio: As a practical demonstration of Kubernetes functionality.

As we’ve established, the primary goal of this series is to build a robust Data Platform. To align with that objective, the next installment shifts focus to deploying data services. Stay tuned for the upcoming chapter!