Set Up a Multi-Master Kubernetes Cluster with Kubeadm

Requirements:

  • kubernetes version 1.15.11
  • haproxy
  • docker

Node list:

  • ha-balancer 10.10.10.100
  • ha-master1 10.10.10.10
  • ha-master2 10.10.10.11
  • ha-master3 10.10.10.12
  • ha-worker1 10.10.10.20
  • ha-worker2 10.10.10.21

On all nodes, execute this command:

sudo apt update; sudo apt autoremove -y

Install the docker package on ha-master1, ha-master2, ha-master3, ha-worker1, and ha-worker2:

sudo apt install -y docker.io=18.09.7-0ubuntu1~18.04.4
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d

Restart docker:

systemctl daemon-reload
systemctl restart docker
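Optionally, you can confirm that docker picked up the systemd cgroup driver before continuing (a quick sanity check, not part of the original steps):

sudo docker info | grep -i "cgroup driver"
# expected output: Cgroup Driver: systemd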

Install kubectl, kubelet, and kubeadm on all master and worker nodes (the workers also need kubeadm and kubelet for the join step later):

sudo apt install -y apt-transport-https; curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt update; sudo apt install -y kubelet=1.15.11-00 kubeadm=1.15.11-00 kubectl=1.15.11-00

Hold the kubelet, kubeadm, and kubectl packages and enable the kubelet service:

sudo apt-mark hold kubelet kubeadm kubectl
sudo systemctl enable kubelet
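A quick optional check that the packages are pinned and the expected versions landed on each node:

sudo apt-mark showhold
kubeadm version -o short
kubectl version --client --short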

Disable swap on the master and worker nodes:

sudo swapon -s
sudo swapoff -a
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo modprobe br_netfilter
sudo sysctl --system
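To confirm the module is loaded and the bridge sysctls took effect, an optional check:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables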

Install and configure haproxy on node ha-balancer

sudo apt update; sudo apt install haproxy -y
sudo vim /etc/haproxy/haproxy.cfg
...
frontend kubernetes
    bind 10.10.10.100:6443
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server ha-master1 10.10.10.10:6443 check fall 3 rise 2
    server ha-master2 10.10.10.11:6443 check fall 3 rise 2
    server ha-master3 10.10.10.12:6443 check fall 3 rise 2

frontend api_server_kubernetes
    bind 10.10.10.100:8080
    option tcplog
    mode tcp
    default_backend kube_api_server_kubernetes

backend kube_api_server_kubernetes
    mode tcp
    balance roundrobin
    option tcp-check
    server ha-master1 10.10.10.10:8080 check fall 3 rise 2
    server ha-master2 10.10.10.11:8080 check fall 3 rise 2
    server ha-master3 10.10.10.12:8080 check fall 3 rise 2
...

Verify the haproxy configuration:

haproxy -c -V -f /etc/haproxy/haproxy.cfg

Restart the haproxy service:

sudo systemctl restart haproxy
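Before moving on, you can optionally confirm from ha-balancer that haproxy is listening on both frontend ports:

sudo ss -tlnp | grep -E ':6443|:8080'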

Generate an SSH key on ha-master1, then copy it to the other master nodes:

sudo -i
ssh-keygen
cat /root/.ssh/id_rsa.pub

Copy the SSH key to the other masters:

ssh-copy-id -i /root/.ssh/id_rsa.pub root@ha-master2
ssh-copy-id -i /root/.ssh/id_rsa.pub root@ha-master3

test ssh to ha-master2 and ha-master3

ssh 10.10.10.11
ssh 10.10.10.12

Verify the connection to ha-balancer:

nc -v 10.10.10.100 6443
nc -v 10.10.10.100 8080

Initialize the cluster on ha-master1:

vi config.yaml
...
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.11
controlPlaneEndpoint: "10.10.10.100:6443"
...
kubeadm init --config=config.yaml --upload-certs

The output will look similar to the following:

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 10.10.10.100:6443 --token 71qkw2.ardnuukvwlvhugbt \
    --discovery-token-ca-cert-hash sha256:a8fad41061a6fb20207ebc3fabb5da65cf5dc397ef97c39ce6dc8f62863e5242 \
    --control-plane --certificate-key 6cd223990b20aefad2c394f3217ef9cc10c8625d33f3a8b91bf7da8cad5db74a

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.10.100:6443 --token 71qkw2.ardnuukvwlvhugbt \
    --discovery-token-ca-cert-hash sha256:a8fad41061a6fb20207ebc3fabb5da65cf5dc397ef97c39ce6dc8f62863e5242

Set up kubectl access on ha-master1:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Deploy the CNI network plugin; in this tutorial we will use Weave Net:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
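Optionally, wait for the Weave Net pods to come up before joining the remaining nodes. This assumes the daemonset pods carry the name=weave-net label, as in the standard Weave manifest:

kubectl get pods -n kube-system -l name=weave-net -o wide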

On ha-master2 and ha-master3, join the cluster as additional control-plane nodes:

kubeadm join 10.10.10.100:6443 --token 71qkw2.ardnuukvwlvhugbt \
    --discovery-token-ca-cert-hash sha256:a8fad41061a6fb20207ebc3fabb5da65cf5dc397ef97c39ce6dc8f62863e5242 \
    --control-plane --certificate-key 6cd223990b20aefad2c394f3217ef9cc10c8625d33f3a8b91bf7da8cad5db74a

On ha-worker1 and ha-worker2, join the cluster as worker nodes:

kubeadm join 10.10.10.100:6443 --token 71qkw2.ardnuukvwlvhugbt \
    --discovery-token-ca-cert-hash sha256:a8fad41061a6fb20207ebc3fabb5da65cf5dc397ef97c39ce6dc8f62863e5242
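Note that the bootstrap token is only valid for 24 hours and the uploaded certificates for two hours. If you add nodes later, you can regenerate both on any existing master; a short sketch:

# print a fresh worker join command (includes a new token)
kubeadm token create --print-join-command
# re-upload the control-plane certificates and print a new certificate key
sudo kubeadm init phase upload-certs --upload-certs

For an extra control-plane node, append --control-plane --certificate-key <new key> to the printed join command.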

Verify that the nodes are ready:

kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
ha-master1   Ready    master   5d3h   v1.15.11
ha-master2   Ready    master   5d3h   v1.15.11
ha-master3   Ready    master   5d3h   v1.15.11
ha-worker1   Ready    <none>   5d3h   v1.15.11
ha-worker2   Ready    <none>   5d3h   v1.15.11
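The ROLES column shows <none> for the workers, which is purely cosmetic. If you prefer to see a role there, one optional way is to label the nodes yourself:

kubectl label node ha-worker1 node-role.kubernetes.io/worker=worker
kubectl label node ha-worker2 node-role.kubernetes.io/worker=worker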

Verify that all pods are running:

kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-ffhz9             1/1     Running   0          5d3h
coredns-66bff467f8-w2lcw             1/1     Running   0          5d3h
etcd-ha-master1                      1/1     Running   0          5d3h
etcd-ha-master2                      1/1     Running   0          5d3h
etcd-ha-master3                      1/1     Running   0          5d3h
kube-apiserver-ha-master1            1/1     Running   0          5d3h
kube-apiserver-ha-master2            1/1     Running   0          5d3h
kube-apiserver-ha-master3            1/1     Running   0          5d3h
kube-controller-manager-ha-master1   1/1     Running   1          5d3h
kube-controller-manager-ha-master2   1/1     Running   0          5d3h
kube-controller-manager-ha-master3   1/1     Running   1          5d3h
kube-proxy-245hd                     1/1     Running   0          5d3h
kube-proxy-4ckq2                     1/1     Running   0          5d3h
kube-proxy-m62hj                     1/1     Running   0          5d3h
kube-proxy-rpl5t                     1/1     Running   0          5d3h
kube-scheduler-ha-master1            1/1     Running   2          5d3h
kube-scheduler-ha-master2            1/1     Running   0          5d3h
kube-scheduler-ha-master3            1/1     Running   0          5d3h
weave-net-4lkbs                      2/2     Running   2          5d3h
weave-net-526gt                      2/2     Running   2          5d3h
weave-net-bxvkk                      2/2     Running   0          5d3h
weave-net-ts2m2                      2/2     Running   0          5d3h
weave-net-bgsw4                      2/2     Running   0          5d3h
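Since the point of the extra masters is surviving the loss of one, a simple (if blunt) failover check is to reboot a master and confirm the API still answers through the haproxy VIP from another node. A rough sketch of the idea:

# on ha-master1
sudo reboot
# meanwhile, from ha-master2 (its admin.conf already points at the 10.10.10.100:6443 VIP)
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes

The command should keep working while ha-master1 is down; after a short while ha-master1 shows NotReady, then returns to Ready once it is back up.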

check cluster info

# kubectl cluster-info
Kubernetes master is running at https://10.10.10.100:6443
KubeDNS is running at https://10.10.10.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
root@ha-master1:~# kubectl get endpoints kube-scheduler -n kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"ha-master2_327fadbe-ec94-43bb-8076-9a1aafa57cd3","leaseDurationSeconds":15,"acquireTime":"2020-10-04T06:25:34Z","renewTime":"2020-10-04T11:06:30Z","leaderTransitions":2}'
  creationTimestamp: "2020-09-29T07:05:42Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:control-plane.alpha.kubernetes.io/leader: {}
    manager: kube-scheduler
    operation: Update
    time: "2020-10-04T11:06:30Z"
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "1380211"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: 5c15a8ad-a5fa-4d17-8416-0c97aff0cfe9
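The control-plane.alpha.kubernetes.io/leader annotation above shows which master currently holds the scheduler leader lock (ha-master2 here), which confirms leader election is working across the masters. To pull out just that annotation, one option is a jsonpath query with the dots in the key escaped:

kubectl get endpoints kube-scheduler -n kube-system \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'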

Create an example pod:

root@ha-master1:~# kubectl run nginx --image=nginx
pod/nginx created
root@ha-master1:~# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          18s
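To check pod networking and kube-proxy end to end, you could also expose the pod and hit it from outside the cluster; a rough sketch, where the NodePort is whatever port Kubernetes assigns:

kubectl expose pod nginx --port=80 --type=NodePort
kubectl get svc nginx    # note the assigned NodePort in the 80:3xxxx/TCP column
curl http://10.10.10.20:<assigned-nodeport>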

Thanks.
