Install a Kubernetes (k8s) v1.22 Highly Available Cluster on CentOS 7
1. Preparation
1.1 KVMs
8 KVMs plus one virtual IP:

Hostname | vCPUs | Memory | Swap | IP | OS | sudoer
---|---|---|---|---|---|---
master01.k8s | 2 | 2G | 0 | 192.168.10.191 | CentOS 7 | echo
master02.k8s | 2 | 2G | 0 | 192.168.10.192 | CentOS 7 | echo
master03.k8s | 2 | 2G | 0 | 192.168.10.193 | CentOS 7 | echo
minion01.k8s | 2 | 8G | 0 | 192.168.10.101 | CentOS 7 | echo
minion02.k8s | 2 | 8G | 0 | 192.168.10.102 | CentOS 7 | echo
minion03.k8s | 2 | 8G | 0 | 192.168.10.103 | CentOS 7 | echo
slb01.k8s | 2 | 2G | 0 | 192.168.10.201 | CentOS 7 | echo
slb02.k8s | 2 | 2G | 0 | 192.168.10.202 | CentOS 7 | echo
cluster.k8s | - | - | - | 192.168.10.200 | - | -
NOTE: 192.168.10.200 is a virtual (HA) IP which points to slb01 or slb02.
1.2 Install Ansible on the host machine
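Ansible only needs to be installed on the host (control) machine, not on the KVMs. A minimal sketch for a CentOS 7 host, assuming the ansible package from the EPEL repository:
sudo yum install -y epel-release
sudo yum install -y ansible
ansible --version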
1.3 Password-free configuration
cat <<EOF | sudo tee -a /etc/ansible/hosts
[k8s]
master01.k8s
master02.k8s
master03.k8s
minion01.k8s
minion02.k8s
minion03.k8s

[k8s-master]
master01.k8s
master02.k8s
master03.k8s

[k8s-minion]
minion01.k8s
minion02.k8s
minion03.k8s

[slb]
slb01.k8s
slb02.k8s
EOF
# Test connectivity (prompts for the SSH password)
ansible k8s --ask-pass -u echo -m shell -a 'echo ok'
# Password-free SSH login (run ssh-keygen first if ~/.ssh/id_rsa.pub does not exist)
ansible k8s --ask-pass -u echo -m file -a "path=.ssh state=directory mode=0700"
ansible k8s --ask-pass -u echo -m copy -a "src=~/.ssh/id_rsa.pub dest=.ssh/authorized_keys mode=0600"
# Password-free sudo for user 'echo' (-K prompts for the sudo password this one last time)
ansible k8s --ask-pass -K -u echo -b -m shell -a "echo 'echo ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers"
# Now we can test without any password prompt
ansible k8s -u echo -m shell -a 'echo ok'
1.4 DNS via hosts file
cat <<EOF > /tmp/add-hosts.sh
cat <<EEE >> /etc/hosts
192.168.10.191 master01.k8s
192.168.10.192 master02.k8s
192.168.10.193 master03.k8s
192.168.10.101 minion01.k8s
192.168.10.102 minion02.k8s
192.168.10.103 minion03.k8s
192.168.10.201 slb01.k8s
192.168.10.202 slb02.k8s
# A virtual IP which points to slb01 or slb02
192.168.10.200 cluster.k8s
EEE
EOF
ansible k8s -u echo -b -m script -a "/tmp/add-hosts.sh"
2. Basic server configuration
2.1 Firewall configuration (required ports)
NOTE: eth0 and 192.168.10.0/24 may need to be changed to match your own environment.
cat <<EOF > /tmp/required-ports.sh
# Your primary interface may be something other than eth0
firewall-cmd --permanent --zone=trusted --add-interface=eth0
# Your internal network
firewall-cmd --permanent --zone=trusted --add-source=192.168.10.0/24
# Flannel VXLAN interface
firewall-cmd --permanent --zone=trusted --add-interface=flannel.1
# Pod CIDR
firewall-cmd --permanent --zone=trusted --add-source=10.244.0.0/16
# Service CIDR (kubeadm's default is 10.96.0.0/12)
firewall-cmd --permanent --zone=trusted --add-source=10.96.0.0/12
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250-10252/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --reload
EOF
ansible k8s -u echo -b -m script -a '/tmp/required-ports.sh'
2.2 Set SELinux in permissive mode (effectively disabling it)
cat <<EOF > /tmp/set-selinux.sh
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
EOF
ansible k8s -u echo -b -m script -a '/tmp/set-selinux.sh'
2.3 Disable swap
You MUST disable swap in order for the kubelet to work properly.
ansible k8s -u echo -b -m shell -a 'swapoff -a'
ansible k8s -u echo -b -m shell -a 'sed -i -e "s:/dev/mapper/centos-swap:#/dev/mapper/centos-swap:" /etc/fstab'
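A quick check that swap is now off everywhere (the Swap line should report 0B on every node):
ansible k8s -u echo -m shell -a 'free -h | grep -i swap'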
2.4 Verify the MAC address and product_uuid are unique for every node
ansible k8s -u echo -m shell -a '/usr/sbin/ip link | grep link/ether'
ansible k8s -u echo -b -m shell -a 'cat /sys/class/dmi/id/product_uuid'
2.5 Letting iptables see bridged traffic
# Make sure that the br_netfilter module is loaded
ansible k8s -u echo -m shell -a '/usr/sbin/lsmod | grep br_netfilter'
# To load it explicitly
ansible k8s -u echo -b -m shell -a 'modprobe br_netfilter'
# Ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config
cat <<EEE > /tmp/ensure-bridge-nf.sh
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
EEE
ansible k8s -u echo -b -m script -a '/tmp/ensure-bridge-nf.sh'
2.6 Sysctl for KVM
cat <<EEE > /tmp/set-sysctl.sh
cat <<EOF | sudo tee /etc/sysctl.d/kvm.conf
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.all.proxy_arp = 1
EOF
sudo sysctl --system
EEE
ansible k8s -u echo -b -m script -a '/tmp/set-sysctl.sh'
3. Install docker-ce as the container runtime
3.1 Install container-selinux
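container-selinux is a dependency of docker-ce. On CentOS 7 it ships in the extras repository (enabled by default); a minimal sketch:
ansible k8s -u echo -b -m shell -a 'yum -y install container-selinux'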
3.2 Check docker-ce version
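Once the Docker CE repository (added in 3.3 below) is present, the available versions can be listed, e.g.:
ansible k8s -u echo -b -m shell -a 'yum list docker-ce --showduplicates | sort -r'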
3.3 Install docker-ce
NOTE: Use your own Docker Hub mirror.
cat <<EEE > /tmp/install-docker.sh
# Install Docker CE 18.06 from Docker's CentOS repositories:
## Install prerequisites.
yum -y install yum-utils device-mapper-persistent-data lvm2
## Add docker repository.
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
## Install docker.
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker
# Set up the daemon.
mkdir -p /etc/docker/
# NOTE: Use your own Docker Hub mirror in registry-mirrors.
# (Comments are not allowed inside daemon.json itself, so this note stays out here.)
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://z7sy3gme.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Restart docker.
systemctl daemon-reload
systemctl restart docker
EEE
ansible k8s -u echo -b -m script -a '/tmp/install-docker.sh'
4. Install HAProxy + Keepalived as the Load Balancer
See Install Haproxy v1.8 and Keepalived on CentOS 7 for the installation.
Configure HAProxy: add a listen section (k8sapi) for the Kubernetes API server
cat << EEE > /tmp/k8sapi-proxy.sh
cat << EOF | sudo tee -a /etc/opt/rh/rh-haproxy18/haproxy/haproxy.cfg
listen k8sapi
    bind *:6443
    mode tcp
    balance roundrobin
    option tcp-check
    server master01 master01.k8s:6443 check
    server master02 master02.k8s:6443 check
    server master03 master03.k8s:6443 check
EOF
EEE
ansible slb -u echo -b -m script -a '/tmp/k8sapi-proxy.sh'
Restart HAProxy
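Assuming HAProxy was installed from the rh-haproxy18 software collection (matching the config path above), the service name would be rh-haproxy18-haproxy:
ansible slb -u echo -b -m shell -a 'systemctl restart rh-haproxy18-haproxy'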
5. Install kubeadm, kubelet and kubectl
NOTE: Use your own baseurl and gpgkey.
cat <<EEE > /tmp/install-kubeadm.sh
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
EEE
ansible k8s -u echo -b -m script -a '/tmp/install-kubeadm.sh'
Check the kubeadm version
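For example, with -o short to print just the version string:
ansible k8s -u echo -m shell -a 'kubeadm version -o short'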
6. Creating HA clusters with kubeadm
Ping cluster.k8s
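# e.g. verify the virtual IP resolves and answers
ping -c 3 cluster.k8s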
6.1 Init cluster on master01.k8s
For the flannel pod network (see 6.3), the argument --pod-network-cidr=10.244.0.0/16 must be provided.
sudo kubeadm init --control-plane-endpoint "cluster.k8s:6443" --upload-certs --pod-network-cidr=10.244.0.0/16
Output:
[init] Using Kubernetes version: v1.22.1
......
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join cluster.k8s:6443 --token aswx9l.y6nmyeh3u3urvmgt \
--discovery-token-ca-cert-hash sha256:246676b26c3215a836ea4b55d61437edd62cc7a3a6fceca8c4cb40070fad9d88 \
--control-plane --certificate-key 5f6b55ca0aef4b76322dec1076313e0cb0ba19e1c3f1254cf68af3d3c8fabad6
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join cluster.k8s:6443 --token aswx9l.y6nmyeh3u3urvmgt \
--discovery-token-ca-cert-hash sha256:246676b26c3215a836ea4b55d61437edd62cc7a3a6fceca8c4cb40070fad9d88
6.2 Create $HOME/.kube/config on master01.k8s
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
6.3 Deploy a pod network (flannel) to the cluster
On master01.k8s
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
6.4 Join the control-plane nodes
On master02.k8s and master03.k8s
# e.g.
sudo kubeadm join cluster.k8s:6443 --token aswx9l.y6nmyeh3u3urvmgt \
--discovery-token-ca-cert-hash sha256:246676b26c3215a836ea4b55d61437edd62cc7a3a6fceca8c4cb40070fad9d88 \
--control-plane --certificate-key 5f6b55ca0aef4b76322dec1076313e0cb0ba19e1c3f1254cf68af3d3c8fabad6
Output:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
......
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
6.5 Join worker nodes
On minion01.k8s, minion02.k8s, minion03.k8s
# e.g.
sudo kubeadm join cluster.k8s:6443 --token aswx9l.y6nmyeh3u3urvmgt \
--discovery-token-ca-cert-hash sha256:246676b26c3215a836ea4b55d61437edd62cc7a3a6fceca8c4cb40070fad9d88
Output:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
6.6 Check the cluster
On master01.k8s, check the nodes:
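kubectl get nodes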
Output:
NAME STATUS ROLES AGE VERSION
master01.k8s Ready control-plane,master 3h52m v1.22.1
master02.k8s Ready control-plane,master 28m v1.22.1
master03.k8s Ready control-plane,master 4m35s v1.22.1
minion01.k8s Ready <none> 39s v1.22.1
minion02.k8s Ready <none> 75s v1.22.1
minion03.k8s Ready <none> 2m51s v1.22.1
Check the pods
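kubectl get pods --all-namespaces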
Output:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcd69978-79fxq 1/1 Running 0 33m
kube-system coredns-78fcd69978-x65cm 1/1 Running 0 33m
kube-system etcd-master01.k8s 1/1 Running 2 33m
kube-system etcd-master02.k8s 1/1 Running 7 30m
kube-system etcd-master03.k8s 1/1 Running 0 6m24s
kube-system kube-apiserver-master01.k8s 1/1 Running 3 33m
kube-system kube-apiserver-master02.k8s 1/1 Running 7 30m
kube-system kube-apiserver-master03.k8s 1/1 Running 3 30m
kube-system kube-controller-manager-master01.k8s 1/1 Running 1 (7m19s ago) 33m
kube-system kube-controller-manager-master02.k8s 1/1 Running 1 30m
kube-system kube-controller-manager-master03.k8s 1/1 Running 1 30m
kube-system kube-flannel-ds-88wbk 1/1 Running 0 4m19s
kube-system kube-flannel-ds-fznsw 1/1 Running 0 4m56s
kube-system kube-flannel-ds-kln22 1/1 Running 0 3m39s
kube-system kube-flannel-ds-n2hfp 1/1 Running 0 18m
kube-system kube-flannel-ds-n2qc4 1/1 Running 0 18m
kube-system kube-flannel-ds-qhh5q 1/1 Running 0 18m
kube-system kube-proxy-9hdxz 1/1 Running 1 30m
kube-system kube-proxy-cptvt 1/1 Running 0 3m39s
kube-system kube-proxy-hw4wm 1/1 Running 0 4m18s
kube-system kube-proxy-rcj7c 1/1 Running 0 4m56s
kube-system kube-proxy-tbq7p 1/1 Running 1 32m
kube-system kube-proxy-vmrqc 1/1 Running 0 33m
kube-system kube-scheduler-master01.k8s 1/1 Running 5 (7m19s ago) 33m
kube-system kube-scheduler-master02.k8s 1/1 Running 3 30m
kube-system kube-scheduler-master03.k8s 1/1 Running 2 30m
Cluster Info
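kubectl cluster-info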
Output:
Kubernetes control plane is running at https://cluster.k8s:6443
CoreDNS is running at https://cluster.k8s:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Reference
- https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
- https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
- https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md
- https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
- https://kubernetes.io/docs/concepts/cluster-administration/networking/
- https://github.com/flannel-io/flannel
- Pods cannot resolve DNS