
Install Kubernetes (k8s) v1.23 Cluster on Ubuntu 18.04 LTS

1. Preparation

1.1 VMs (Vagrant)

4 VMs

Hostname  vCPUs  Memory  Swap  IP              OS            sudoer
master01  2      8G      0     192.168.56.200  Ubuntu 18.04  vagrant
minion01  2      8G      0     192.168.56.101  Ubuntu 18.04  vagrant
minion02  2      8G      0     192.168.56.102  Ubuntu 18.04  vagrant
minion03  2      8G      0     192.168.56.103  Ubuntu 18.04  vagrant
/etc/hosts
192.168.56.200 master01

192.168.56.101 minion01
192.168.56.102 minion02
192.168.56.103 minion03

192.168.56.200 k8s-cluster
$ vagrant status
Current machine states:

master01                  running (virtualbox)
minion01                  running (virtualbox)
minion02                  running (virtualbox)
minion03                  running (virtualbox)
Vagrantfile
Vagrant.configure("2") do |config|

  config.vm.provider :libvirt do |libvirt|
    libvirt.cpu_mode = "host-passthrough"
    libvirt.cpus = 2
    libvirt.disk_bus = "virtio"
    libvirt.disk_driver :cache => "writeback"
    libvirt.driver = "kvm"
    libvirt.memory = 8192
    libvirt.memorybacking :access, :mode => "shared"
    libvirt.nested = true
    libvirt.nic_model_type = "virtio"
    libvirt.storage :file, bus: "virtio", cache: "writeback"
    libvirt.video_type = "virtio"
  end

  config.vm.provider :virtualbox do |virtualbox|
    virtualbox.cpus = 2
    virtualbox.memory = 8192
    virtualbox.customize ["modifyvm", :id, "--cpu-profile", "host"]
    virtualbox.customize ["modifyvm", :id, "--nested-hw-virt", "on"]
  end


  config.vm.define "master01" do |master01|
    master01.vm.hostname = "master01"
    master01.vm.box = "hashicorp/bionic64"
    master01.vm.network "private_network", ip: "192.168.56.200"
    master01.vm.provider "virtualbox" do |vb|
      vb.customize ['modifyvm', :id, '--natnet1', '20.0.51.0/24']
    end
  end

  config.vm.define "minion01" do |minion01|
    minion01.vm.hostname = "minion01"
    minion01.vm.box = "hashicorp/bionic64"
    minion01.vm.network "private_network", ip: "192.168.56.101"
    minion01.vm.provider "virtualbox" do |vb|
      vb.customize ['modifyvm', :id, '--natnet1', '20.0.52.0/24']
    end
  end

  config.vm.define "minion02" do |minion02|
    minion02.vm.hostname = "minion02"
    minion02.vm.box = "hashicorp/bionic64"
    minion02.vm.network "private_network", ip: "192.168.56.102"
    minion02.vm.provider "virtualbox" do |vb|
      vb.customize ['modifyvm', :id, '--natnet1', '20.0.53.0/24']
    end
  end

  config.vm.define "minion03" do |minion03|
    minion03.vm.hostname = "minion03"
    minion03.vm.box = "hashicorp/bionic64"
    minion03.vm.network "private_network", ip: "192.168.56.103"
    minion03.vm.provider "virtualbox" do |vb|
      vb.customize ['modifyvm', :id, '--natnet1', '20.0.54.0/24']
    end
  end

end

Please refer to Introduction to Vagrant for more information about Vagrant.
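If the VMs are not up yet, create and boot them first (a minimal sketch; pick the provider that matches your setup):

# create and boot all four VMs with the VirtualBox provider
vagrant up --provider=virtualbox

# confirm they are running
vagrant status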

1.2 SSH config

Append the Vagrant SSH configuration to your ~/.ssh/config so that every node can be reached by its hostname:

vagrant ssh-config | tee -a ~/.ssh/config
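A quick connectivity check, assuming the host aliases written by vagrant ssh-config:

# each node should print its own hostname
for node in master01 minion01 minion02 minion03; do ssh "$node" hostname; done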

Create a script foreach-node.sh.

foreach-node.sh
#!/bin/bash
NODES="master01 minion01 minion02 minion03"
CMD="exec"

if [[ "$1" == "-h" ]]; then
  echo "$0 -n [NODES] -m [exec|script] [command_or_scriptfile]"
  exit 0
fi

while [[ $# -gt 0 ]]; do
  if [[ "$1" == "-n" ]]; then
    NODES="$2"
    shift 2
  elif [[ "$1" == "-m" ]]; then
    CMD="$2"
    shift 2
  else
    break
  fi
done

SUBCMD="$*" 

if [[ -z "$SUBCMD" ]]; then
  SUBCMD="echo OK"
fi

for node in $NODES; do
  if [[ "$CMD" == "exec" ]]; then
    echo -n "$node "
    ssh $node $SUBCMD
  elif [[ "$CMD" == "script" ]]; then
    echo "$node ..."
    if [[ ! -f $SUBCMD ]]; then
      echo "$SUBCMD not exists"
      exit 1
    fi  
    scp $SUBCMD $node:/tmp/a.sh
    ssh $node "chmod +x /tmp/a.sh && sudo /tmp/a.sh" 
  fi
done
chmod +x foreach-node.sh
# quick check: each node should answer with OK
./foreach-node.sh
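A few usage examples for the helper (the commands and script name below are only illustrations):

./foreach-node.sh 'uname -r'                        # run a command on every node
./foreach-node.sh -n "minion01 minion02" 'date'     # limit it to selected nodes
./foreach-node.sh -m script ./some-setup.sh         # copy a script to each node and run it with sudo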

1.3 DNS via hosts file

cat <<EOF > /tmp/add-hosts.sh
cat <<EEE >> /etc/hosts 
192.168.56.200 master01

192.168.56.101 minion01
192.168.56.102 minion02
192.168.56.103 minion03

192.168.56.200 k8s-cluster
EEE
EOF

./foreach-node.sh "sudo sed -ie s/127.0.2.1/#127.0.2.1/ /etc/hosts"
./foreach-node.sh -m script /tmp/add-hosts.sh
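Verify that every node resolves the new names:

./foreach-node.sh 'getent hosts k8s-cluster'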

2. Basic configurations for servers

2.1 Check firewall configuration

$ ./foreach-node.sh sudo ufw status
master01 Status: inactive
minion01 Status: inactive
minion02 Status: inactive
minion03 Status: inactive
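Since ufw is inactive, nothing has to be opened here. If your firewall were active, the required ports would need to be allowed first; a sketch based on the port list in the kubeadm documentation, plus UDP 8472 which flannel's VXLAN backend is assumed to use:

# control-plane node (master01)
sudo ufw allow 6443/tcp         # Kubernetes API server
sudo ufw allow 2379:2380/tcp    # etcd server client API
sudo ufw allow 10250/tcp        # kubelet API
sudo ufw allow 8472/udp         # flannel VXLAN

# worker nodes (minion01..minion03)
sudo ufw allow 10250/tcp        # kubelet API
sudo ufw allow 30000:32767/tcp  # NodePort services
sudo ufw allow 8472/udp         # flannel VXLAN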

2.2 Set SELinux in permissive mode (effectively disabling it)

$ ./foreach-node.sh 'sudo sestatus' 
master01 sudo: sestatus: command not found
minion01 sudo: sestatus: command not found
minion02 sudo: sestatus: command not found
minion03 sudo: sestatus: command not found

sestatus is not found because Ubuntu does not ship SELinux by default (it uses AppArmor instead), so there is nothing to do in this step.

2.3 Disable swap

You MUST disable swap in order for the kubelet to work properly.

./foreach-node.sh sudo swapoff -a
./foreach-node.sh 'sudo sed -ie "s:/dev/disk/by-uuid/\(.*\) none swap sw 0 0:#/dev/disk/by-uuid/\1 none swap sw 0 0:" /etc/fstab'
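Confirm that swap is really off on every node:

# swapon prints nothing and free reports 0 when swap is disabled
./foreach-node.sh 'swapon --show; free -m | grep -i swap'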

2.4 Verify the MAC address and product_uuid are unique for every node

./foreach-node.sh 'ip link | grep link/ether'
./foreach-node.sh sudo cat /sys/class/dmi/id/product_uuid

2.5 Letting iptables see bridged traffic

# Make sure that the br_netfilter module is loaded
./foreach-node.sh 'lsmod | grep br_netfilter'

# To load it explicitly
./foreach-node.sh 'sudo modprobe br_netfilter'

# Ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config
cat <<EEE > /tmp/ensure-bridge-nf.sh
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
EEE

./foreach-node.sh -m script /tmp/ensure-bridge-nf.sh
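Verify that the settings took effect:

./foreach-node.sh 'sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables'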

2.6 Sysctl for KVM

cat <<EEE > /tmp/set-sysctl.sh
cat <<EOF | sudo tee /etc/sysctl.d/kvm.conf
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.all.proxy_arp = 1
EOF
sudo sysctl --system
EEE

./foreach-node.sh -m script /tmp/set-sysctl.sh
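And check the resulting values:

./foreach-node.sh 'sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.proxy_arp'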

3. Install docker-ce as the container runtime

3.1 Install Docker Engine

cat <<EOF > /tmp/install-docker.sh
##Set up the repository
sudo apt-get update
sudo apt-get install -y \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=\$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  \$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null


##Install Docker Engine
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
EOF

./foreach-node.sh -m script /tmp/install-docker.sh
cat <<EOF > /tmp/patch.sh
mkdir -p /etc/docker/
mkdir -p /etc/systemd/system/docker.service.d

cat > /etc/docker/daemon.json <<EEE
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EEE

# Restart docker.
systemctl daemon-reload
systemctl restart docker
EOF

./foreach-node.sh -m script /tmp/patch.sh

See:
- https://github.com/schoolofdevops/kubernetes-labguide/issues/10
- https://docs.docker.com/engine/install/ubuntu/

3.2 Verify that Docker Engine is installed correctly

Run the hello-world image:

./foreach-node.sh sudo docker run hello-world
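Also confirm that Docker picked up the systemd cgroup driver configured in daemon.json, since the kubelet expects it:

# expected output: systemd on every node
./foreach-node.sh 'sudo docker info --format "{{.CgroupDriver}}"'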

4. Install kubeadm, kubelet and kubectl

NOTE: Replace the apt-key.gpg URL and the apt repository with the mirror of your choice (the Aliyun mirror is used below).

cat <<EEE > /tmp/install-kubeadm.sh 
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg

# Add the Kubernetes apt repository
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install kubelet, kubeadm and kubectl, and pin their version
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
EEE

./foreach-node.sh -m script /tmp/install-kubeadm.sh
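The script above installs whatever version the mirror currently serves as the default (v1.23.6 at the time of writing). If you need to pin an exact patch release instead, a sketch, assuming the repository's usual package versioning (the -00 revision suffix may differ on your mirror):

sudo apt-get install -y kubelet=1.23.6-00 kubeadm=1.23.6-00 kubectl=1.23.6-00
sudo apt-mark hold kubelet kubeadm kubectl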

Check kubeadm version

$ ./foreach-node.sh 'kubeadm version | awk -F, "{print \$3}"' 
master01  GitVersion:"v1.23.6"
minion01  GitVersion:"v1.23.6"
minion02  GitVersion:"v1.23.6"
minion03  GitVersion:"v1.23.6"

See: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

5. Creating k8s cluster with kubeadm

Make sure every node can resolve and reach k8s-cluster:

./foreach-node.sh ping k8s-cluster -c 1

5.1 Pull images

kubeadm needs the following images; they can be pulled in advance, as shown after the list:

$ sudo kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
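Optionally pull them in advance on master01 so that kubeadm init does not have to wait for the downloads:

sudo kubeadm config images pull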

5.2 Init cluster on master01

For the flannel pod network (see 5.4), the argument --pod-network-cidr=10.244.0.0/16 must be provided.

sudo kubeadm init \
  --control-plane-endpoint "k8s-cluster:6443" \
  --apiserver-advertise-address "192.168.56.200" \
  --pod-network-cidr=10.244.0.0/16
Output
[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-cluster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01] and IPs [10.96.0.1 192.168.56.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [192.168.56.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [192.168.56.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.001770 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: eu2pzh.mbow7h16pv2bnl4i
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-cluster:6443 --token eu2pzh.mbow7h16pv2bnl4i \
  --discovery-token-ca-cert-hash sha256:021b945cecea1f592acf107b27fc7aba098b58a4bba0295aa96118a13b81e19f \
  --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-cluster:6443 --token eu2pzh.mbow7h16pv2bnl4i \
  --discovery-token-ca-cert-hash sha256:021b945cecea1f592acf107b27fc7aba098b58a4bba0295aa96118a13b81e19f 

5.3 Create $HOME/.kube/config on master01 (or any server with kubectl installed)

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

See also Install and Set Up kubectl on Linux

5.4 Deploy a pod network (flannel) to the cluster

On master01

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/v0.17.0/Documentation/kube-flannel.yml

Check cluster status

kubectl get pod -A
kubectl get node

5.5 Join the control-plane node

(omitted)

Please refer to the kubeadm join ... --control-plane command in the output of kubeadm init above; see also the sketch below for regenerating the join credentials.
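If the bootstrap token from the kubeadm init output has already expired (tokens are valid for 24 hours by default), a sketch for generating fresh join credentials on master01:

# print a new worker join command
sudo kubeadm token create --print-join-command

# re-upload the control-plane certificates and print the certificate key;
# append "--control-plane --certificate-key <key>" to the join command above
sudo kubeadm init phase upload-certs --upload-certs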

See also Install Kubernetes (k8s) v1.22 Highly Available Clusters on CentOS 7

5.6 Join worker nodes

On minion01, minion02, minion03

# e.g.
./foreach-node.sh -n "minion01 minion02 minion03" \
  sudo kubeadm join k8s-cluster:6443 --token eu2pzh.mbow7h16pv2bnl4i \
    --discovery-token-ca-cert-hash sha256:021b945cecea1f592acf107b27fc7aba098b58a4bba0295aa96118a13b81e19f 
Output
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0425 13:33:32.424506   24882 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

5.7 Check the cluster

On master01, check the nodes:

kubectl get nodes

Output:

NAME       STATUS   ROLES                  AGE    VERSION
master01   Ready    control-plane,master   30m    v1.23.6
minion01   Ready    <none>                 109s   v1.23.6
minion02   Ready    <none>                 102s   v1.23.6
minion03   Ready    <none>                 95s    v1.23.6

Check the pods

kubectl get pod -A

Output:

NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-64897985d-cts4b            1/1     Running   0          30m
kube-system   coredns-64897985d-fvqrg            1/1     Running   0          30m
kube-system   etcd-master01                      1/1     Running   1          31m
kube-system   kube-apiserver-master01            1/1     Running   1          31m
kube-system   kube-controller-manager-master01   1/1     Running   1          31m
kube-system   kube-flannel-ds-mm65t              1/1     Running   0          2m33s
kube-system   kube-flannel-ds-sf85z              1/1     Running   0          2m26s
kube-system   kube-flannel-ds-vr67h              1/1     Running   0          13m
kube-system   kube-flannel-ds-xg8qb              1/1     Running   0          2m40s
kube-system   kube-proxy-2xlqw                   1/1     Running   0          2m33s
kube-system   kube-proxy-66n5q                   1/1     Running   0          30m
kube-system   kube-proxy-vxv4c                   1/1     Running   0          2m40s
kube-system   kube-proxy-wvvgs                   1/1     Running   0          2m26s
kube-system   kube-scheduler-master01            1/1     Running   1          31m

Cluster Info

kubectl cluster-info

Output:

Kubernetes control plane is running at https://k8s-cluster:6443
CoreDNS is running at https://k8s-cluster:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
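As an optional smoke test (the deployment name nginx-test is just an example), run a pod on the workers and expose it through a NodePort:

kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=NodePort
kubectl get pod,svc -l app=nginx-test -o wide
# then: curl http://192.168.56.101:<nodeport>   (use the port shown for the service)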

5.8 "No route to host"

If you run into "No route to host" errors inside the cluster, temporarily point each node's default route at its private interface (eth1) as shown below, and then redeploy the flannel pod network.

./foreach-node.sh -n master01 \
'sudo ip route del default ; sudo ip route add default via 192.168.56.200 dev eth1'

./foreach-node.sh -n minion01 \
'sudo ip route del default ; sudo ip route add default via 192.168.56.101 dev eth1'

./foreach-node.sh -n minion02 \
'sudo ip route del default ; sudo ip route add default via 192.168.56.102 dev eth1'

./foreach-node.sh -n minion03 \
'sudo ip route del default ; sudo ip route add default via 192.168.56.103 dev eth1'

./foreach-node.sh ip route

Afterwards, restore the original default routes via the NAT gateway on eth0:

./foreach-node.sh -n master01 \
'sudo ip route del default ; sudo ip route add default via 20.0.51.2 dev eth0 proto dhcp src 20.0.51.15 metric 100'

./foreach-node.sh -n minion01 \
'sudo ip route del default ; sudo ip route add default via 20.0.52.2 dev eth0 proto dhcp src 20.0.52.15 metric 100'

./foreach-node.sh -n minion02 \
'sudo ip route del default ; sudo ip route add default via 20.0.53.2 dev eth0 proto dhcp src 20.0.53.15 metric 100'

./foreach-node.sh -n minion03 \
'sudo ip route del default ; sudo ip route add default via 20.0.54.2 dev eth0 proto dhcp src 20.0.54.15 metric 100'

See also: https://serverfault.com/questions/123553/how-to-set-the-preferred-network-interface-in-linux

Reference