k8s_Installations

Kubernetes installation on Ubuntu 20.04, for both the Master and the Worker nodes

Pre-requisites:

  • Two or more Ubuntu 20.04 machines (one master/control-plane, the rest workers), each with at least 2 CPUs and 2 GB RAM
  • Swap disabled on all nodes (sudo swapoff -a), as required by kubeadm
  • Network connectivity between all the nodes and a unique hostname on each

Configuration:

  1. Install Docker on all the nodes
    sudo apt update
    curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
    sudo usermod -aG docker ubuntu
    exit
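After re-logging in so the docker group membership takes effect, the installation can be sanity-checked; the exact versions printed will vary per install:

```shell
# Verify the Docker daemon is running and the 'ubuntu' user can reach it
docker version                 # client and server versions should both print
docker run --rm hello-world    # pulls and runs a test container
```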
    
  2. Install cri-dockerd (the CRI adapter that lets Kubernetes use Docker as its runtime). The steps below are specific to Ubuntu 20.04; see the Mirantis cri-dockerd releases page for other versions
    wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.16/cri-dockerd_0.3.16.3-0.ubuntu-focal_amd64.deb
    sudo dpkg -i cri-dockerd_0.3.16.3-0.ubuntu-focal_amd64.deb
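Assuming the package installed cleanly, the .deb sets up the cri-docker systemd units, which can be checked before moving on:

```shell
# cri-dockerd should report its version and expose the CRI socket
cri-dockerd --version
systemctl status cri-docker.socket --no-pager
ls /var/run/cri-dockerd.sock    # the socket kubeadm is pointed at later via --cri-socket
```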
    
  3. Alternatively, build cri-dockerd from source. Install Go and build it by executing the following commands on all the nodes
    sudo -i
    wget https://go.dev/dl/go1.23.4.linux-amd64.tar.gz && tar -C /usr/local -xzf go1.23.4.linux-amd64.tar.gz
    export PATH=$PATH:/usr/local/go/bin
    git clone https://github.com/Mirantis/cri-dockerd.git
    cd cri-dockerd
    mkdir bin
    go build -o bin/cri-dockerd
    mkdir -p /usr/local/bin
    install -o root -g root -m 0755 bin/cri-dockerd /usr/local/bin/cri-dockerd
    cp -a packaging/systemd/* /etc/systemd/system
    sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
    systemctl daemon-reload
    systemctl enable cri-docker.service
    systemctl enable --now cri-docker.socket
    exit
    exit
    
  4. Next, install kubeadm, kubelet and kubectl on all the nodes in the cluster
    sudo apt-get update
    sudo apt-get install -y apt-transport-https ca-certificates curl gpg
    sudo mkdir -p -m 755 /etc/apt/keyrings
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    
    echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
    
    sudo apt-get update
    sudo apt-get install -y kubelet kubeadm kubectl
    sudo apt-mark hold kubelet kubeadm kubectl
    exit
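Before initializing the cluster, it can help to confirm all three binaries are installed and pinned:

```shell
kubeadm version
kubelet --version
kubectl version --client
apt-mark showhold    # should list kubelet, kubeadm and kubectl
```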
    
  5. Re-login, then initialize the cluster. Pick one node to be the Master/control-plane, log in to it, and execute the following as root
    sudo -i
    kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock
    
OUTPUT:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.25.16:6443 --token sz14lp.jwkx2vy49w54fk79 \
        --discovery-token-ca-cert-hash sha256:25fe0576979b9306d911139f22c47f02240ab63731619a745b0e396ddf9fbe46
  1. On the master node, to run kubectl as a normal user, execute the following commands as that non-root user
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
  2. Check for all the available nodes
    kubectl get nodes
    
  3. Kubernetes needs a CNI plugin so that the pod network is enabled. Until this is done, DNS does not work and Services do not work, so the status of the nodes shows 'NotReady'. Install any CNI implementation (here, Flannel)

  4. For k8s networking, the reference specification is CNI, and many vendors provide implementations (see the Kubernetes add-ons installation page). We are going to use Flannel. Execute the following on the master node as a normal user
    kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
    
  5. Check if the status of all the nodes including master is ready
    kubectl get nodes -w 
    
  6. Enable kubectl command autocompletion on the command line (from the kubectl Cheat Sheet)
    source <(kubectl completion bash)
    echo "source <(kubectl completion bash)" >> ~/.bashrc
    
  7. After Flannel is successfully installed on the Master/control-plane, execute the following command on all the other nodes as root, allowing them to join the cluster
    kubeadm join 172.31.25.16:6443 --token sz14lp.jwkx2vy49w54fk79 \
        --discovery-token-ca-cert-hash sha256:25fe0576979b9306d911139f22c47f02240ab63731619a745b0e396ddf9fbe46 --cri-socket "unix:///var/run/cri-dockerd.sock"
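Note that the join token printed by kubeadm init expires after 24 hours by default. If it has expired, a fresh join command can be generated on the master:

```shell
# Run on the master node; prints a complete 'kubeadm join ...' command
kubeadm token create --print-join-command
```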
    

    Issue ~

    • If you are getting the following error while joining the nodes to the cluster
      [preflight] Running pre-flight checks
      error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
      [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
      To see the stack trace of this error execute with --v=5 or higher
      

Resolution ~

See kubeadm issue #1062, "kubeadm complains about bridge-nf-call and ip_forward if not using docker runtime"
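Per that issue thread, the usual fix is to load the br_netfilter kernel module and enable the bridge/IP-forward sysctls on the failing node before re-running kubeadm join:

```shell
# Load the bridge netfilter module (and persist it across reboots)
sudo modprobe br_netfilter
echo "br_netfilter" | sudo tee /etc/modules-load.d/k8s.conf

# Enable the sysctls that kubeadm's preflight checks look for
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```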

Some useful commands –

kubectl get no
kubectl get nodes
kubectl get nodes -o wide
kubectl api-resources
kubectl api-resources | grep pod
kubectl get pods -A -w
kubectl apply -f xyz.yaml
kubectl get pods -o wide
kubectl get pods <pod-name> -o yaml
kubectl describe pods <pod-name>
kubectl delete -f xyz.yaml
kubectl delete pods <pod-name>
kubectl get po -n kube-system -w
kubectl logs <pod-name>
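In the apply/delete commands above, xyz.yaml stands for any manifest. A minimal example (hypothetical pod name, official nginx image) can be fed straight from stdin to exercise several of these commands:

```shell
# Create a single-container pod, inspect it, then clean up
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo        # hypothetical name, for illustration only
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80
EOF

kubectl get pods -o wide
kubectl describe pods nginx-demo
kubectl delete pods nginx-demo
```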

References ~

Kubernetes Networking Model

Imperative Commands