
Wednesday, 3 August 2022

Recovering DigitalOcean droplet landing on grub-rescue prompt

One night we rebooted one of our DigitalOcean Ubuntu 18.04 droplets (VM), and after starting, the VM gave an error and went directly to the GRUB rescue prompt.

The error was displayed as:

error: file /boot/grub/i386-pc/normal.mod not found

grub rescue> 

We used the ls command, which shows all the disk devices and partitions connected to our VM.

grub rescue> ls

(hd0) (hd0,gpt15) (hd0,gpt14) (hd0,gpt1) (hd1) (hd2)


grub rescue> ls (hd0,gpt1)/


From the output of this command, we could see that our /boot directory was missing.

Since the /boot folder was missing (probably deleted by mistake), we also tried restoring a droplet backup. Unfortunately, the /boot folder was missing in the available droplet backups as well, and the droplet would not start.

So we decided to go with the DigitalOcean recovery option. We stopped our VM, took a snapshot, and proceeded to boot from the Recovery ISO.

1) Go to the Recovery link in the DigitalOcean console after the VM is shut down, and select the Boot from Recovery ISO option.

Turn the VM on and go to the recovery console.

Click on Launch Recovery Console.

2) Once you are in the recovery console, choose option 1 (Mount Your Disk Image). This will mount our droplet's root volume at /mnt.

3) Then choose option 6 to go to the Interactive Shell.
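
Once you are in the interactive shell, you can optionally confirm that the droplet's filesystem is really mounted under /mnt before continuing. This is just a sanity check of ours, not part of DigitalOcean's procedure:

df -h /mnt    # shows the mounted droplet volume
ls /mnt       # should list the droplet's root filesystem (etc, home, usr, var, ...)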

4) In the interactive shell, execute the below commands:

  a. mount -o bind /dev /mnt/dev
  b. mount -o bind /dev/pts /mnt/dev/pts
  c. mount -o bind /proc /mnt/proc
  d. mount -o bind /sys /mnt/sys
  e. mount -o bind /run /mnt/run

5) Change root into the mounted disk so that subsequent commands run against the droplet's own filesystem.

chroot /mnt

6) Create the GRUB config file using the command:
/usr/sbin/grub-mkconfig -o /boot/grub/grub.cfg

7) Our droplet's disk is /dev/vda, so we will install GRUB on this disk:
/usr/sbin/grub-install /dev/vda
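
If you are not sure which device is your droplet's disk, a quick way to check from the shell (before or after chrooting) is to list the block devices; /dev/vda is simply what we saw on our droplet:

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT    # the disk holding the root partition is the one to install GRUB on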

8) At this point, we can exit from the chrooted environment.
exit

9) Shut down the VM and turn it on again from the VM's hard drive. But in our case the VM didn't boot and went to the grub> console.

10) To resolve this, I rebooted the VM and performed steps 1-7 above again.
After that, I upgraded the installed packages:
a) apt update
b) apt upgrade
But the apt upgrade command failed with the below error:

Could not find /boot/grub/menu.lst file.
Would you like /boot/grub/menu.lst generated for you? (y/N)
/usr/sbin/update-grub-legacy-ec2: line 1101: read: read error: 0: Bad file descriptor

11) To resolve the error, I created the /boot/grub/menu.lst file manually.
touch /boot/grub/menu.lst

12) After that, I ran the apt upgrade command again.
Now the apt command showed the below question for the /boot/grub/menu.lst file.
From the available options, select the first one, "install the package maintainer's version".

13) This time the apt upgrade command was successful.
After that, exit from the chrooted environment using the exit command.

14) Shut down the recovery environment:
shutdown -h now 

15) Start the VM after selecting the Boot from Hard Drive option from DigitalOcean's Recovery Link. 

This time our recovery was successful and the VM started without any issue. 


Thursday, 31 March 2022

Kubernetes Pod timezone not consistent with the host: Configure timezone for a Pod

Containers in a pod do not inherit the time zone of the host worker machine they run on. The default timezone for most container images is UTC, which can lead to inconsistencies in time handling within the cluster. We may need to change a container's timezone so that these time discrepancies are avoided.


For example, we are running an Nginx pod in our K8s cluster using the official Nginx image.

Let's create a deployment configuration file nginx-timezone.yaml for the Nginx pod, as shown below:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-timezone
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80



Now we are creating the deployment:


# kubectl apply -f nginx-timezone.yaml


Our deployment is created, and so is the Nginx pod.
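
We can verify this with kubectl; the generated pod name will have a random suffix in your cluster:

# kubectl get deployment nginx-timezone
# kubectl get pod -l app=nginx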







Now if we check the timezone on the host and in the Pod's container, we can see that the host is in the IST timezone while the Nginx container is in UTC.
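
One way to do this comparison is to run date on the worker node itself and then inside the container (the pod name below is only a placeholder; take the real name from kubectl get pod):

# date
# kubectl exec -it <nginx-pod-name> -- date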








I am going to describe two methods we can use to change the container's timezone so that the host's and the container's time are in sync.



Method 1:


The first method is to use the TZ environment variable.

We will update the deployment configuration file to add the TZ environment variable; in our case we are going to set the container's timezone to Asia/Kolkata. The modified nginx-timezone.yaml is shown below:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-timezone
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
        env:
        - name: TZ
          value: "Asia/Kolkata"




Now we will apply the new configuration. As the new pod is created and the previous one is terminated, we can see that the new container's timezone now shows as IST, which is our desired timezone.













Method 2:

Sometimes the TZ environment variable method does not work. In those cases, we can use the hostPath volume mount method to change the container’s timezone. 


 

Linux systems look at the /etc/localtime file to determine the machine's timezone. /etc/localtime is symlinked to one of the zoneinfo files located in the /usr/share/zoneinfo directory. So if we are located in India, we fall under the Asia/Kolkata zone; when we set our timezone to Asia/Kolkata, /etc/localtime is symlinked to the /usr/share/zoneinfo/Asia/Kolkata file.
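
For example, on a machine whose timezone is already set to Asia/Kolkata, resolving the symlink shows the zone file it points to:

# readlink -f /etc/localtime
/usr/share/zoneinfo/Asia/Kolkata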



We are going to mount a specific zone file from the host worker machine's /usr/share/zoneinfo directory as the /etc/localtime file in the container. For example, we need to set the container's timezone to Asia/Kolkata, so we will mount the host machine's /usr/share/zoneinfo/Asia/Kolkata file onto the container's /etc/localtime.

This makes the container use the timezone of the zone file that we mounted as a hostPath volume.


Note: A hostPath volume mounts a file or directory from the host node's filesystem into your Pod.


So our updated Nginx deployment configuration file will be:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-timezone
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: zoneconfig
          mountPath: /etc/localtime
          readOnly: true
      volumes:
      - name: zoneconfig
        hostPath:
          path: /usr/share/zoneinfo/Asia/Kolkata



Now applying the changes:
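
As before, this is just a kubectl apply of the updated manifest, after which the same date check can be repeated inside the new pod (the pod name is again a placeholder):

# kubectl apply -f nginx-timezone.yaml
# kubectl exec -it <nginx-pod-name> -- date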








We can see that the timezone of our container changed to IST (Asia/Kolkata) ✌✌






Thursday, 20 January 2022

Bootstrapping your own Kubernetes clusters for testing and development

In this document, I am going to show you the simplest and quickest way to get your own Kubernetes cluster ready for testing, learning and development purposes. It is not recommended for production scenarios.

I will use one master node and two worker nodes for this demonstration. I am using VirtualBox VMs with all the nodes running Ubuntu 20.04, and the scripts that I am going to use are for Ubuntu only.


Prerequisites:

  • Minimum RAM per node should be 2 GB

  • 2 CPU cores per node

  • Swap off on all the nodes

    • Run swapoff command on each node:
      $ sudo swapoff -a 

    • Disable any swap entry in the /etc/fstab file (one way to do this is shown after this list)
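
A common way to disable the fstab swap entry non-interactively is to comment it out with sed; this assumes a standard /etc/fstab layout where "swap" appears as a separate field, so check the file afterwards:

$ sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab    # comments out every swap line, keeping a .bak backup copy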


Recommendation:

  • The nodes should ideally be in the same local subnet, and they should be able to communicate with each other without any firewall in between.

  • If you are using VMs in some cloud provider, ensure that the VMs are in the same VCN and subnet. You can configure the security list/cloud firewall so that the VMs can interact with each other for all the ports needed in a Kubernetes cluster.


Initial Setup:

Suppose my VMs are named this way:


Node      IP
master    192.168.0.51
worker1   192.168.0.52
worker2   192.168.0.53


You can add these entries to the hosts file on all the VMs so that they can communicate with each other by hostname. So edit the /etc/hosts file on each VM and add the following lines:

192.168.0.51 master

192.168.0.52 worker1

192.168.0.53 worker2


Now we are ready to start the installation.



Master Node:

On the master node, run the scripts step by step in the order shown below:

Step 1:

 
Install container runtime containerd using the script:
https://github.com/pranabsharma/scripts/blob/master/kubernetes/installation/install_containerd.sh 

Download the script and run it
ubuntu@master:~$ ./install_containerd.sh
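
If you are fetching the script from the command line, note that the link above is the GitHub page for the file; the raw URL below is my assumption of the corresponding raw.githubusercontent.com address, and the file has to be made executable before running:

ubuntu@master:~$ wget https://raw.githubusercontent.com/pranabsharma/scripts/master/kubernetes/installation/install_containerd.sh
ubuntu@master:~$ chmod +x install_containerd.sh
ubuntu@master:~$ ./install_containerd.sh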


Step 2:

Install the kubectl, kubeadm and kubelet using the script:

https://github.com/pranabsharma/scripts/blob/master/kubernetes/installation/install_kubeTools.sh

Download the script and run it

ubuntu@master:~$ ./install_kubeTools.sh



Step 3: 

Download the below script and run it ONLY on your master node:

https://github.com/pranabsharma/scripts/blob/master/kubernetes/installation/run_on_master.sh 


Download the script and run it

ubuntu@master:~$ ./run_on_master.sh


This script does the following tasks:

  • Run kubeadm to initialize a Kubernetes control-plane on the master node.

  • Deploy the Weave Net CNI plugin to manage the Kubernetes pod networking. 

  • Copy the kubeconfig file to the user's home directory so that kubectl commands can be run without specifying a kubeconfig file (the generic equivalent commands are shown after this list).
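
For reference, the kubeconfig copy corresponds to the usual post-kubeadm-init steps; I have not reproduced the script's exact contents here, this is just the generic equivalent:

ubuntu@master:~$ mkdir -p $HOME/.kube
ubuntu@master:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
ubuntu@master:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config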



Our master node and control-plane are ready. At this point we will get the following status of our cluster:


ubuntu@master:~$ kubectl get node

NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   50m   v1.23.2



ubuntu@master:~$ kubectl get pod -n kube-system

NAME                             READY   STATUS    RESTARTS      AGE
coredns-64897985d-fvnhj          1/1     Running   0             51m
coredns-64897985d-wq6z5          1/1     Running   0             51m
etcd-master                      1/1     Running   0             51m
kube-apiserver-master            1/1     Running   0             51m
kube-controller-manager-master   1/1     Running   0             51m
kube-proxy-hnk2z                 1/1     Running   0             51m
kube-scheduler-master            1/1     Running   0             51m
weave-net-gjvqq                  2/2     Running   1 (50m ago)   51m





Worker Node


Installation steps on the worker nodes are the same as on the master; the only difference is that we skip Step 3 of the master node (Step 3 is for setting up the control plane). Run the scripts as shown in Step 1 and Step 2:

Step 1:

 
Install container runtime containerd using the script:
https://github.com/pranabsharma/scripts/blob/master/kubernetes/installation/install_containerd.sh 

Download the script and run it
ubuntu@worker1:~$ ./install_containerd.sh


Step 2:

Install the kubectl, kubeadm and kubelet using the script:

https://github.com/pranabsharma/scripts/blob/master/kubernetes/installation/install_kubeTools.sh

Download the script and run it

ubuntu@worker1:~$ ./install_kubeTools.sh



Adding Worker Nodes to the cluster


At this point our required software and services for the Kubernetes cluster are ready. The final step is to add the worker nodes to the cluster. 


Step 1:


We are going to create a new token for the worker node to join the cluster.


Run the below command on master node:


ubuntu@master:~$ kubeadm token create --print-join-command


This command will output the command to join the cluster. The output will be something like this:

kubeadm join 192.168.0.51:6443 --token pk9v0f.o8valhztkblohsmu --discovery-token-ca-cert-hash sha256:9e046d3f15e49c7363ec7a762767b169a296d6af7150aad56d21d54399a2df6f


Copy the output, we will need it in the next step.


Step 2:


Run the copied command on the worker nodes:


ubuntu@worker1:~$ kubeadm join 192.168.0.51:6443 --token pk9v0f.o8valhztkblohsmu --discovery-token-ca-cert-hash sha256:9e046d3f15e49c7363ec7a762767b169a296d6af7150aad56d21d54399a2df6f



Immediately after running the above command on the worker node, if we check the nodes in the cluster we may get the below output:


ubuntu@master:~$ kubectl get node

NAME      STATUS     ROLES                  AGE   VERSION
master    Ready      control-plane,master   54m   v1.23.2
worker1   NotReady   <none>                 39s   v1.23.2


After some time, the worker node will come into the ready state.
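
You can watch for this from the master node, for example:

ubuntu@master:~$ kubectl get node -w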


In the same way we can add the worker2 node also.


That's it, and our Kubernetes cluster is ready to rock!!! Super easy, isn't it?