
Friday, 27 September 2024

Maintaining your docker registry

In this post, I will discuss how to maintain a private Docker registry.

1. Check Available Repositories in the Registry

We can use the Docker Registry HTTP API to get a list of repositories.

Using the Docker Registry API:

  • Step 1: First, you need to ensure your registry is accessible over the network.

  • Step 2: Use curl or any HTTP client to make an API request to list all repositories:       

         curl -u username:password -X GET https://your-domain.com/v2/_catalog
  • Replace username and password with your registry credentials.
  • Replace your-domain.com with your Docker registry domain or IP.

This will return a JSON object containing a list of repositories in your Docker registry:
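
A typical response looks something like this (the repository names here are just for illustration):

    {"repositories":["myapp1","myapp2","nginx-custom"]}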

2. Check Available Tags in a Repository

Once we know the repository names, we can list the tags for a specific repository:

  • Step 1: Use the following API call to get the tags of a specific repository:

    curl -u username:password -X GET https://your-domain.com/v2/repository-name/tags/list
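
    This should again return a small JSON object; for example (the tag names shown here are illustrative):

    {"name":"repository-name","tags":["1.0","2.0","latest"]}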

3. Delete a Repository or Specific Tag

The Docker Registry API does not support removing an entire repository directly, but we can delete specific image tags, which effectively removes the image from the registry.

Using the Registry API to Delete Manifests (Tags):

  • Step 1: Find the digest of the image tag you want to delete:

    curl -u username:password -I -H "Accept: application/vnd.docker.distribution.manifest.v2+json" https://your-domain.com/v2/repository-name/manifests/tag-name

    This will return headers, including the image's digest in the docker-content-digest header:
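
    For example, the relevant header will look something like this (the digest value is illustrative):

    docker-content-digest: sha256:abc123def456...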

  • Step 2: Use the digest to delete the image:

    curl -u username:password -X DELETE https://your-domain.com/v2/repository-name/manifests/sha256:abc123def456...


    You might get the error "The operation is unsupported." This occurs because, by default, the Docker Registry does not support deletion of manifests. To enable deleting images or tags in the Docker registry, you need to modify the registry configuration to allow deletions.

    If you're running the registry with Docker Compose, add the environment variable REGISTRY_STORAGE_DELETE_ENABLED: true to the registry service in your compose file, for example:
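
    A minimal sketch of how the registry service might look in the compose file (the service name, port mapping, and volume path are assumptions for illustration):

    services:
      registry:
        image: registry:2
        ports:
          - "5000:5000"
        environment:
          REGISTRY_STORAGE_DELETE_ENABLED: "true"
        volumes:
          - /path/to/registry/data:/var/lib/registry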



    After modifying the configuration, we need to restart the Docker registry to apply the changes:
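
    Assuming the registry runs under Docker Compose (as above), something like this should apply the change:

    docker-compose down
    docker-compose up -d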


    Now your delete will be successful.


    This deletes the manifest that the tag points to (along with any other tags referencing the same digest). Even after deletion, the underlying data might still remain in the registry until a garbage collection is run.


4. Garbage Collection to Fully Remove Deleted Repositories/Tags

After deleting manifests or tags, run garbage collection to clean up the registry:

  • Step 1: Stop your Docker Registry (you need to ensure no push or pull operations happen while the GC is running):

    $ docker-compose down

  • Step 2: Run Garbage Collection inside the registry container:

    $ docker run --rm -v /path/to/registry/data:/var/lib/registry registry:2 bin/registry garbage-collect /etc/docker/registry/config.yml


    Replace /path/to/registry/data with the path to your registry data, usually mounted in the container.

  • Step 3: Restart the registry:

    $ docker-compose up -d
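
Tip: before running the actual garbage collection in Step 2, you can preview what would be removed by adding the --dry-run flag (a sketch using the same paths as above):

    $ docker run --rm -v /path/to/registry/data:/var/lib/registry registry:2 bin/registry garbage-collect --dry-run /etc/docker/registry/config.yml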


Thursday, 5 September 2024

Docker registry, repository, and versioning: Know the difference between registry and repository

 

Docker Registry:

Think of the Docker registry as the entire online library where all the books (Docker images) are stored. A registry is a centralized place that hosts multiple repositories of Docker images. It allows users to upload (push) or download (pull) Docker images.

  • Example: DockerHub, Amazon ECR, or a private Docker registry you set up are examples of Docker registries. They are like the library building that holds many collections of books.

 

Docker Repository:

A Docker repository is like a specific shelf or section in that library, where all the different versions of a particular book are stored. A repository holds a set of related Docker images, usually different versions of the same application.

  • Example: If you have an application called "my-app", the repository would be called my-app. Inside that repository, you might have several versions (images), such as my-app:1.0, my-app:2.0, etc. These different versions are like different editions of the same book on that shelf.

 

In simple terms:

  • Docker Registry = Big library holding many collections (repositories) of images.
  • Docker Repository = A shelf inside the library, containing different versions of a specific application (images).

 

Scenario:

Consider the scenario, where we have a Docker registry set up at myregistry.example.com using a Docker registry container, and we want to store 25 applications (myapp1 to myapp25).

Let’s see how the concept of registry, repository, and versions will work in this scenario:

1. Registry:

The registry is myregistry.example.com. It will act as the central server to host all our Docker images (for all 25 applications). This is where we’ll push (upload) and pull (download) images.

 

2. Repositories for Each Application:

Each application (from myapp1 to myapp25) will have its own repository inside the registry. Think of each repository as a separate directory or space within the registry dedicated to one application.

  • Example:
    • myapp1 will have its own repository at myregistry.example.com/myapp1.
    • myapp2 will have its own repository at myregistry.example.com/myapp2.
    • Similarly, all other applications up to myapp25 will have their own repositories.

 

3. Versioning:

We can maintain different versions (tags) of each application by tagging the Docker images. Docker uses tags to differentiate between versions of the same image in a repository.

For each application repository, we’ll push different versions like this:

  • For myapp1, you might have versions 1.0, 2.0, 3.0, etc.
    • Image names: myregistry.example.com/myapp1:1.0, myregistry.example.com/myapp1:2.0, etc.
  • For myapp2, you’ll have similar versioning:
    • Image names: myregistry.example.com/myapp2:1.0, myregistry.example.com/myapp2:2.0, etc.

 

4. Push and Pull Workflow:

Here’s how push and pull will work for each application:

Pushing an Image:

To push an image to your registry, you first need to tag it with the correct repository and version (tag) name, then push it to the registry.

For example:

  1. You build a Docker image for myapp1 locally:
     docker build -t myapp1 .
  2. You tag the image with the registry URL, repository, and version:
     docker tag myapp1 myregistry.example.com/myapp1:1.0
  3. You push the image to the registry:
     docker push myregistry.example.com/myapp1:1.0

We can repeat this process for different versions of myapp1 or for other applications (myapp2, myapp3, etc.).

 

Pulling an Image:

To pull an image from the registry, we’ll specify the registry URL, repository, and the version we want.

For example:

1.     To pull version 1.0 of myapp1 from the registry:

docker pull myregistry.example.com/myapp1:1.0

2.     To pull version 2.0 of myapp2:

docker pull myregistry.example.com/myapp2:2.0

 

 

Example Push/Pull Flow for Multiple Applications:

1.     For myapp1, version 1.0:

docker build -t myapp1 .
docker tag myapp1 myregistry.example.com/myapp1:1.0
docker push myregistry.example.com/myapp1:1.0

2.     For myapp2, version 2.0:

docker build -t myapp2 .
docker tag myapp2 myregistry.example.com/myapp2:2.0
docker push myregistry.example.com/myapp2:2.0

3.     To pull a specific version of an app:

docker pull myregistry.example.com/myapp1:1.0
docker pull myregistry.example.com/myapp2:2.0

 

 

Maintaining latest version:

In Docker, the latest version of an image is usually represented by the latest tag. This is not automatically assigned—we have to manually tag and push the image with latest if we want it to be the default version pulled when no specific version is provided.

How to Maintain the Latest Version:

1.     Tagging the Latest Version: Each time we release a new version of an application that we want to be the default (or latest), we will need to tag it with latest in addition to its specific version tag.

2.     Pulling the Latest Version: If we don't specify a version when pulling, Docker will automatically pull the image tagged as latest.

 

Example of Maintaining the Latest Version:

Let's take the example of myapp1.

Step 1: Build Your Application Image

We have a new version of myapp1 (version 3.0) and want this to be the latest.

docker build -t myapp1 .

 

Step 2: Tag with the Version and latest

Now, tag the image with both the specific version (3.0) and latest:

# Tag with version 3.0
docker tag myapp1 myregistry.example.com/myapp1:3.0
 
# Tag the same image as 'latest'
docker tag myapp1 myregistry.example.com/myapp1:latest
 
 
 

Step 3: Push Both Tags

Push both the versioned and the latest tags to our registry:

# Push version 3.0
docker push myregistry.example.com/myapp1:3.0
 
# Push the latest tag
docker push myregistry.example.com/myapp1:latest
 
 
 

Step 4: Pulling the Latest Version

When someone pulls myapp1 without specifying a version, Docker will pull the image tagged as latest:

# This will pull 'myapp1:latest'
docker pull myregistry.example.com/myapp1

If someone wants to pull a specific version, they can still do so by specifying the version tag:

# This will pull 'myapp1:3.0'
docker pull myregistry.example.com/myapp1:3.0

 

 

Keeping the latest Tag Up-to-Date:

  • Whenever we push a new version that you want to be the default, tag it as latest along with the specific version number.
  • The latest tag will always point to the most recent image that you tagged as latest.

 

Example Push Flow for Keeping latest Up-to-Date:

Let’s say we’ve already pushed versions 1.0 and 2.0, and now we’re pushing version 3.0:

1.     Build the new version:

docker build -t myapp1 .

2.     Tag it as both version 3.0 and latest:

docker tag myapp1 myregistry.example.com/myapp1:3.0
docker tag myapp1 myregistry.example.com/myapp1:latest

3.     Push both tags to the registry:

docker push myregistry.example.com/myapp1:3.0
docker push myregistry.example.com/myapp1:latest

Saturday, 1 June 2024

Revoking a JWT (JSON Web Token) before its expiry

JWTs are designed to be stateless and are valid until they expire, which can pose a security risk if they are leaked or stolen. Revoking a JWT (JSON Web Token) before its expiry can be necessary in several scenarios, mostly revolving around security concerns or changes in user status.

Here are some common examples where revoking a JWT is crucial:

1) User logout: When a user logs out, it's a security best practice to ensure that the JWT the user was using is immediately invalidated to prevent further use. This helps enforce that logout effectively ends access, rather than allowing the token to remain valid until it naturally expires.

2) Change of user permission or role: If a user's roles or permissions are changed, you might need to revoke any existing tokens. This ensures that any new requests from the user adhere to their updated permissions, preventing access based on outdated privileges.

3) Security Breaches: If you detect that a user's credentials have been compromised, revoking their active JWTs can help mitigate unauthorized access. This is particularly important if you suspect that tokens have been stolen or exposed to third parties.

4) Suspension or Deletion of Accounts: If a user's account is suspended or deleted, all associated JWTs should be revoked to prevent any further activity.

5) Password Changes: Following a password change, particularly if the change was prompted by security concerns (like a potential breach), it's sensible to revoke any existing tokens. This prevents the old tokens from being used by anyone who might have gained unauthorized access before the password was updated.

6) Anomalies in User Behavior: If abnormal activity is detected in a user’s account, such as logging in from an unusual location or multiple failed attempts to access restricted resources, it might be wise to revoke their JWTs until the activity can be reviewed. This could prevent ongoing or escalating security issues.


In each of these above example scenarios, revoking a JWT is about ensuring that the system's current state aligns with the security and operational policies of the application. Revoking a JWT (JSON Web Token) before its expiry can be crucial for maintaining the security of the application, especially in cases where a token might be compromised.

Here are some strategies that we can employ to effectively manage and revoke JWTs before their expiration:

1. Use a Token Revocation List: A common method is to maintain a token revocation list on your server. Whenever you need to revoke a token, you add its unique identifier to this list. Each time a token is presented to the server, you check if it's on this revocation list. If it is, you treat the token as invalid, even if it's not expired.

2. Have Short Expiry Times: Another approach is to use very short expiry times for your tokens and require frequent re-authentication or token refreshing. This limits the window in which a stolen or leaked token can be used. During the refresh process, you can perform additional checks and refuse to issue a new token if the user's credentials have been revoked.

3. Implement a Blacklist with Cache: Similar to using a revocation list, you can implement a blacklist service, potentially using a fast, in-memory data store like Redis (see the sketch below). This approach can be particularly effective in environments with high scalability requirements, where checking a database might introduce too much latency.

4. Change the Secret Key: In some extreme cases, such as a breach where multiple tokens are compromised, you can invalidate all issued tokens by changing the secret key used to sign the JWTs. This approach requires issuing new tokens for all active users, which can be disruptive but is highly effective in mitigating damage from a wide-scale token compromise.

5. Use Stateful Tokens: If feasible, consider using stateful JWTs. This involves storing token metadata in a database or another storage mechanism. When a token needs to be revoked, you can simply mark it as invalid in the storage system. This approach combines the benefits of token-based authentication with the revocability of session-based authentication.
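
As a rough illustration of the in-memory blacklist idea in strategy 3, the sketch below uses redis-cli; the key prefix and the jti value are hypothetical, and the TTL (EX 900) should match the token's remaining lifetime:

# When revoking a token, store its unique ID (the jti claim) with an expiry
redis-cli SET revoked:4f1g23a12aa 1 EX 900

# On every request, the API checks the key before trusting the token (1 = revoked)
redis-cli EXISTS revoked:4f1g23a12aa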


Regularly rotate your secrets and update your token validation logic to keep up with potential vulnerabilities.

Monday, 13 May 2024

MySQL 8 silent installation on Windows 11

In this post I am going to show you how to install MySQL 8.* on a Windows 11 machine. For this post I am going to use the MySQL 8.4 version.

Open the Windows Command Prompt with administrative privileges.

Step 1: To install MySQL 8.4, we need the Visual C++ Redistributable for Visual Studio 2019 (x64) installed on our Windows 11 machine. So first we will install this prerequisite:

Download the Visual C++ 2019 (x64) redistributable from the URL:

https://download.visualstudio.microsoft.com/download/pr/c7707d68-d6ce-4479-973e-e2a3dc4341fe/1AD7988C17663CC742B01BEF1A6DF2ED1741173009579AD50A94434E54F56073/VC_redist.x64.exe

Install the Visual C++ 2019 (x64) redistributable silently from the command prompt:

VC_redist.x64.exe /q /norestart


Step 2: Create the MySQL data directory; the MySQL databases will reside in this directory:

mkdir C:\ProgramData\MySQL\Data


Step 3: Create a directory to store the MySQL config file:

mkdir C:\ProgramData\MySQL\Config


Step 4: Create the mysql.ini config file inside the C:\ProgramData\MySQL\Config folder:


[client]
port=3306

[mysql]
no-beep

[mysqld]
port=3306
datadir=C:/ProgramData/MySQL/Data
default-storage-engine=INNODB
lower_case_table_names=1


Step 5: Install MySQL silently 

mysql-8.4.0-winx64.msi /qn INSTALLDIR="C:\Program Files\MySQL"
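
If invoking the MSI directly does not pass the switches through on your system, the equivalent explicit msiexec call should work (assuming the MSI file is in the current directory):

msiexec /i mysql-8.4.0-winx64.msi /qn INSTALLDIR="C:\Program Files\MySQL"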


Step 6: Create MySQL Windows Service:

"C:\Program Files\MySQL\bin\mysqld" --install MySQL --defaults-file=C:\ProgramData\MySQL\Config\mysql.ini


Step 7: Initialize MySQL:

"C:\Program Files\MySQL\bin\mysqld" --defaults-file=C:\ProgramData\MySQL\Config\mysql.ini  --initialize-insecure


Step 8: Add MySQL Path to environment variable:

setx /M PATH "%PATH%;C:\Program Files\MySQL\bin"


Step 9: Start MySQL service:

net start MySQL
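
Since we initialized the data directory with --initialize-insecure, the root account has an empty password, so a quick sanity check (from a new command prompt, so the updated PATH is picked up) could be:

mysql -u root -e "SELECT VERSION();"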


We can add the above commands in a script and run that script as admin user to make this installation completely silent. 

Wednesday, 3 August 2022

Recovering DigitalOcean droplet landing on grub-rescue prompt

One night we rebooted one of our DigitalOcean Ubuntu 18.04 droplets (VMs), and after starting, the VM gave an error and went directly to the grub rescue prompt.

The error was displayed as 

error: file /boot/grub/i386-pc/normal.mod not found

grub rescue> 

We used the ls command, which shows all the disk devices and partitions connected to our VM.

grub rescue> ls

(hd0) (hd0,gpt15) (hd0,gpt14) (hd0,gpt1) (hd1) (hd2)


grub rescue> ls (hd0,gpt1)/


From this listing we could see that our /boot directory was missing.

As the /boot folder was missing, or had probably been deleted by mistake, we also tried restoring a droplet backup. But unluckily, the /boot folder was missing in the available backups as well, and the droplet still would not start.

So we decided to go with the DigitalOcean recovery option. We stopped our VM, took a snapshot, and proceeded to boot from the Recovery ISO.

1) After the VM is shut down, go to the Recovery link in the DigitalOcean console and select the Boot from Recovery ISO option.

Turn the VM on and go to the recovery console.

Click on Launch Recovery Console.

2) Once you are in the recovery console, choose option 1. Mount Your Disk Image. This will mount our droplet's root volume.

3) Then choose option 6 to go to the Interactive Shell.

4) In the interactive shell, execute the below commands:

  a. mount -o bind /dev /mnt/dev
  b. mount -o bind /dev/pts /mnt/dev/pts
  c. mount -o bind /proc /mnt/proc
  d. mount -o bind /sys /mnt/sys
  e. mount -o bind /run /mnt/run

5) Change root for your mounted disk and go to droplet’s root directory.

chroot /mnt

6) Create the GRUB config file using the command:

/usr/sbin/grub-mkconfig -o /boot/grub/grub.cfg

7) Our droplet’s disk is /dev/vda and we will install GRUB on this disk:

/usr/sbin/grub-install /dev/vda
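
If the installation succeeds, grub-install typically prints something like the following (exact wording may vary by version):

Installing for i386-pc platform.
Installation finished. No error reported.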

8) At this point, we can exit from the chrooted environment.
exit

9) Shut down the VM and turn it on again from the VM's hard drive. But in our case the VM didn't boot and went to the grub> console.

10) To resolve this, I rebooted the VM and performed steps 1-7 above again.
After that, I upgraded the installed packages:
a) apt update
b) apt upgrade
But the apt upgrade command failed with the below error:

Could not find /boot/grub/menu.lst file.
Would you like /boot/grub/menu.lst generated for you? (y/N)
/usr/sbin/update-grub-legacy-ec2: line 1101: read: read error: 0: Bad file descriptor

11) To resolve the error, I created the /boot/grub/menu.lst file manually.
touch /boot/grub/menu.lst

12) After that, I ran the apt upgrade command again.
Now the apt command showed me the below question for the /boot/grub/menu.lst file.
From the available options, select the first one, "install the package maintainer's version".

13) This time the apt upgrade command was successful.
After that, we can exit from the chrooted environment using the exit command.

14) Shut down the recovery environment:
shutdown -h now

15) Start the VM after selecting the Boot from Hard Drive option from DigitalOcean's Recovery Link. 

This time our recovery was successful and the VM started without any issue. 


Thursday, 31 March 2022

Kubernetes Pod timezone not consistent with the host: Configure timezone for a Pod

Containers in a pod do not inherit time zones from the host worker machines on which they are running. Usually, the default timezone for most container images is UTC, so this may lead to inconsistencies in time management within the cluster. We may need to change the timezone of a container so that such time discrepancies can be avoided.


For example, we are running an Nginx pod in our K8s cluster using the official Nginx image. 

Let’s create a deployment configuration file, nginx-timezone.yaml, for the Nginx pod as shown below:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-timezone
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80



Now we are creating the deployment:


# kubectl apply -f nginx-timezone.yaml


Our deployment is created, and so is the Nginx pod.
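
We can confirm this with a quick check (the output will differ in your cluster):

# kubectl get deployment,pods -l app=nginx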

Now if we check the timezone in the host and the Pod’s container, we can see that the host has IST timezone and the Nginx container has UTC timezone.
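
For example, a check along these lines (the deployment name matches the manifest above) shows the difference:

# date
# kubectl exec deploy/nginx-timezone -- date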

I am going to write about two methods using which we can change the container’s timezone so that our host’s & container’s time are in sync.



Method 1:


The first method is to use the TZ environment variable.

We will update the deployment configuration file to add the TZ environment variable; in our case we are going to set the timezone of the container to Asia/Kolkata. The modified nginx-timezone.yaml is shown below:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-timezone
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
        env:
        - name: TZ
          value: "Asia/Kolkata"




Now we will apply the new configuration. As the new pod is created and the previous one is terminated, we can check again and see that the new container's timezone now shows as IST, which is our desired timezone.
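
A quick way to verify (a sketch; the output will vary):

# kubectl apply -f nginx-timezone.yaml
# kubectl exec deploy/nginx-timezone -- date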


Method 2:

Sometimes the TZ environment variable method does not work. In those cases, we can use the hostPath volume mount method to change the container’s timezone. 


 

Linux systems look at the /etc/localtime file to determine the timezone of the machine. /etc/localtime is symlinked to one of the zoneinfo files located in the /usr/share/zoneinfo directory. So if we are located in India, we fall under the Asia/Kolkata zone; if we set our timezone to Asia/Kolkata, /etc/localtime will be symlinked to the /usr/share/zoneinfo/Asia/Kolkata file.



We are going to mount a specific zone file from the host worker machine's /usr/share/zoneinfo directory as the /etc/localtime file in the container. For example, to set the timezone of the container to Asia/Kolkata, we will mount the host machine's /usr/share/zoneinfo/Asia/Kolkata file onto the container's /etc/localtime file.

This will make the container use the timezone of the zone file that we mounted as a hostPath volume.


Note: A hostPath volume mounts a file or directory from the host node's filesystem into your Pod.


So our updated Nginx deployment configuration file will be:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-timezone
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: zoneconfig
          mountPath: /etc/localtime
          readOnly: true
      volumes:
      - name: zoneconfig
        hostPath:
          path: /usr/share/zoneinfo/Asia/Kolkata



Now applying the changes:
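
For example (same deployment name as before):

# kubectl apply -f nginx-timezone.yaml

Re-running the date check from earlier should now report IST inside the container.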

We can see that the timezone of our container changed to IST (Asia/Kolkata) ✌✌