
Saturday, 5 August 2017

Script to create a test mongodb sharded cluster

Many times, for testing, we need to start a MongoDB sharded cluster. It is very tedious to manually start all the config servers, shards and mongos. I wrote a shell script which creates a test sharded MongoDB cluster on a single machine. The script is available in my GitHub account: https://github.com/pranabsharma/scripts/blob/master/mongodb/create_shard.sh
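For reference, the script automates steps roughly like the ones below (the ports, paths and replica set names here are illustrative, not the exact values used by the script):
# config server, one shard and a mongos, started by hand
mongod --configsvr --replSet cfgRS --port 27019 --dbpath /data/cfg0 --fork --logpath /data/cfg0.log
mongod --shardsvr --replSet shard0RS --port 27018 --dbpath /data/shard0 --fork --logpath /data/shard0.log
mongos --configdb cfgRS/localhost:27019 --port 27017 --fork --logpath /data/mongos.log
# then rs.initiate() on the config and shard replica sets, and on the mongos:
# sh.addShard("shard0RS/localhost:27018")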

Monday, 24 July 2017

Kubernetes: Flannel Docker IP issue

I have a test Kubernetes cluster with 1 master and 2 nodes, running on CoreOS. I have the Kubernetes DNS service running in the cluster, along with some test pods. I faced a strange problem: in some containers DNS was getting resolved and in others it was not. Checking the containers where DNS was not getting resolved, I found that they were unable to connect to the DNS pod. While checking the IPs of the pods using the kubectl command, I found that two pods running on different nodes had the same IP, which should not be possible in a Kubernetes cluster. So there was definitely a network misconfiguration in the cluster. The pods were getting IPs not from the flannel service IP range, but from the local Docker IP range 172.17.0.0. So clearly Docker was not picking up the IP range from the flannel service.
Investigating the issue, I found that the etcd master had not started in the proper sequence and was listening on the localhost interface instead of the Ethernet interface. Because of that, the client etcd services on each node were unable to connect to the master etcd. As a result the flannel service also errored out and did not start on each node.
When the flannel service starts properly, it creates the file /run/flannel/flannel_docker_opts.env. This file contains the host system's docker0 network interface IP (bridge IP):
DOCKER_OPT_BIP="--bip=10.244.10.1/24"
When Docker starts, it reads this file through the unit setting
EnvironmentFile=-/run/flannel/flannel_docker_opts.env
and loads the environment variables from /run/flannel/flannel_docker_opts.env, configuring itself accordingly.
When the flannel service does not start properly, the environment variables required to start Docker are not written to /run/flannel/flannel_docker_opts.env. Because of that, the docker service was starting with the default bip 172.17.0.1.
To resolve the issue, changes were made on the master node so that the etcd master starts properly. In my cloud-config file /var/lib/coreos-install/user_data I added one unit directive to restart the etcd service once my static network interface is configured:
- name: etcd2.service
  command: restart
Now after booting, etcd was properly listening on my static IP.
I also edited the docker service so that it starts after the flannel service:
systemctl edit docker.service

[Unit]
After=containerd.service docker.socket network-online.target flanneld.service flannel-docker-opts.service

Requires=containerd.service docker.socket flanneld.service

Next, on each node of the Kubernetes cluster, I changed the cloud-config file /var/lib/coreos-install/user_data.
First I added a script that checks whether a port is open; if the port is not open, it checks again after 1 second. I will use this script to check whether the etcd master service is up.
write-files:
  - path: /opt/bin/checkport
    permissions: '0755'
    content: |
      #!/bin/bash
      # This script waits till the port is accessible
      [ -n "$1" ] && [ -n "$2" ] && while ! curl -s http://${1}:${2} > /dev/null; \
      do sleep 1 && echo -n .; done;
      exit $?
- name: etcd2.service
  command: restart
  drop-ins:
    - name: 30-wait-for-server.conf
      content: |
        [Service]
        # wait for kubernetes master to be up and ready
        ExecStartPre=/opt/bin/checkport 192.168.10.75 2380
The above drop-in checks whether our etcd master (192.168.10.75) is up and listening on port 2380. If the master etcd service is not up yet, the node's etcd service waits until the master is reachable before starting.
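The helper can also be run by hand on a node to verify that the master is reachable before restarting the services, for example:
# /opt/bin/checkport 192.168.10.75 2380 && echo "etcd master reachable"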
Again, I edited the docker service on each node to start after the flannel service:
systemctl edit docker.service

[Unit]
After=containerd.service docker.socket network-online.target flanneld.service flannel-docker-opts.service

Requires=containerd.service docker.socket flanneld.service

After restarting everything, the docker service started picking up the bridge IP from the flannel service.
Also, the pods are now getting correct IPs from the flannel IP range.
Checklist:
1. Use ifconfig and check whether the flannel and docker interface IPs are in sync.
2. Check the IP subnet ranges in etcd and whether each flannel node is using the correct subnet:
   curl http://127.0.0.1:2379/v2/keys/coreos.com/network/subnets
3. Check the IPs of the pods (see the example commands below).
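A few example commands for this checklist (the flannel interface name depends on the backend, e.g. flannel0 or flannel.1):
# ifconfig flannel0
# ifconfig docker0
# curl http://127.0.0.1:2379/v2/keys/coreos.com/network/subnets
# kubectl get pods -o wide      # pod IPs should be unique and come from the flannel range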


Tuesday, 27 June 2017

MongoDB Recipes: Change mongod’s log level

To view the current log verbosity levels, use the db.getLogComponents() method.
In the mongod instance below, the verbosity is 0 (the default, informational level). Here we can see that the verbosity of the individual components is -1, which means a component inherits the log level of its parent.
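For reference, a truncated sketch of what db.getLogComponents() typically returns on a default mongod:
{
        "verbosity" : 0,
        "accessControl" : { "verbosity" : -1 },
        "command" : { "verbosity" : -1 },
        "network" : { "verbosity" : -1 },
        "storage" : {
                "verbosity" : -1,
                "journal" : { "verbosity" : -1 }
        },
        ...
}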
We can configure log verbosity levels in three ways:

  1. Method 1: Using mongod’s startup settings
  2. Method 2: Using the logComponentVerbosity parameter
  3. Method 3: Using the db.setLogLevel() method.


Method 1:

We can configure the global verbosity level using the systemLog.verbosity setting.
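For example, a config-file sketch (YAML format; the value 1 is illustrative) that raises the global verbosity:
systemLog:
   verbosity: 1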
We can also change the verbosity level of an individual log component using the systemLog.component.<name>.verbosity setting for that component.

For example, we are changing the verbosity level of the network component to 0 using the setting systemLog.component.network.verbosity.
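A config-file sketch for that component setting:
systemLog:
   component:
      network:
         verbosity: 0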

Method 2:

To change log verbosity levels using the logComponentVerbosity parameter, pass a document with the required verbosity settings.
For example, we are going to set the default verbosity level to 2, the verbosity level of storage to 1, and storage.journal to 0:
> use admin
> db.runCommand( {
    setParameter: 1,
    logComponentVerbosity: {
      verbosity: 2,
      storage: {
        verbosity: 1,
        journal: {
          verbosity: 0
        }
      }
    }
} )


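Running db.getLogComponents() afterwards should reflect the new levels, roughly:
{
        "verbosity" : 2,
        ...
        "storage" : {
                "verbosity" : 1,
                "journal" : { "verbosity" : 0 }
        },
        ...
}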

Method 3:

We can use the db.setLogLevel() method to update a single component’s log level.
Examples (the corresponding shell commands are shown below):
  • Change the default verbosity level to 0
  • Change the storage.journal log level to 4
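In the mongo shell these calls look like this (the second argument names the component):
> db.setLogLevel(0)
> db.setLogLevel(4, "storage.journal")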

Wednesday, 7 June 2017

MongoDB Recipes: Change chunk size in a sharded cluster

The default chunk size in a sharded cluster is 64 MB. If we want to change it to a smaller or larger size (the allowed range of the chunk size is between 1 and 1024 MB), there are two methods:

Method 1:

This method works both before and after the sharded cluster is initialized and is the recommended method.
  • Log in to any mongos of the sharded cluster.
  • To change the global chunk size, update the value field of the chunksize document in the config database; here we are changing the chunk size to 5 MB (see the commands below).
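For example, in the mongo shell connected to a mongos:
mongos> use config
mongos> db.settings.save( { _id: "chunksize", value: 5 } )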

Method 2:

This method works only when you initialize the cluster for the first time. After the cluster is initialized, this method will not change the chunk size of the cluster.
Start the mongos with sharding.chunkSize (in the config file) or --chunkSize (command-line option) set to the desired chunk size value.
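A config-file sketch for the mongos (the 5 MB value is illustrative):
sharding:
   chunkSize: 5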

Wednesday, 24 May 2017

How to change resource limit of a running docker container dynamically

Say we are running a Docker container with a constraint on its resource usage. After some time we discover that the container needs more resources, and we have to relax the resource limit for that container without stopping it.
I am describing two methods for changing the resource limit of a container.
For example, we run a container with a memory limit of 128 MB:
docker run -it -m 128m ubuntu /bin/bash
You can find all the information about the memory limits under /sys/fs/cgroup/memory/docker/<full container ID>.
We can get the full container ID by running the docker ps command with the --no-trunc option:
docker ps --no-trunc
The memory limit for a container can be found in the file
/sys/fs/cgroup/memory/docker/<container full id>/memory.limit_in_bytes
#cat /sys/fs/cgroup/memory/docker/b2bebfd78782ff345c92a6e44535e61d001187a2f15ce171679729eebfd7c327/memory.limit_in_bytes
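Since we started the container with -m 128m, the file contains the limit in bytes (128 * 1024 * 1024):
134217728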
We can check the memory utilization by the container using the docker stats command:
# docker stats b2bebfd78782
Let's run the stress tool in the container and check the utilization:
# stress --vm 1 --vm-bytes 512M
Checking the resource utilization again:
Although we specified 512 MB in the stress command, the container has a limit of 128 MB, so stress is unable to get 512 MB of RAM and ends up occupying the full 128 MB.
Let's increase the memory limit to 1 GB.
Method 1:
We can directly change the value of /sys/fs/cgroup/memory/docker/<full container ID>/memory.limit_in_bytes to the desired number of bytes, and this will change the memory limit to the value we want.
echo 1073741824 > /sys/fs/cgroup/memory/docker/b2bebfd78782ff345c92a6e44535e61d001187a2f15ce171679729eebfd7c327/memory.limit_in_bytes
Again we will check the memory utilization:
Yes, we can see that the memory limit has been increased to 1 GB.
This change is temporary; once the container is restarted, it takes whatever memory setting was specified when the container was created.
 
Method 2:
Another simple way to change the resource limit is to use the docker update command. For example, say we want to change the memory limit to 512 MB:
# docker update b2bebfd78782 -m 512M
This will update the memory limit for the container permanently.
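After the update, and assuming no conflicting --memory-swap limit is set, the cgroup file should reflect the new value (512 * 1024 * 1024 bytes):
# cat /sys/fs/cgroup/memory/docker/b2bebfd78782ff345c92a6e44535e61d001187a2f15ce171679729eebfd7c327/memory.limit_in_bytes
536870912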
For reference, the usage of docker update:
Usage: docker update [OPTIONS] CONTAINER [CONTAINER...]
Update configuration of one or more containers
      --blkio-weight         Block IO (relative weight), between 10 and 1000
  -c, --cpu-shares           CPU shares (relative weight)
      --cpu-period           Limit CPU CFS (Completely Fair Scheduler) period
      --cpu-quota             Limit CPU CFS (Completely Fair Scheduler) quota
      --cpuset-cpus          CPUs in which to allow execution (0-3, 0,1)
      --cpuset-mems          MEMs in which to allow execution (0-3, 0,1)
      --help                 Print usage
      --kernel-memory        Kernel memory limit
  -m, --memory               Memory limit
      --memory-reservation   Memory soft limit
      --memory-swap          Swap limit equal to memory plus swap: '-1' to enable unlimited swap
      --restart              Restart policy to apply when a container exits


Friday, 19 May 2017

MongoDB Recipes: Disable replica set chaining

By default MongoDB allows replica set chaining, meaning a secondary member may sync from another secondary. Suppose we want our secondaries to sync only from the primary and not from any other secondary. In that case we can disable replica set chaining.
  • Save replica set configuration in a variable:
    repl1:PRIMARY> cfg = rs.conf()
  • If the settings sub-document is not present in the config, add it:
    repl1:PRIMARY> cfg.settings = {}
  • Set the chainingAllowed property to false (the default is true) in the cfg variable:
    repl1:PRIMARY> cfg.settings.chainingAllowed = false
  • Apply the new configuration from the cfg variable:
    repl1:PRIMARY> rs.reconfig(cfg)
  • Check the new settings:
    repl1:PRIMARY> rs.conf()
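In the rs.conf() output, the settings sub-document should now show chaining disabled, roughly:
"settings" : {
        ...
        "chainingAllowed" : false,
        ...
}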

Tuesday, 16 May 2017

Encrypting the shell scripts

Sometimes we need to encrypt a shell script for security reasons, for example if the script contains sensitive information like passwords.
For this task I am going to use the shc tool (http://www.datsi.fi.upm.es/~frosal/sources/shc.html) to convert my plain-text shell script into a binary file. Download the source code of the shc tool from http://www.datsi.fi.upm.es/~frosal/sources/ and extract the gzip-compressed tar archive. Here I am going to use version 3.8.9.
Note: I used Ubuntu 14.04 for this example.
If make is not installed, install it:
# apt-get install make
Go inside the shc-3.8.9 source folder.
# cd shc-3.8.9
# make


Now install shc
# make install
If the installation fails with a "directory not found" error, create the /usr/local/man/man1 directory and run the command again:

# mkdir /usr/local/man/man1
# make install

Remove the shc source folder after it is installed
# cd ..
# rm -rf shc-3.8.9/

Our shc tool is installed; we are now going to convert our shell script into a binary.
Go to the folder where the shell script is stored. My script name is mysql_backup.
Create the binary version of the shell script using the following command:
# shc -f mysql_backup
The shc command creates two additional files:
# ls -l mysql_backup*
-rwxrw-r-- 1 pranab pranab 149 Mar 27 01:09 mysql_backup
-rwx-wx--x 1 pranab pranab 11752 Mar 27 01:12 mysql_backup.x
-rw-rw-r-- 1 pranab pranab 10174 Mar 27 01:12 mysql_backup.x.c
 
mysql_backup is the original unencrypted shell script.
mysql_backup.x is the encrypted shell script in binary format.
mysql_backup.x.c is the C source code generated from the mysql_backup file. This C source code is compiled to create the encrypted mysql_backup.x binary above.
We will remove the original shell script (mysql_backup) and the C file (mysql_backup.x.c), and rename the binary file (mysql_backup.x) to the original script name (mysql_backup).
# rm -f  mysql_backup.x.c
# rm -f mysql_backup
# mv mysql_backup.x mysql_backup
 
Now we have our binary shell script; the contents of this file cannot be easily read as it is a binary file.
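To double-check, something like the following can be used (the exact output will vary by system); running the binary executes the original script:
# file mysql_backup
mysql_backup: ELF 64-bit LSB executable, x86-64, dynamically linked ...
# ./mysql_backup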