
Wednesday 20 September 2017

Ubuntu upstart service for my golang web application

I have a web application written in Go that I need to deploy as a service on an Ubuntu server; say the name of the app is hello.
Copy the app to some directory on the server (e.g. the /app directory, so the application binary is /app/hello).
Create an upstart script (e.g. hello.conf) and place it in /etc/init.
We run the binary using the following line:

exec start-stop-daemon --start \
--chuid $DAEMONUSER:$DAEMONGROUP \
--pidfile /var/run/hello.pid \
--make-pidfile \
--exec $DAEMON $DAEMON_OPTS
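
For reference, a minimal hello.conf might look like the sketch below. The env values are assumptions chosen to match the start-stop-daemon line above; adjust the paths and the user/group to your setup.

description "hello golang web application"

start on runlevel [2345]
stop on runlevel [016]

respawn

env DAEMON=/app/hello
env DAEMON_OPTS=""
env DAEMONUSER=www-data
env DAEMONGROUP=www-data

exec start-stop-daemon --start \
--chuid $DAEMONUSER:$DAEMONGROUP \
--pidfile /var/run/hello.pid \
--make-pidfile \
--exec $DAEMON $DAEMON_OPTS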


To send the stdout and stderr of the application to a log file (e.g. /logs/app/hello/hello.log), we can edit the start-stop-daemon command line:

exec start-stop-daemon --start \
--chuid $DAEMONUSER:$DAEMONGROUP \
--pidfile /var/run/hello.pid \
--make-pidfile \
--startas /bin/bash -- -c "exec $DAEMON $DAEMON_OPTS >> /logs/app/hello/hello.log 2>&1"

Now say we want to collect garbage collection summaries and the Go scheduler trace in the log file. We have to set the Go runtime's GODEBUG environment variable.
GODEBUG=gctrace=1 enables the garbage collector (GC) trace. The garbage collector emits a single line to standard error at each collection, summarizing the amount of memory collected and the length of the garbage collection pause.
To investigate the operation of the runtime scheduler directly, and to get insight into the dynamic behaviour of the goroutine scheduler, we can enable the scheduler trace by setting:
GODEBUG=schedtrace=1000
The value 1000 is in milliseconds, so the above setting makes the scheduler emit a single line to standard error every second.
We can combine both the garbage collection and scheduler traces as GODEBUG=gctrace=1,schedtrace=30000.
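
Before wiring this into the init script, it is worth verifying the trace output interactively (a quick check, assuming the binary path from above):

GODEBUG=gctrace=1,schedtrace=1000 /app/hello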


So we edit the start-stop-daemon command line once more:

exec start-stop-daemon --start \
--chuid $DAEMONUSER:$DAEMONGROUP \
--pidfile /var/run/hello.pid \
--make-pidfile \
--startas /bin/bash -- -c "exec /usr/bin/env GODEBUG=gctrace=1,schedtrace=30000 $DAEMON $DAEMON_OPTS >> /logs/app/hello/hello.log 2>&1"


A garbage collection log line looks like:
gc 56 @27.196s 0%: 0.010+3.7+0.014 ms clock, 0.010+0.80/2.6/0+0.014 ms cpu, 4->4->0 MB, 5 MB goal, 1 P
gc 57 @27.260s 0%: 0.007+2.1+0.010 ms clock, 0.007+0.35/1.0/0+0.010 ms cpu, 4->4->0 MB, 5 MB goal, 1 P

56: the GC number, incremented at each GC
@27.196s: time in seconds since program start
0%: percentage of time spent in GC since program start
0.010+3.7+0.014 ms clock: wall-clock times for the phases of the GC
0.010+0.80/2.6/0+0.014 ms cpu: CPU times for the phases of the GC
4->4->0 MB: heap size at GC start (4MB), at GC end (4MB), and live heap (0MB)
5 MB goal: goal heap size
1 P: number of processors used; here 1 processor is used

A scheduler trace line looks like:
SCHED 25137ms: gomaxprocs=1 idleprocs=0 threads=4 spinningthreads=0 idlethreads=1 runqueue=2 [98]

25137ms: time since program start
gomaxprocs=1: the current value of GOMAXPROCS. The GOMAXPROCS variable limits the number of operating system threads that can execute user-level Go code simultaneously. Starting with Go 1.5, GOMAXPROCS defaults to the number of CPUs available.
idleprocs=0: number of processors that are not busy; here 0 processors are idle.
threads=4: number of threads that the runtime is managing.
spinningthreads=0: number of spinning threads.
idlethreads=1: number of threads that are not busy; here 1 thread is idle (so 3 are busy).
runqueue=2: the length of the global queue of runnable goroutines.
[98]: the number of goroutines in each processor's local run queue. On a machine with multiple processors we see one value per processor, e.g. [2 2 2 3].
The complete init script is available here: https://github.com/pranabsharma/scripts/blob/master/initScripts/Go/hello.conf

Saturday 5 August 2017

Script to create a test MongoDB sharded cluster

Quite often we need to start a MongoDB sharded cluster for testing, and manually starting all the config servers, shards and mongos processes is very tedious. I wrote a shell script which creates a test sharded MongoDB cluster on a single machine. The script is available in my GitHub account: https://github.com/pranabsharma/scripts/blob/master/mongodb/create_shard.sh

Monday 24 July 2017

Kubernetes: Flannel Docker IP issue

I have a test Kubernetes cluster with 1 master and 2 nodes, running on CoreOS, with the Kubernetes DNS service and some test pods running in the cluster. I faced a strange problem: in some containers DNS names were getting resolved and in some containers they were not. Checking the containers where DNS was not getting resolved, I found that these containers were unable to connect to the DNS pod. While checking the IPs of the pods using kubectl, I found two pods with the same IP running on different nodes, which should not be possible in a Kubernetes cluster, so there was definitely a network misconfiguration. The pods were getting IPs not from the flannel service IP range but from the local docker IP range 172.17.0.0/16, so clearly docker was not picking up its IP range from the flannel service.
Investigating the issue, I found that the etcd master had not started in the proper sequence and was listening on the localhost interface, not on the Ethernet interface. Because of that, the client etcd services on the nodes were unable to connect to the master etcd, and as a result the flannel service also errored out and did not start on the nodes.
When the flannel service starts properly, it creates the file /run/flannel/flannel_docker_opts.env. This file contains the host system's docker0 network interface IP (bridge IP), e.g.:
DOCKER_OPT_BIP="--bip=10.244.10.1/24"
When docker starts, it reads this file via the line
EnvironmentFile=-/run/flannel/flannel_docker_opts.env
in its unit file, loads the environment variables from /run/flannel/flannel_docker_opts.env and configures itself accordingly.
When the flannel service does not start properly, the environment variables required by docker are not written to /run/flannel/flannel_docker_opts.env. Because of that, the docker service was starting with the default bridge IP 172.17.0.1.
To resolve the issue, I made changes on the master node so that the etcd master starts properly. In my cloud config file /var/lib/coreos-install/user_data I added one unit to restart the etcd service once my static network interface is configured:
  - name: etcd2.service
    command: restart
Now after booting, etcd was properly listening on my static IP.
I also edited the docker service to start after the flannel service:

systemctl edit docker.service

[Unit]
After=containerd.service docker.socket network-online.target flanneld.service flannel-docker-opts.service
Requires=containerd.service docker.socket flanneld.service

Next, on each node of the Kubernetes cluster, I changed the cloud config file /var/lib/coreos-install/user_data.
First I added a script that checks whether a port is open and, if it is not, checks again after 1 second. I will use this script to check whether the etcd master service is up.
write_files:
  - path: /opt/bin/checkport
    permissions: '0755'
    content: |
      #!/bin/bash
      # This script waits till the port is accessible
      [ -n "$1" ] && [ -n "$2" ] && while ! curl -s http://${1}:${2} > /dev/null; \
      do sleep 1 && echo -n .; done;
      exit $?
  - name: etcd2.service
    command: restart
    drop-ins:
      - name: 30-wait-for-server.conf
        content: |
          [Service]
          # wait for kubernetes master to be up and ready
          ExecStartPre=/opt/bin/checkport 192.168.10.75 2380
The above drop-in checks whether our etcd master (192.168.10.75) is up and listening on port 2380; if the master etcd service is not up, it waits for it.
I again edited the docker service on each node to start after the flannel service, adding the same [Unit] After= and Requires= drop-in shown above via systemctl edit docker.service.

After restarting everything, the docker service started picking up the bridge IP from the flannel service, and the pods got correct IPs from the flannel IP range.
Checklist:
1. Use ifconfig and check whether the flannel and docker interface IPs are in sync.
2. Check the IP subnet ranges in etcd and whether each flannel node is using the correct subnet (see the sample query below):
curl http://127.0.0.1:2379/v2/keys/coreos.com/network/subnets
3. Check the IPs of the pods.
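
On a healthy cluster, the etcd query above returns one subnet key per flannel node. A hypothetical, abridged response for illustration (the subnets and PublicIP values depend on your flannel network config):

{"action":"get","node":{"key":"/coreos.com/network/subnets","dir":true,"nodes":[
  {"key":"/coreos.com/network/subnets/10.244.10.0-24","value":"{\"PublicIP\":\"192.168.10.76\"}"},
  {"key":"/coreos.com/network/subnets/10.244.63.0-24","value":"{\"PublicIP\":\"192.168.10.77\"}"}]}}

Each node's docker0 bridge IP should fall inside the subnet that flannel assigned to that node.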


Tuesday 27 June 2017

MongoDB Recipes: Change mongod’s log level

To view the current log verbosity levels, use the db.getLogComponents() method.
In the mongod instance below, the verbosity is 0 (the default, informational level). We can see that the verbosity of the individual components is -1, which means a component inherits the log level of its parent.
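A sketch of typical db.getLogComponents() output (abridged; the exact component list varies by MongoDB version):

> db.getLogComponents()
{
        "verbosity" : 0,
        "accessControl" : { "verbosity" : -1 },
        "command" : { "verbosity" : -1 },
        "network" : { "verbosity" : -1 },
        "storage" : {
                "verbosity" : -1,
                "journal" : { "verbosity" : -1 }
        },
        ...
}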
We can configure log verbosity levels in three ways:

  1. Using mongod's startup settings
  2. Using the logComponentVerbosity parameter
  3. Using the db.setLogLevel() method


Method 1:

We can configure the global verbosity level using the systemLog.verbosity setting.
We can also change the verbosity level of an individual log component using the systemLog.component.<name>.verbosity setting for that component.

For example, to change the verbosity level of the network component to 0, we set:
systemLog.component.network.verbosity
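
A minimal config-file sketch combining both settings (the destination and path values are assumptions; only the verbosity keys matter here):

systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
  verbosity: 0
  component:
    network:
      verbosity: 0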

Method 2:

To change log verbosity levels using the logComponentVerbosity parameter, pass a document with the required verbosity settings.
For example, we set the default verbosity level to 2, the verbosity level of storage to 1, and storage.journal to 0:
> use admin
> db.runCommand( { setParameter: 1, logComponentVerbosity:
      {
        verbosity: 2,
        storage: {
          verbosity: 1,
          journal: {
            verbosity: 0
          }
        }
      }
  } )



Method 3:

We can use the db.setLogLevel() method to update the log level of a single component.
Examples:
  • Change the default verbosity level to 0:
    db.setLogLevel(0)
  • Change the storage.journal log level to 4:
    db.setLogLevel(4, "storage.journal")

Wednesday 7 June 2017

MongoDB Recipes: Change chunk size in a sharded cluster

The default chunk size in a sharded cluster is 64MB. If we want to change it to a smaller or larger size (the allowed range of the chunk size is between 1 and 1024 MB), there are two methods:

Method 1:

This method works before or after the sharded cluster is initialized and is the recommended method.
  • Log in to any mongos of the sharded cluster.
  • To change the global chunk size, update the value field of the chunksize document in the config database; here we are changing the chunk size to 5MB, as in the sketch below.
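
A sketch of the shell commands, run from a mongo shell connected to a mongos:

mongos> use config
mongos> db.settings.save( { _id: "chunksize", value: 5 } )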

Method 2:

This method works only when you initialize the cluster for the first time; after the cluster is initialized, it will not change the chunk size.
Start the mongos with sharding.chunkSize (in the config file) or --chunkSize (command line option) set to the desired chunk size value.
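
For example (a sketch; the config server connection string cfgRS/cfg1:27019 is hypothetical):

mongos --configdb cfgRS/cfg1:27019 --chunkSize 5

or, in the mongos config file:

sharding:
  chunkSize: 5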

Wednesday 24 May 2017

How to change resource limit of a running docker container dynamically

Say we are running a docker container with constraints on its resource usage. After some time we discover that the container needs more resources, and we have to relax the resource limit for that container without stopping it.
Below are two methods for changing the resource limit of a container.
For example, we run a container with a memory limit of 128MB:
docker run -it -m 128m ubuntu /bin/bash
You can find all the memory-related information under /sys/fs/cgroup/memory/docker/<full container id>.
We can get the full container ID by running the docker ps command with the --no-trunc option:
docker ps --no-trunc
The memory limit for a container can be found in the file
/sys/fs/cgroup/memory/docker/<container full id>/memory.limit_in_bytes
#cat /sys/fs/cgroup/memory/docker/b2bebfd78782ff345c92a6e44535e61d001187a2f15ce171679729eebfd7c327/memory.limit_in_bytes
134217728
We can check the memory utilization by the container using the docker stats command:
# docker stats b2bebfd78782
Let's run the stress tool in the container and check the utilization:
# stress --vm 1 --vm-bytes 512M
Checking the resource utilization again: although we specified 512MB in the stress command, the container has a limit of 128MB RAM, so stress is unable to get 512MB and is occupying the full 128MB.
Let’s increase the RAM to 1GB:
Method 1:
We can directly write the desired number of bytes to /sys/fs/cgroup/memory/docker/<container full id>/memory.limit_in_bytes, and this will change the memory limit to the value we want:
echo 1073741824 > /sys/fs/cgroup/memory/docker/b2bebfd78782ff345c92a6e44535e61d001187a2f15ce171679729eebfd7c327/memory.limit_in_bytes
Checking the memory limit again, we can see that it has been increased to 1GB.
This change is temporary: once the container is restarted, it goes back to whatever memory setting was specified when the container was created.
 
Method 2:
Another simple way to change the resource limit is the docker update command. For example, to change the memory limit to 512MB:
# docker update b2bebfd78782 -m 512M
This updates the memory limit for the container permanently, i.e. it survives container restarts.
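
We can verify the new limit from the cgroup file (assuming the same container ID as above; 512MB is 536870912 bytes):

# cat /sys/fs/cgroup/memory/docker/b2bebfd78782ff345c92a6e44535e61d001187a2f15ce171679729eebfd7c327/memory.limit_in_bytes
536870912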
Usage: docker update [OPTIONS] CONTAINER [CONTAINER...]

Update configuration of one or more containers

  --blkio-weight          Block IO (relative weight), between 10 and 1000
  -c, --cpu-shares        CPU shares (relative weight)
  --cpu-period            Limit CPU CFS (Completely Fair Scheduler) period
  --cpu-quota             Limit CPU CFS (Completely Fair Scheduler) quota
  --cpuset-cpus           CPUs in which to allow execution (0-3, 0,1)
  --cpuset-mems           MEMs in which to allow execution (0-3, 0,1)
  --help                  Print usage
  --kernel-memory         Kernel memory limit
  -m, --memory            Memory limit
  --memory-reservation    Memory soft limit
  --memory-swap           Swap limit equal to memory plus swap: '-1' to enable unlimited swap
  --restart               Restart policy to apply when a container exits


Friday 19 May 2017

MongoDB Recipes: Disable replica set chaining

By default MongoDB allows replica set chaining; that means a secondary member is allowed to sync from another secondary. Suppose we want our secondaries to sync only from the primary and not from any other secondary. In that case we can disable replica set chaining.
  • Save the replica set configuration in a variable:
    repl1:PRIMARY> cfg = rs.conf()
  • If the settings sub-document is not present in the config, add it:
    repl1:PRIMARY> cfg.settings = {}
  • Set the chainingAllowed property in the cfg variable to false (the default is true):
    repl1:PRIMARY> cfg.settings.chainingAllowed = false
  • Apply the new configuration from the cfg variable:
    repl1:PRIMARY> rs.reconfig(cfg)
  • Check the new settings:
    repl1:PRIMARY> rs.conf()

Tuesday 16 May 2017

Encrypting the shell scripts

Sometimes we need to encrypt a shell script for security reasons, for example if the script contains sensitive information like passwords.
For this task I am going to use the shc tool (http://www.datsi.fi.upm.es/~frosal/sources/shc.html) to convert my plain-text shell script into a binary file. Download the source code of the shc tool from http://www.datsi.fi.upm.es/~frosal/sources/ and extract the gzip-compressed tar archive. Here I am going to use version 3.8.9.
Note: I used Ubuntu 14.04 for this example.
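
For example (the archive file name is an assumption based on the version used here):

# tar -xvzf shc-3.8.9.tgz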
If make is not installed, then install make
# apt-get install make
Go inside the shc-3.8.9 source folder.
# cd shc-3.8.9
# make


Now install shc:
#make install
If installation fails with directory not found error, create the /usr/local/man/man1 directory and run the command again.

#mkdir /usr/local/man/man1
# make install

Remove the shc source folder after it is installed
# cd ..
# rm -rf shc-3.8.9/

Our shc tool is installed; we are now going to convert our shell script into a binary.
Go to the folder where the shell script is stored. My script's name is mysql_backup.
Create the binary file of the shell script using the following command:
# shc -f mysql_backup
The shc command creates 2 additional files:
# ls -l mysql_backup*
-rwxrw-r-- 1 pranab pranab 149 Mar 27 01:09 mysql_backup
-rwx-wx--x 1 pranab pranab 11752 Mar 27 01:12 mysql_backup.x
-rw-rw-r-- 1 pranab pranab 10174 Mar 27 01:12 mysql_backup.x.c
 
mysql_backup is the original unencrypted shell script.
mysql_backup.x is the encrypted shell script in binary format.
mysql_backup.x.c is the C source code of the mysql_backup file. This C source code is compiled to create the encrypted mysql_backup.x file above.
We will remove the original shell script (mysql_backup) and the C file (mysql_backup.x.c), and rename the binary file (mysql_backup.x) to the original script name (mysql_backup).
# rm -f  mysql_backup.x.c
# rm -f mysql_backup
# mv mysql_backup.x mysql_backup
 
Now we have our binary shell script; its contents cannot be easily inspected since it is a binary file.
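
The binary is executed exactly like the original script:

# ./mysql_backup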

Monday 20 February 2017

Making Pen Drive Write Protected

Making a pen drive read-only is very device specific; it mainly depends on the pen drive's controller chip and manufacturer.
I was successful in making a Transcend JetFlash 8GB pen drive write protected.
First we will find out the chip details of the pen drive; for that we will use the ChipGenius utility.
Run the ChipGenius_v4_00_0026_b2.exe tool to identify the pen drive chip.
In my case the chip vendor is SMI and the part number is SM3255AB.
Once I have the details of the pen drive chip, I can look for a tool specific to that chip.
If the USB pen drive has a Silicon Motion Inc. (SMI) controller inside it, we can use the SMI utilities to alter some of the settings of the pen drive.
Luckily I found one link where the author had already compiled a list of SMI tools:
http://usb-fix.blogspot.in/p/smi.html
I searched this blog for tools for my pen drive chip SM3255AB and found SMI ReFixInfo 1.0.0.1, which allows me to make my pen drive write protected. I downloaded that file from the URL http://flashboot.ru/files/file/244/
The file I downloaded is SMI_ReFixInfo_1_0_0_1.7z. After downloading, I extracted it and ran the executable SMI_ReFixInfo.exe.
Once the pen drive is detected by the tool, we can change the various properties of the pen drive.
We are going to make the pen drive write protected, so select the Reset Write Protect check box and, from the W.P. list at the top, select Write Protect. To remove write protection later, select the Un-Write Protect option.
Once the required option is selected, click the Start button to save the settings.
If the change is saved successfully, we can see the PASS message.
Sometimes we may have to remove the Pen drive and connect again to see the new settings.
Now if we try to copy something to the pen drive, we will see an error message saying the disk is write protected.
If I try to delete a file from the pen drive... oops, there is no delete option. Even pressing the Delete key does nothing.