Friday, 23 October 2015

lsyncd to sync two directories

I had to sync two directories from a production server to a DR server. The first choice was to use the rsync command alone, but rsync has to be run manually or scheduled using cron. There is, however, a tool called lsyncd that I have been using for a long time to get away from manual or scheduled syncs. Lsyncd watches a local directory tree through the kernel's event monitor interface (inotify) and performs actions when files change. It waits a few seconds, aggregates the change events, and then runs one or more processes (rsync, by default) to synchronize the changes.
Note: For this document, I am using Ubuntu 14.04
Installing lsyncd is very easy:
apt-get install lsyncd
For security reasons, I want to run lsyncd as a non-root user and sync the folders as a normal user (say admino). I am going to sync the folder /app/node/web from my production server (source) to my DR server (destination).
Log in to the production server as the admino user and generate SSH keys without a passphrase (I am going to use passwordless SSH authentication):
$ ssh-keygen -t rsa
Copy the public key to the DR server:
$ ssh-copy-id
We are ready with passwordless SSH authentication; now I will configure lsyncd.
As I want to run the lsyncd daemon as the admino user (the user I will sync as), I am going to change the /etc/init.d/lsyncd script.
First, I changed the PID file location.
Next, I specified that the lsyncd daemon should be started as the admino user; for that I added the start-stop-daemon option --chuid:
start-stop-daemon --chuid admino:admino --start --quiet --pidfile $PIDFILE --exec $DAEMON \
--test > /dev/null \
|| return 1
start-stop-daemon --chuid admino:admino --start --quiet --pidfile $PIDFILE \
--nicelevel $NICELEVEL --exec $DAEMON -- \
|| return 2
Create a log directory for lsyncd:
mkdir /var/log/lsyncd
Create the log files:
touch /var/log/lsyncd/lsyncd.{log,status}
Now I will create the lsyncd configuration file. First, create the config directory:
mkdir /etc/lsyncd

Note: In Ubuntu, the default location for the config file is /etc/lsyncd/lsyncd.conf.lua; it is specified in the /etc/init.d/lsyncd script.
Create the config file /etc/lsyncd/lsyncd.conf.lua with contents like:
settings {
    logfile    = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd.status"
}

sync {
    default.rsyncssh,
    source    = "/app/node/web",
    host      = "",
    targetdir = "/app/node/web"
}
That's it, we are done. For this example I have shown the simplest lsyncd configuration; there are lots of options available with lsyncd, so visit the lsyncd home page for details.

Monday, 22 June 2015

Running node.js application with Passenger and Nginx

We can use Phusion Passenger to deploy our node.js application in production; it takes away many of the complications that come with deploying and running a Node.js application. Using Passenger is very easy, and some of the benefits we get from it are:
1) We have Nginx to serve the static content, so our app can concentrate on serving its main purpose.
2) Nginx also protects our application from some kinds of attacks.
3) Automatic restart of our application on system reboots.
4) We can easily run multiple Node.js applications on a single server.
5) Phusion Passenger can spawn more Node.js processes depending on the load, etc.
6) It automatically starts a new process when one fails.
First we are going to install node.js
Note: For this demo I used Ubuntu 14.04


Installing node.js

# curl -sL | sudo bash -
Now run apt-get to install node.js
# apt-get install nodejs



Installing Nginx

# apt-get install nginx



Installing Passenger

Install the Passenger PGP key:
# apt-key adv --keyserver --recv-keys 561F9B9CAC40B2F7
We need to add HTTPS support for apt, as the Passenger apt repository is hosted on an HTTPS server:
# apt-get install apt-transport-https ca-certificates
Create a file /etc/apt/sources.list.d/passenger.list and insert the following line:
# Ubuntu 14.04
deb trusty main

Note: the above line is specifically for Ubuntu 14.04; if you are not using Ubuntu 14.04, please refer to the Passenger documentation to get the URL for your distribution.
After adding the new apt source, run apt-get update followed by apt-get install
# apt-get update
# apt-get install nginx-extras passenger
Once passenger installation is done, edit /etc/nginx/nginx.conf and uncomment the following lines: 

passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
passenger_ruby /usr/bin/passenger_free_ruby;
You can verify whether the passenger root is correct with the command:
# /usr/bin/passenger-config --root
The installation part is over; now I am going to deploy one application to test the new setup.
For this demo I wrote a small app test.js

var http = require("http");

function onRequest(request, response){
    var body = '<html>'+
               '<meta http-equiv="Content-Type" content="text/html; '+
               'charset=UTF-8" />'+
               'Hi I am SuperMan'+
               '<img src=monkey1.gif>'+
               '</html>';
    response.writeHead(200, {"Content-Type" : "text/html"});
    response.end(body);
}

http.createServer(onRequest).listen(3000);
console.log("Server has started.");
As suggested in the Passenger documentation, I have created the application directory structure:

/apps
  +-- test.js
  +-- public/
  +-- tmp/

/apps is the root directory of my application.

The test.js file is the entry point of my application; Passenger will load test.js to start the application.
public : This directory contains static files; files placed in this folder will be served directly by Nginx, and the request will not be forwarded to the application. I copied one image file, monkey1.gif, into this directory.
tmp : We can create a restart.txt file in this directory to restart the application on the next request. This directory can also be used by the application itself.
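The layout above takes only a couple of commands to create. Here is a sketch using /tmp/apps as a stand-in for the real /apps root (creating /apps itself would need root):

```shell
# /tmp/apps stands in for /apps here; adjust the path for a real deployment
APP_ROOT=/tmp/apps
mkdir -p "$APP_ROOT/public" "$APP_ROOT/tmp"
touch "$APP_ROOT/test.js"   # the entry point Passenger will load
ls "$APP_ROOT"
```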
In this example I am going to run my Node.js application as my default site, so I am editing the file /etc/nginx/sites-available/default; my file looks like:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name _;
    index index.html index.htm index.nginx-debian.html;

    # This is where my static files will go
    root /apps/public;

    passenger_app_root /apps;
    passenger_enabled on;
    passenger_app_type node;
    passenger_startup_file test.js;
}
All set; restart Nginx and we are ready:
# service nginx restart

My superb website is up and running :) :) :)

Friday, 19 June 2015

Using PM2 process manager for node.js

PM2 (Process Manager 2) is a process manager for Node.js applications. Some of its features are:
· PM2 allows us to run our application as multiple processes, without having to modify our application.
· It has built-in load balancing for running a node.js application as a cluster.
· It keeps the application up by restarting it if it crashes.
· It can start the node application as a service when the server reboots.


Note: For this post I used Ubuntu 14.04


Installing node.js

Run the below command as root user or use sudo
# curl -sL | sudo bash -
Now run apt-get to install node.js
# apt-get install nodejs

Update 22/07/2015:

Suppose you want to install a different version of nodejs (at the time of writing this post, the default setup was 0.10). For example, if I want 0.12, go to the NodeJS GitHub link:

There you will find setup scripts for different versions of NodeJS. For my requirement of version 0.12, the script is setup_0.12; get the link of the script and run the curl command:

curl -sL | sudo bash -

After that run apt-get install nodejs command.


To test the cluster I created a small application, test.js, which shows the process ID of the process that served the request.
For this application I need the process module.
My test.js file:
var http = require("http");
var process = require('process');

function onRequest(request, response){
    response.writeHead(200, {"Content-Type" : "text/html"});
    response.write('Request served by: ' + process.pid);
    response.end();
}

http.createServer(onRequest).listen(8080);  // listen port; pick any free port
console.log("Server has started.");
The sample output of this app:
Now I am going to install pm2 and run my test.js application using pm2.

Installing PM2:

$ sudo npm install pm2 -g


Running our application with pm2

Now we will start our test application using pm2, running multiple processes to form our cluster.
$ pm2 start test.js --name "testapp" -i 0
--name : Specifies a name for our application; this name can be used in other pm2 commands.
-i : Starts the application in cluster mode with the given number of processes. A value of 0 tells pm2 to start as many worker processes as there are CPU cores.
So in our case we have a 2-core CPU, so pm2 started 2 processes for our application.
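Since -i 0 derives the worker count from the machine's core count, you can check that number yourself; nproc is part of GNU coreutils:

```shell
# pm2 -i 0 starts one worker per processing unit reported here
nproc
```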

Monitoring our application:

We can use the following pm2 commands to monitor our application:
Get list of applications monitored by pm2:
$ pm2 list
To get more details about an application:
$ pm2 show testapp
This returns details of all the processes running for that application. To get more details about a single process, we can use the pm2 id of that process:
$ pm2 show 1
We can get the same output using the pm2 desc command.
Monitoring the processes with CPU and RAM usage:
$ pm2 monit
The blue bar shows CPU usage and the red one shows RAM usage for a particular process.

Checking logs of all the monitored processes:

$ pm2 logs
The pm2 logs command shows tail-style output from all the log files. Log files for each individual process are generated in the PM2_PROCESS_USER_HOME_DIRECTORY/.pm2/logs directory; in my case, /home/pranabs/.pm2/logs.
To clean up the log files:
$ pm2 flush
To check if the cluster is working fine, I open my testapp in the browser, which displays the PID of the worker process that served the request.



If I keep refreshing the page, I see different PIDs, and I can find these PIDs in the pm2 list output. This means that user requests are being served by different processes, which indicates that load balancing is working in our cluster.


Restarting application:

pm2 reload <APP_NAME|all>
To restart all the monitored applications:
$ pm2 reload all
To restart a particular application:
$ pm2 reload testapp


Stopping application:

We can stop a particular process/application or all applications:
$ pm2 stop testapp



To remove application from pm2:

We can delete a particular process/application or all applications from pm2:
$ pm2 delete testapp



Checking auto start of failed process:

pm2 automatically restarts a failed process. To verify this, we will forcefully kill one of the processes and check whether pm2 starts another process to replace the failed one. Here I killed process 5910, and we can see in the screenshot below that pm2 started another process with ID 5939.




Starting applications automatically at server boot time

We can run the command pm2 startup to create an init script and deploy it to run at system startup.
To auto detect the platform just run pm2 startup:
$ pm2 startup
We can also specify the platform with the startup command:
$ pm2 startup [platform]
The available options for platform are ubuntu, centos, redhat, gentoo, systemd, darwin and amazon.
As I am using Ubuntu, I am specifying ubuntu:
$ pm2 startup ubuntu
This command outputs another command to run as root, which actually deploys the init script to run at system startup.
sudo env PATH=$PATH:/usr/bin pm2 startup ubuntu -u pranabs
If we want to run the application as root, then we can directly run the pm2 startup command as root and it will deploy the necessary startup scripts.
Next, run the pm2 save command so that pm2 saves the currently monitored processes. The saved processes will be started automatically by pm2 when the system boots.
$ pm2 save
PM2 is a very convenient tool for running node.js applications, and clustering is a very powerful feature. So try it out and enjoy pm2 :) .

Wednesday, 7 January 2015

Linux Out of Memory Process Killer

Linux has a killer….. Oooopppssss…. don't be afraid…. it's just the "Out of Memory" killer, a facility that kills running processes when the system runs out of free memory. When a Linux system runs out of memory, the kernel starts killing processes in order to stay operational. The Linux kernel uses a mechanism called the Out Of Memory Killer (OOM Killer) to recover memory on the system and overcome memory exhaustion.

On one of my servers running a LAMP stack, the MySQL server was sometimes getting terminated abruptly; MySQL was actually being killed by the OOM killer. The MySQL memory pools were optimized, but the server physically had low memory and there was no possibility of increasing it. I could afford other processes (like Apache) getting killed, but I had to prevent the MySQL database server from being killed.

Normally the Linux OOM killer treats all processes equally, but there is a way to control its behavior. Each Linux process has an OOM score assigned to it; whenever the system is about to run out of memory, the OOM killer terminates the process with the highest score.

One way is to adjust the value in the file /proc/[process_id]/oom_adj (available since Linux kernel 2.6.11). The valid range is -16 (very unlikely to be killed by the OOM killer) to +15 (very likely to be killed by the OOM killer), and the special value -17 exempts a process entirely from the OOM killer.

So we can do as root user:
# echo -17 > /proc/MySQL_Process_ID/oom_adj
to keep MySQL process out of reach of the OOM killer.

Since Linux 2.6.36, the file /proc/[process_id]/oom_adj is deprecated in favor of the file /proc/[process_id]/oom_score_adj.
oom_score_adj accepts integer values from -999 (very unlikely to be killed by the OOM killer) up to +1000 (very likely to be killed by the OOM killer), and the value -1000 exempts a process entirely from the OOM killer.

So in this case we have to set:
# echo -1000 > /proc/MySQL_Process_ID/oom_score_adj
to prevent MySQL getting killed.
But the above two techniques are temporary: whenever we restart MySQL or the server, the process ID of the MySQL process changes and we have to run the above command again.
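To avoid hunting for the new PID by hand each time, the lookup can be scripted with pidof. A sketch follows; the mysqld line needs root, so the runnable part demonstrates on the current shell instead (raising your own score requires no privileges, only lowering it does):

```shell
# For MySQL, as root (pidof -s prints a single PID):
#   echo -1000 > /proc/$(pidof -s mysqld)/oom_score_adj

# Unprivileged demonstration on the current shell: raising the score is allowed
echo 500 > /proc/self/oom_score_adj
cat /proc/self/oom_score_adj
```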
To permanently exempt MySQL from being killed, we can edit the MySQL service's upstart script /etc/init/mysql.conf and add the parameter
oom score
The value of this parameter can be an integer ranging from -999 (very unlikely to be killed by the OOM killer) to 1000 (very likely to be killed by the OOM killer). It may also have the special value never, which instructs the OOM killer to ignore this process entirely.

So for my MySQL database server running on Ubuntu 12.04, I edited the upstart script /etc/init/mysql.conf and added the line:

oom score never

After that, restart the MySQL service and it's done :) .

Let's check the values that the /proc/MySQL_Process_ID/oom_score_adj and /proc/MySQL_Process_ID/oom_adj files have after setting oom score never.

Yeppp… it is as expected :) :) :)