CopyDisable

Thursday 8 December 2011

NetScaler DataStream

The NetScaler DataStream is a new feature introduced in version 9.3. It provides an intelligent mechanism for request switching at the database layer, distributing database requests based on the SQL query being sent. The main benefits of this technology are:
  • Connection Multiplexing: Database requests are connection based. NetScaler takes multiple client-side connections and uses connection multiplexing so that multiple client-side requests can be made over the same server-side connection. That means we will have a much smaller number of connections on the database server side.
  • Load Balancing of SQL requests: We can use any of the available load balancing algorithms to distribute SQL queries among the available servers. The most commonly used algorithms are Least Connections and Least Response Time.
  • Content Switching: It’s an awesome option; using it we can inspect the SQL content and, depending on our specified rule, distribute requests amongst different servers.

In this post I am going to show a simple read-only database scale-out using NetScaler DataStream. I have a couple of replicated MySQL slaves and I want to load balance the select queries amongst these MySQL slaves. I will configure a JDBC connection pool in my Glassfish Cluster, and that pool will connect to the NetScaler load balancer instead of connecting directly to a database server. The setup is shown in the image below:
Drawing2

First I will configure all the MySQL slaves in Netscaler.
Add all the MySQL slave servers. Go to Load Balancing –> Servers
image
Enter the name for the server and IP address for that server.
image

After adding all the MySQL servers, I am going to add the MySQL services running in these servers. Go to Load Balancing –> Services
image

Enter the name for the service, select the MySQL server we added previously, select the protocol MySQL and the port on which MySQL is running on this server; in my case it is the default MySQL port 3306.
image
In the same way, create services for all the MySQL servers that we want to use.

Now I will create the virtual server for these MySQL services. Go to Load Balancing –> Virtual Servers
image

Enter the name for this virtual server, enter the IP address, select the protocol MySQL and enter the port; here I have entered the default port for MySQL. Select the MySQL slave services that we want to add to this virtual server. In the Method and Persistence tab, select the load balancing method as per your requirement.
image
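For reference, the same servers, services and virtual server can also be created from the NetScaler command line. Below is a rough sketch with example names and IP addresses (the addresses are placeholders, not the ones from my setup):

> add server mysql-slave1 192.168.10.11
> add server mysql-slave2 192.168.10.12
> add service svc_mysql_slave1 mysql-slave1 MYSQL 3306
> add service svc_mysql_slave2 mysql-slave2 MYSQL 3306
> add lb vserver vs_mysql_read MYSQL 192.168.10.100 3306 -lbMethod LEASTCONNECTION
> bind lb vserver vs_mysql_read svc_mysql_slave1
> bind lb vserver vs_mysql_read svc_mysql_slave2

The service type MYSQL is what makes these DataStream services rather than plain TCP services.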

So we are ready with our MySQL virtual server. We could start using it right now, but the thing is that at this point all my MySQL services are using the default TCP monitor. The good thing with NetScaler is that we can create monitors specifically for MySQL services. For DataStream, NetScaler has the monitor type MYSQL-ECV. Using this type of monitor we can send an SQL query, parse the response, and then decide the state of the service. I will create a simple MYSQL-ECV type monitor which will check the replication status on a MySQL slave server. If there is any error with the slave, it will mark the service as DOWN.

First I have to add a database user. My MYSQL-ECV monitor will use this database user’s username and password to connect to the database server. Go to System –> Database Users
image
Enter the database username and password.
image

Then I will create the MySQL-ECV monitor. Go to Load Balancing –> Monitors
image

Enter the name of the monitor, select the type as MYSQL-ECV. Also enter the other parameters as per the requirement.
image

Now the main part of the monitor is the Special Parameter section. Go to the Special Parameters tab
image
Enter the database name (actually, to check the slave status we do not have to select any database, but this field is compulsory here, so I have entered mysql).
Enter the query that will be sent to the MySQL server. To check the MySQL slave status, enter show slave status
Enter the username that this monitor will use to connect to the database server (this user must have privileges to run the query we entered in the Query field). I have entered the username pranab that I created in the previous step.
Now we have to write the rule based on which we will decide whether the service is UP or DOWN. I wrote the rule as:
!mysql.RES.ROW(0).IS_NULL_ELEM(32)
mysql.RES –> It operates on MySQL response
ROW(0) –> First row sent by MySQL (row index starts from 0)
IS_NULL_ELEM(32) –> Checks whether the 33rd column (column index starts from 0) of the 1st row is NULL, and depending on that returns TRUE or FALSE.
The reason I selected the 1st row’s 33rd column is that I am inspecting whether the Seconds_Behind_Master column value is NULL (if replication stops this becomes NULL, otherwise it gives the number of seconds the slave is behind the master).
Untitled-2
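You can check this column manually on a slave before relying on the monitor; for example, from a Windows box with the MySQL client installed (the host name below is just a placeholder):

cmd> mysql -h mysql-slave1 -u pranab -p -e "show slave status\G" | findstr Seconds_Behind_Master

If replication is healthy this prints a number; if the slave has stopped it prints NULL, which is exactly what the monitor rule tests for.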
The rule I wrote will return true if Seconds_Behind_Master is not NULL hence it will mark the service status as UP. If Seconds_Behind_Master is NULL then this rule will return false and the service will be marked as DOWN.
I tried to check the value of Seconds_Behind_Master using rules like mysql.RES.ROW(0).NUM_ELEM(32) == 0 or mysql.RES.ROW(0).NUM_ELEM(32) < 10, but that didn’t work for me.
Create this monitor and add it to all the MySQL slave services. That’s all I have to do at the NetScaler end. The final thing I have to do is change the connection pool settings in my Glassfish Cluster and point it to the IP of the MySQL virtual server we created in NetScaler.
Note: The MySQL monitor that we created in NetScaler will use the database user’s credentials that we specified while creating the monitor. This database user has to be created in NetScaler before creating the monitor. However, the client queries that NetScaler receives will be sent to the database server using the username that was sent by the client.
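If you prefer the command line on the Glassfish side as well, pointing a JDBC connection pool at the virtual server IP can be sketched with asadmin roughly as below (pool name, resource name, VIP, credentials and database name are all just examples):

cmd> asadmin create-jdbc-connection-pool --datasourceclassname com.mysql.jdbc.jdbc2.optional.MysqlDataSource --restype javax.sql.DataSource --property serverName=192.168.10.100:port=3306:user=appuser:password=secret:databaseName=mydb MySQLReadPool
cmd> asadmin create-jdbc-resource --connectionpoolid MySQLReadPool jdbc/mysqlread

In a cluster, create the JDBC resource with the --target option pointing at the cluster so that the resource is available on all instances.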

Wednesday 23 November 2011

Using head command in Windows

Recently I had to read the first few lines of a very large SQL dump file (15GB in size) on a Windows server. It’s quite easy on a Linux box, but I wasn’t sure how to do it on a Windows system. After some searching I found CoreUtils. It’s a collection of small handy utilities for basic file, shell and text manipulation, which originally come from the GNU operating system. You can download CoreUtils for Windows from http://gnuwin32.sourceforge.net/packages/coreutils.htm. I downloaded the zip archive coreutils-5.3.0-bin.zip on my Windows 2003 server machine and extracted it in D:\coreutils-5.3.0-bin. Now let’s see if I can run the head command
image

Oops, some error: I need to download the dependency files also.
image

Now I have downloaded the dependency archive file coreutils-5.3.0-dep.zip. Extract the archive and copy the libiconv2.dll and libintl3.dll files into the CoreUtils bin folder. Now let’s try again
image

Yeeppiiiii….. this time it worked. Now I can run the head command in Windows too. This is very helpful for reading large log files on a Windows server with a very useful command like head (in Windows the more command can serve as an alternative to head). But I have always been comfortable with the head command, so this was a big relief for me :) .
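For example, assuming the executables landed in D:\coreutils-5.3.0-bin\bin, reading the first 20 lines of a dump (the file name here is just an example) looks like this:

cmd> D:\coreutils-5.3.0-bin\bin\head.exe -n 20 D:\backup\dump.sql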

Tuesday 1 November 2011

Glassfish 3.1 Cluster in Windows environment

While planning a Glassfish 3.1 cluster deployment, one of the major differences I found from an administrator’s perspective between cluster support in Glassfish v3.1 and Glassfish v2 is that 3.1 does not support the node agent. The node agent is the process that controls the life cycle of server instances; normally one node agent runs per box. The 3.1 release has SSH capability, and that enables centralized management of a cluster. The functionality provided by a node agent can now be performed over SSH. Personally I find this more exciting than using a node agent, as SSH is a very well established standard and it eliminates the dependency on a separate process (the node agent) on each node (this means I do not need to memorize the node agent related commands :) ).
As we are using Windows on our application servers, the first thing that came to my mind was how to deploy the Glassfish 3.1 cluster, as it needs SSH to communicate (it’s possible to deploy the cluster without using SSH as well). Then my old rescuer Cygwin (http://www.cygwin.com/) came to mind, and yes, we can deploy a Glassfish cluster on Windows using Cygwin. In this blog post I will show you how to create a simple two node cluster with two instances on two Windows 2003 servers (named Glassfish1 and Glassfish2) with the help of Cygwin.
The setup will be as shown in the diagram:
clip_image002
Install JDK:
First I will install JDK (minimum supported is JDK 1.6.0_22) in all the servers.
clip_image004
Install Cygwin:
After that I will install Cygwin, download it from http://www.cygwin.com/ and run the setup.
clip_image006
The setup will ask to download the packages; the first time I will download them from the Internet, and later I can reuse the downloaded files in other installations.
clip_image008
Select the root directory for cygwin installation.
clip_image010
Select a directory where the downloaded files will be stored.
clip_image012
Select the Internet connection type.
clip_image014
Select the site from where you will download the files, it’s wise to select a site close to your location.
clip_image016
clip_image018
Select the packages you want to install; here, make sure that you have selected the SSH-related packages.
clip_image020
clip_image022
clip_image024
Cygwin downloads the packages in the directory we have selected
clip_image026
clip_image028
After installation is completed, we have to set up the SSH server.
Enter the command ssh-host-config at the Cygwin command prompt and follow the on-screen instructions.
clip_image030
clip_image032
After the SSH setup is done, start the SSH daemon using the command cygrunsrv -S sshd
clip_image034
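At this point it’s worth checking that the SSH daemon actually accepts logins; for example, from the Cygwin prompt on Glassfish1, log in to Glassfish2 (the Windows account name below is just an example):

$ ssh Administrator@Glassfish2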
Once Cygwin is installed in each SSH node, we are ready to create our cluster.
Install Glassfish and DAS:
First I will install the Domain Admin Server (DAS) on my first server, Glassfish1. Run the Glassfish 3.1 setup.
clip_image036
clip_image038
Select Custom Installation
clip_image040
clip_image042
clip_image044
clip_image046
clip_image048
clip_image050
clip_image052
clip_image054
clip_image056
clip_image058
clip_image060
This installs my DAS server and creates the domain.
Optionally, we can turn on remote administration of the domain using the command
asadmin enable-secure-admin
clip_image062
This command enables remote administration and encrypts all admin traffic. If we are not using SSH, it enables us to set up a remote instance on a host different from the one where the DAS is running.
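Note that secure admin takes effect only after the DAS is restarted, which can also be done from the command line:

cmd> asadmin restart-domain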
Setup SSH authentication:
When an SSH client connects to an SSH service, it needs to authenticate the connecting user. So before starting our remote access, we have to set up SSH authentication. Glassfish 3.1 supports three types of SSH authentication:
1. Username/Password
2. Public key without encryption
3. Public key with passphrase-protected encryption
We are going to use Public Key Authentication without Encryption and for that Glassfish provides a command setup-ssh. The setup-ssh subcommand generates a key pair and distributes the public key file to specified hosts.
I will run the setup-ssh command from my DAS host Glassfish1 and it will setup SSH authentication with the remote host Glassfish2.
cmd> asadmin setup-ssh Glassfish2
clip_image064
Install Glassfish on remote node:
First of all we have to install Glassfish in the server Glassfish2.
There are two ways we can do the installation:
1. Install manually by running Glassfish setup on the server Glassfish2
2. Install remotely from the DAS server using the install-node command. This command creates an image of the DAS’s Glassfish installation and then installs that image on the Glassfish2 server.
I will use the second method in my example. So run the command in DAS
cmd> asadmin install-node --installdir D:/Glassfish3 Glassfish2
This command will install Glassfish in d:\Glassfish3 directory in Glassfish2 server.
clip_image066
Create Nodes:
Now I will create the nodes of the cluster
There are two types of nodes
SSH nodes: An SSH node communicates over secure shell (SSH). If we wish to administer the GlassFish instances centrally, the instances must reside on SSH nodes.
CONFIG nodes: This type of node does not support remote communication. If we want to administer our instances locally, then our instances can reside on CONFIG nodes. The DAS comes with one CONFIG node named localhost-<domain name>; in our case it is localhost-domain1. This node is already created and we do not have to create it.
As we already have one CONFIG node, I am going to create an SSH node on the second server, Glassfish2.
Run the following command in DAS
cmd> asadmin create-node-ssh --nodehost Glassfish2 --installdir d:/Glassfish3 node2
This command will create one SSH node named node2 on the Glassfish2 server. Here we have to specify the host on which the node will be created, the Glassfish installation directory, and the name of the node to be created.
clip_image068
Both our nodes are created; we can see them in the admin console, which shows the two nodes localhost-domain1 and node2.
clip_image070
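The same information is available from the DAS command line:

cmd> asadmin list-nodes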

Create the cluster:
So, at this point we are ready to create the cluster. In the admin console, go to Clusters and click on the New button. Enter the name of the cluster and click OK to create the new cluster. Here we are creating a cluster named solar.
clip_image072
Create the instances of the cluster:
Now I will add the instances of the solar cluster. Go to Clusters -> solar and open the Instances tab. Click on New to add an instance.
clip_image074
Enter the name of the instance and the node where this instance will reside. Here I will create two instances: one will reside on the local node localhost-domain1 and the second on node2. The first instance I will create is instance2, which will reside on node2.
clip_image076
Our instance is created and currently it is not running.
clip_image078
Creating the second instance
clip_image080
Once both instances are created, start them; our cluster is ready and we can deploy our web applications to it.
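For completeness, the cluster and its instances could also have been created entirely from the DAS command line; a sketch using the names from this post (the name instance1 for the local instance is just my choice here):

cmd> asadmin create-cluster solar
cmd> asadmin create-instance --cluster solar --node node2 instance2
cmd> asadmin create-instance --cluster solar --node localhost-domain1 instance1
cmd> asadmin start-cluster solar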
Configurations that are to be done at the application side:
Change in web.xml file
The <distributable/> element is to be added to the web.xml file. This tag signifies that the web application is suitable for running in a distributed environment, i.e. in a cluster.
Addition of new elements in glassfish-web.xml:
The following elements are added to the glassfish-web.xml file:
<session-config>
  <session-manager persistence-type="replicated">
    <manager-properties>
      <property name="relaxCacheVersionSemantics" value="true"/>
      <property name="persistenceFrequency" value="web-method"/>
    </manager-properties>
    <store-properties>
      <property name="persistenceScope" value="session"/>
    </store-properties>
  </session-manager>
  <session-properties/>
  <cookie-properties/>
</session-config>
· <session-manager persistence-type="replicated"> is added to specify the use of in memory replication in the cluster.
· <property name="persistenceFrequency" value="web-method"/> specifies that after each request the session gets replicated.
· <property name="persistenceScope" value="session"/> specifies that on each request the whole HTTPSession gets replicated in all the instances of the cluster.
· If multiple client threads in the application concurrently access the same session ID, then add <property name="relaxCacheVersionSemantics" value="true"/> in the glassfish-web.xml file. This enables the web container to return, for each requesting thread, whatever version of the session is in the active cache, regardless of the version number. Otherwise you may experience session loss even without any instance failure.
Application deployment in cluster:
Go to the cluster and open the Applications tab. Click on the Deploy button.
clip_image082
Make sure that the Availability checkbox is enabled.
clip_image084
Also verify that the cluster name is present in the Selected Targets list.
clip_image086
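The deployment can also be done from the command line; a minimal sketch (the WAR file name is just an example):

cmd> asadmin deploy --target solar --availabilityenabled=true myapp.war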
Configuring the Glassfish cluster in Citrix Netscaler load balancer:
Go to Load Balancing -> Servers
clip_image088
Add both the physical servers (Glassfish1 and Glassfish2) to the load balancer. In our example the Glassfish1 server is named Solarex2 and the Glassfish2 server is named SolarexCluster2.
clip_image089
clip_image091
After adding the servers we have to add the services running on these servers. The HTTP port for both our instances is 28080.
Go to Load Balancing -> Services
clip_image093
Adding service for the first instance:
clip_image095
Adding service for second instance for the cluster
clip_image097
Once the services for both instances are created, I will create the virtual server for my Glassfish cluster.
Go to Load Balancing -> Virtual Servers
clip_image099
I am going to name this virtual server SolarexEFCluster and select both the services we have just created.
clip_image101
In the Method and Persistence tab, select the load balancing method as per your requirement; in my case I have selected the Least Connection method.
clip_image103
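This configuration can also be scripted from the NetScaler CLI; a rough sketch with placeholder service names and IP addresses (the VIP, its port 80 and the server addresses below are assumptions, not values from my setup):

> add server Solarex2 192.168.20.21
> add server SolarexCluster2 192.168.20.22
> add service svc_solar_i1 Solarex2 HTTP 28080
> add service svc_solar_i2 SolarexCluster2 HTTP 28080
> add lb vserver SolarexEFCluster HTTP 192.168.20.200 80 -lbMethod LEASTCONNECTION
> bind lb vserver SolarexEFCluster svc_solar_i1
> bind lb vserver SolarexEFCluster svc_solar_i2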
Now the final task is to forward client requests from our firewall to the load balancer virtual server we have created.
So that’s all we need to do, and our Glassfish 3.1 cluster is ready for use :).