
Friday 14 December 2012

Ubuntu system time issue

Well, I would like to share an interesting experience with you. One of our Ubuntu VMs (running on Citrix Xen) had a problem: sometimes its root partition (which is an LVM volume) would become read-only. On restart it would stop with fsck checking the root partition and reporting an error (as reported by the Sysadmin team). On checking the boot log I found:

/dev/mapper/ET2012DevDB-root: Superblock last mount time (Thu Dec 13 10:44:16 2012,

                now = Fri Aug  3 03:04:03 2012) is in the future.

/dev/mapper/ET2012DevDB-root: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.

                (i.e., without -a or -p options)

/dev/xvda1: Superblock last write time (Thu Dec 13 14:55:07 2012,

                now = Fri Aug  3 03:04:03 2012) is in the future.

I found that this issue is related to the time settings of the VM. Normally it happens because of a time difference with the hardware (BIOS) clock.

As the OS boots up, it tries to mount the root partition and compares the superblock timestamps with the hardware clock; it finds that the superblock has a time in the future. In the log you can see that the hardware clock shows August 2012 while we are actually in December 2012, so the superblock's last write time appears to be in the future. That is why the disk checking program fsck is started.
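If you want to inspect these superblock timestamps yourself, dumpe2fs from e2fsprogs (installed by default on Ubuntu) can print them for comparison with the current system time. A quick sketch, using the volume from the log above:

# dumpe2fs -h /dev/mapper/ET2012DevDB-root | grep -i time
# date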

The date command was showing the correct date and time. Then I tried to check the hardware clock time:

# hwclock --show
hwclock: Cannot access the Hardware Clock via any known method.
hwclock: Use the --debug option to see the details of our search for an access method.

On searching a bit I found that the hwclock command does not work on guest VMs, as a guest VM cannot directly access the hardware clock.

So I talked to the Sysadmin team and they corrected the time of the VM (synced it with in.pool.ntp.org).
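For reference, a one-off manual sync from inside the VM would look something like this (a sketch, assuming the ntpdate package is available; note that the usual follow-up of hwclock --systohc is pointless here, since the guest cannot reach the hardware clock anyway):

# apt-get install ntpdate
# ntpdate in.pool.ntp.org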

After the reboot, the VM started normally this time. On checking boot.log I found:

/dev/mapper/ET2012DevDB-root: clean, 94294/425152 files, 791932/1698816 blocks

/dev/xvda1: Superblock last write time is in the future.

        (by less than a day, probably due to the hardware clock being incorrectly set).  FIXED.

 

There is still some difference, but this time it is less than a day, so the full file system check is skipped by the system.
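Incidentally, if a guest's clock genuinely cannot be trusted at boot, e2fsck can be told to skip these time-based superblock checks through /etc/e2fsck.conf. A sketch, to be used with care since it disables a real sanity check:

[options]
broken_system_clock = 1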

So by fine-tuning the time, this problem can be resolved.

Wednesday 5 December 2012

df command to show long Filesystem name in a single line

I was writing a disk space monitoring script for one of my Ubuntu servers. The script was as follows:
df -kh | grep -vE 'Filesystem|cdrom|tmp|non' | awk '{print $5" "$6}' | while read line
do
    # Extract the mount point and its usage percentage (without the % sign)
    par=$(echo $line | awk '{print $2}')
    per_of_use=$(echo $line | awk '{print $1}' | cut -f1 -d'%')
    if [ $per_of_use -ge 85 ]
    then
        sub="Low disk space in "`hostname`" server"
        echo "
Partition $par has low space.
$per_of_use % of it is in use.
Kindly free some space.
" | mail -s " $sub " pranabksharma@gmail.com
    fi
done


But when I ran the script, it failed. I checked and found that the df command output was the culprit.
The output of the df command looked like this:
# df -kh
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/SVNServer-root
                      4.5G  3.0G  1.3G  72% /
none                  995M  188K  995M   1% /dev
none                 1002M     0 1002M   0% /dev/shm
none                 1002M  356K 1002M   1% /var/run
none                 1002M  4.0K 1002M   1% /var/lock
/dev/sda1             228M   21M  196M  10% /boot
/dev/sdb1             7.9G  551M  7.0G   8% /home/data
//10.1.49.50/svn-et2012/
                      100G   49G   52G  49% /mnt/svnbackup


We can see that two lines of the df output are getting wrapped because of the long filesystem names, so the awk fields in my script no longer line up.
The workaround is to use the -P (POSIX) option of df, which prints each filesystem on a single line, and pipe the output through the column command to align the columns nicely:
# df -khP | column -t
Filesystem                  Size   Used  Avail  Use%  Mounted         on
/dev/mapper/SVNServer-root  4.5G   3.0G  1.3G   72%   /
none                        995M   188K  995M   1%    /dev
none                        1002M  0     1002M  0%    /dev/shm
none                        1002M  356K  1002M  1%    /var/run
none                        1002M  4.0K  1002M  1%    /var/lock
/dev/sda1                   228M   21M   196M   10%   /boot
/dev/sdb1                   7.9G   551M  7.0G   8%    /home/data
//10.1.49.50/svn-et2012/    100G   49G   52G    49%   /mnt/svnbackup


So my disk space monitoring script becomes:
df -khP | column -t | grep -vE 'Filesystem|cdrom|tmp|non' | awk '{print $5" "$6}' | while read line
do
    # Extract the mount point and its usage percentage (without the % sign)
    par=$(echo $line | awk '{print $2}')
    per_of_use=$(echo $line | awk '{print $1}' | cut -f1 -d'%')
    if [ $per_of_use -ge 85 ]
    then
        sub="Low disk space in "`hostname`" server"
        echo "
Partition $par has low space.
$per_of_use % of it is in use.
Kindly free some space.
" | mail -s " $sub " pranabksharma@gmail.com
    fi
done
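To run the check automatically, the script can be scheduled with cron; a sketch, with a hypothetical script path:

# Run the disk space check every 30 minutes (add via crontab -e)
*/30 * * * * /root/scripts/check_disk_space.sh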

Configuring Auto Scaling with Amazon EC2

One of the cool features of the Amazon cloud is that you can scale your infrastructure up and down as requirements change, and do it automatically based on predefined conditions. Auto Scaling will automatically start new instances as our load grows and terminate instances as the load comes down, all according to our configured settings. This makes Auto Scaling very effective and reduces administrative overhead.

In this example I will use Amazon CloudWatch to monitor the load on my instances, and Auto Scaling will act depending on the CPU load. To configure Auto Scaling we will need the Auto Scaling Command Line Tool and the Amazon CloudWatch Command Line Tool.

 

Setting up the pre-requisites:

We need to have Java installed to run these tools, and JAVA_HOME must be set properly.
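A quick sanity check on Windows can be done from the command prompt (a sketch; the Java path shown is just an example, and set only lasts for the current session):

C:\Users\pranabs>set JAVA_HOME=C:\Program Files\Java\jre7
C:\Users\pranabs>"%JAVA_HOME%\bin\java" -version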

Extract the Auto Scaling and CloudWatch command line tools and copy them to some directory.

e.g. I copied to D:\AWS\AutoScaling-1.0.61.1 and D:\AWS\CloudWatch-1.0.13.4

Set the following environment variables (for this example I am using a Windows 7 system, so the environment variable settings shown here are Windows specific):

AWS_AUTO_SCALING_HOME = D:\AWS\AutoScaling-1.0.61.1


Add D:\AWS\AutoScaling-1.0.61.1\bin to your PATH variable.

 

AWS_CLOUDWATCH_HOME=D:\AWS\CloudWatch-1.0.13.4

Also add D:\AWS\CloudWatch-1.0.13.4\bin to your PATH variable.
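If you prefer the command prompt over the System Properties dialog, setx (built into Windows 7) can set these variables persistently; a sketch using the directories from this example (setx takes effect in newly opened command prompts, not the current one):

C:\Users\pranabs>setx AWS_AUTO_SCALING_HOME D:\AWS\AutoScaling-1.0.61.1
C:\Users\pranabs>setx AWS_CLOUDWATCH_HOME D:\AWS\CloudWatch-1.0.13.4
C:\Users\pranabs>setx PATH "%PATH%;D:\AWS\AutoScaling-1.0.61.1\bin;D:\AWS\CloudWatch-1.0.13.4\bin"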

We need to provide the command line tools with AWS user credentials. There are two ways of providing AWS user credentials: using AWS keys or using X.509 certificates. For this example I will use my AWS keys.

To get the AWS keys, log in to your AWS account and go to Security Credentials -> Access Keys.


 

Create a text file, say aws-credential-file.txt. Copy the Access Key ID from your AWS account and paste it against AWSAccessKeyId, and copy the Secret Access Key and paste it against AWSSecretKey in the file:

AWSAccessKeyId=<AWS Access Key ID>
AWSSecretKey=<AWS Secret Access Key>

 

The best way to point the command line tools at this file is to create an environment variable AWS_CREDENTIAL_FILE and set it to the path of the file.

e.g. AWS_CREDENTIAL_FILE=D:\AWS\aws-credential-file.txt
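Once the credential file and the environment variables are in place, any harmless read-only command can confirm that the tools can reach AWS. For example, as-describe-launch-configs (used again later in this post) should report the existing launch configs, or that none exist, rather than an authentication error:

C:\Users\pranabs>as-describe-launch-configs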

 

By default, the Auto Scaling command line tool uses the Eastern United States region (us-east-1). If we need to configure our instances in a different region, we have to add another environment variable, AWS_AUTO_SCALING_URL, and set it to the endpoint for that region.

If I want to set my region as Singapore, then I have to use

AWS_AUTO_SCALING_URL = https://autoscaling.ap-southeast-1.amazonaws.com

The list of regions and endpoints is available at http://docs.amazonwebservices.com/general/latest/gr/rande.html

 

First I will create the AMI that I will use for Auto Scaling:

  • Select any public AMI that suits our requirement.
  • Start an instance with that AMI, deploy all the required applications, and do all the configuration.
  • Create an AMI from the running instance.

My AMI is created. Note the AMI ID; it will be used to start new instances.
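For reference, the same image creation can also be done from the command line with the EC2 API tools, if you have them installed; a sketch with a hypothetical instance ID and image name:

C:\Users\pranabs>ec2-create-image i-12345678 -n "MyAutoScalingAMI" -d "Base AMI for Auto Scaling"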


 

I will create a load balancer to distribute HTTP requests across the Auto Scaling instances.


 

We have to create a Key Pair to use with Auto Scaling. A Key Pair can be created from the AWS web console.


I am going to use the MKCL_New key for this example.

 

Next we need a security group. I will use the AutoScaling security group, which I created for this example.


 

I will also configure SNS, so that I get email alerts for Auto Scaling events.


Go to the SNS Dashboard and click on Create New Topic.


Create a new Subscription for the Topic we created.


I am going to use the Email option.


I will receive an email containing a link to confirm my subscription to the topic. Click on the Confirm subscription link to confirm.


 

Now our prerequisites are ready and it's time to get our hands dirty with Auto Scaling.

Setting up Auto Scaling

Step 1: Create a Launch Config

The launch config tells Auto Scaling what kind of instance will be launched, i.e. it specifies the template that Auto Scaling uses to launch Amazon EC2 instances.

C:\Users\pranabs>as-create-launch-config MyTestLC --image-id ami-b05417e2 --instance-type t1.micro --key MKCL_New --group AutoScaling

MyTestLC: The name of the launch config

--image-id : The AMI ID that we created for Auto Scaling

--instance-type: What type of AWS instance will be started with the AMI

--key: The keypair to be used to launch instance

--group: The name of the security group to be used

We can use the as-describe-launch-configs command to see the launch configs.

C:\Users\pranabs>as-describe-launch-configs


Step 2: Create an Auto Scaling Group

An Auto Scaling group is a collection of Amazon EC2 instances. Here we can specify the different Auto Scaling properties like the minimum number of instances, the maximum number of instances, the load balancer, etc.

C:\Users\pranabs>as-create-auto-scaling-group MyTestASG --launch-configuration MyTestLC --availability-zones ap-southeast-1a --min-size 1 --max-size 4 --load-balancers MyLB

MyTestASG: The name of the Auto Scaling Group.

--launch-configuration: Name of the launch configuration we want to use with this Auto Scaling group.

--availability-zones: Name of the EC2 availability zone where we want to put our instances

--min-size: Minimum number of instances that should be running

--max-size: Maximum number of instances we want to create for this group

We can use the as-describe-auto-scaling-groups command to see the Auto Scaling group that we created.

C:\Users\pranabs>as-describe-auto-scaling-groups


After we run the as-create-auto-scaling-group command, one instance starts automatically. This is because we told Auto Scaling to maintain a minimum of 1 instance, so an instance is launched immediately after running the command.
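Auto Scaling also keeps a log of what it did and why; the scaling activities for the group can be listed like this (a quick sketch):

C:\Users\pranabs>as-describe-scaling-activities MyTestASG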


Step 3: Turn on notifications for Auto Scaling (Optional)

Now I will turn on email alerts, so that I am notified whenever a new instance is launched and whenever an instance of the Auto Scaling group is terminated.

C:\Users\pranabs>as-put-notification-configuration MyTestASG --topic-arn arn:aws:sns:ap-southeast-1:800762572860:pranabs_email --notification-types autoscaling:EC2_INSTANCE_LAUNCH,autoscaling:EC2_INSTANCE_TERMINATE


MyTestASG: The name of the Auto Scaling group for which we want to enable notifications

--topic-arn: Amazon Resource Name (ARN) of the SNS topic that we created. We can get the ARN from the AWS console.

--notification-types: These are the events on which notifications are sent.


We can use the as-describe-auto-scaling-notification-types command to see the available types.


I will receive a test notification email after I turn on notifications for my Auto Scaling group.


 

Now we will create policies to scale the number of instances in this Auto Scaling group up and down depending on the CPU usage. If the CPU usage goes above 80%, Auto Scaling will create a new instance.

Step 4: Create Scale Up Policy

We will use the as-put-scaling-policy command to create the scale up policy.

C:\Users\pranabs>as-put-scaling-policy MyScaleUpPolicy --auto-scaling-group MyTestASG --adjustment=1 --type ChangeInCapacity  --cooldown 120
arn:aws:autoscaling:ap-southeast-1:800762572860:scalingPolicy:0c8f7361-a33f-4339-b0b3-8e94cebd3773:autoScalingGroupName/MyTestASG:policyName/MyScaleUpPolicy


MyScaleUpPolicy: The name of this scaling policy

--auto-scaling-group: Name of the Auto Scaling group in which this policy will be applicable

--adjustment: How many instances are to be created when the threshold (e.g. CPU usage above 80%) is reached.

--type: How the adjustment is interpreted; ChangeInCapacity adds or removes a fixed number of instances, while PercentChangeInCapacity can be specified instead to adjust by a percentage of the current capacity.

--cooldown: Number of seconds between a successful scaling activity and the next scaling activity

Copy the output of the command (the policy ARN), as we will need it in the CloudWatch monitoring command.

 

Step 5: Create Scale Down Policy

C:\Users\pranabs>as-put-scaling-policy MyScaleDownPolicy --auto-scaling-group MyTestASG "--adjustment=-1" --type ChangeInCapacity  --cooldown 120
arn:aws:autoscaling:ap-southeast-1:800762572860:scalingPolicy:ee397087-3c15-4f9c-baa2-dc1ad72ab9a0:autoScalingGroupName/MyTestASG:policyName/MyScaleDownPolicy


If you are running this command from a Windows system, enclose --adjustment=-1 in double quotes. Otherwise you may get the following error:

C:\Users\pranabs>as-put-scaling-policy MyScaleDownPolicy --auto-scaling-group MyTestASG --adjustment=-1 --type ChangeInCapacity  --cooldown 120
as-put-scaling-policy:  Malformed input-MalformedInput
Usage:
as-put-scaling-policy
        PolicyName  --type  value  --auto-scaling-group  value  --adjustment
       value [--cooldown  value ] [--min-adjustment-step  value ]
        [General Options]
For more information and a full list of options, run "as-put-scaling-policy --help"

Now we will bind these two policies with CloudWatch monitoring.

 

Step 6: Bind Scale Up policy with AWS CloudWatch monitoring

C:\Users\pranabs>mon-put-metric-alarm HighCPU  --comparison-operator  GreaterThanThreshold --evaluation-periods  1 --metric-name  CPUUtilization  --namespace  "AWS/EC2"  --period  120 --threshold  80 --alarm-actions arn:aws:autoscaling:ap-southeast-1:800762572860:scalingPolicy:0c8f7361-a33f-4339-b0b3-8e94cebd3773:autoScalingGroupName/MyTestASG:policyName/MyScaleUpPolicy --dimensions "AutoScalingGroupName=MyTestASG" --statistic Average --region ap-southeast-1
OK-Created Alarm


HighCPU: Name of the CloudWatch alarm

--comparison-operator: Specifies how the metric value is compared with the threshold

--evaluation-periods: Number of consecutive periods for which the metric value has to be compared to the threshold

--metric-name: Name of the metric on which this alarm will be raised.

--namespace: Namespace of the metric on which to alarm; the default is AWS/EC2

--period: Period, in seconds, of the metric on which to alarm

--threshold: This is the threshold with which the metric value will be compared.

--alarm-actions: Use the policy ARN from the Step 4 Scale Up policy output.

--dimensions: Dimensions of the metric on which to alarm.

--statistic: The statistic of the metric on which to alarm. Possible values are SampleCount, Average, Sum, Minimum, Maximum.

--region: Which web service region to use.

 

If we are not using the us-east-1 region, then we have to specify the --region option (or set the environment variable 'EC2_REGION'). Otherwise we will get the following error:

C:\Users\pranabs>mon-put-metric-alarm HighCPU  --comparison-operator  GreaterThanThreshold  --evaluation-periods  1 --metric-name  CPUUtilization  --namespace  "AWS/EC2"  --period  120 --threshold  80 --alarm-actions arn:aws:autoscaling:ap-southeast-1:800762572860:scalingPolicy:0c8f7361-a33f-4339-b0b3-8e94cebd3773:autoScalingGroupName/MyTestASG:policyName/MyScaleUpPolicy --dimensions "AutoScalingGroupName=MyTestASG" --statistic Average
mon-put-metric-alarm:  Malformed input-Invalid region ap-southeast-1 specified.
Only us-east-1 is
supported.
Usage:
mon-put-metric-alarm
        AlarmName  --comparison-operator  value  --evaluation-periods  value
        --metric-name  value  --namespace  value  --period  value  --statistic
       value  --threshold  value [--actions-enabled  value ] [--alarm-actions
       value[,value...] ] [--alarm-description  value ] [--dimensions
       "key1=value1,key2=value2..." ] [--insufficient-data-actions
       value[,value...] ] [--ok-actions  value[,value...] ] [--unit  value ]
        [General Options]
For more information and a full list of options, run "mon-put-metric-alarm --help"

 

Step 7: Bind Scale Down policy with AWS CloudWatch monitoring

C:\Users\pranabs>mon-put-metric-alarm LowCPU  --comparison-operator  LessThanThreshold --evaluation-periods  1 --metric-name  CPUUtilization --namespace "AWS/EC2"  --period  120 --statistic Average --threshold  40 --alarm-actions arn:aws:autoscaling:ap-southeast-1:800762572860:scalingPolicy:ee397087-3c15-4f9c-baa2-dc1ad72ab9a0:autoScalingGroupName/MyTestASG:policyName/MyScaleDownPolicy --region ap-southeast-1 --dimensions "AutoScalingGroupName=MyTestASG"
OK-Created Alarm


To check the alarms that we created, use the mon-describe-alarms command.


Voila… our Auto Scaling setup is ready. Now it is time to test the setup.

We can check the Auto Scaling group; right now it has only one instance.


We can see that this instance is connected to our load balancer.


We will connect to that instance with PuTTY and generate some CPU load. I will use the stress utility (http://weather.ou.edu/~apw/projects/stress/) to generate the load.

I used the apt-get command to install stress:

# apt-get install stress
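Then the CPU can be loaded. A sketch (the worker count and duration are just example values; this spins up two CPU-bound workers for 600 seconds):

# stress --cpu 2 --timeout 600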


CPU usage is rocketing…


If I check the instances in my AWS console, I can see two instances now. One new instance has been created by Auto Scaling.


Checking the Auto Scaling group from the command prompt, we can now see two instances in this group.


Also, in my mail client I got an SNS alert mail for this instance launch.


Now I increase the CPU load on the new instance as well.

I am getting mail alerts for Auto Scaling.


I can see a total of four instances launched (which is the maximum for my Auto Scaling group).


Now I will scale down: I kill all the stress processes so that the CPU load comes down. We can then see three instances being terminated as the CPU load drops.
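Stopping the load is a one-liner on each instance (pkill matches processes by name):

# pkill stress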


I am getting mail alerts as instances are getting terminated.


Our Auto Scaling testing is successfully over, so it is time for cleanup.

 

Cleanup Process

First I will remove all the instances of this Auto Scaling group.

C:\Users\pranabs>as-update-auto-scaling-group MyTestASG --min-size 0 --max-size 0
OK-Updated AutoScalingGroup


After this command we can see that all the instances are terminated.


Delete the Auto Scaling group

C:\Users\pranabs>as-delete-auto-scaling-group MyTestASG


On deleting the Auto Scaling group, I get an email alert.


Delete the launch config

C:\Users\pranabs>as-delete-launch-config MyTestLC


Delete the alarms

C:\Users\pranabs>mon-delete-alarms HighCPU LowCPU


Finally, delete the load balancer (I did this from the AWS web console).


That’s it from my side for today. Thanks for reading.