Wednesday, December 26, 2012

how to check rpm license without installing?

Commands to find license information for an installed rpm
   1) Find out the installed rpm name by running
       rpm -qa|grep -i <name-of-rpm>
       Ex: rpm -qa|grep -i post
             O/P: postfix-2.7.2-12.3.x86_64
   2) Query for additional information on that package with the command below
        rpm -qi <name of the installed rpm>
       It prints lots of information, like Version, Description, License, etc. We can grep for the piece we need, e.g.
        Ex: rpm -qi postfix-2.7.2-12.3.x86_64|grep "License:"
           O/P: Size        : 2685020                          License: IBM Public License ..

       As you can see, the above command printed some extra information along with the license. Is there a better way to do this? Yes.
        rpm -q --queryformat '%{name},%{version},%{license}\n' postfix-2.7.2-12.3.x86_64
       O/P: postfix,2.7.2,IBM Public License ..
       To fetch only the license
       rpm -q --queryformat '%{license}\n' postfix-2.7.2-12.3.x86_64
       O/P:      IBM Public License ..

Command to find out rpm license information without installing it
Now I assume you have the rpm file of interest (Ex: postfix-2.5.13-0.17.4.x86_64.rpm). To read the license information from it, run the command below
rpm -qp --queryformat '%{name},%{version},%{license}\n' <rpm-file-name>
Ex:  rpm -qp --queryformat '%{name},%{version},%{license}\n' postfix-2.5.13-0.17.4.x86_64.rpm
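The same --queryformat trick also works across the whole package database. A small sketch (the exact output will of course vary by system):

```shell
# List every installed package with its license, sorted by name
rpm -qa --queryformat '%{name}-%{version}: %{license}\n' | sort
```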

Wednesday, December 12, 2012

kiwi build failure: Can't provide file from repository 'update'

Our Kiwi builds were failing inconsistently with the below error
Failed to provide Package users-1-1.x86_64. Do you want to retry retrieval? 
Can't provide file 
'./rpm/x86_64/users-1-1.x86_64.rpm' from repository 'update'

The message clearly pointed to an rpm download problem from our local SuSE repository, yet we were able to download the same rpm manually without any issue. We even suspected something fishy in the zypper cache.

Finally, from the post
https://groups.google.com/group/kiwi-images/tree/browse_frm/month/2012-09/03256b31dffdacfc?rnum=151&_done=/group/kiwi-images/browse_frm/month/2012-09?&pli=1
we got to know that we need to clean the following folders on our build machine
rm -rf /var/cache/kiwi/packages/ 
rm -rf /var/cache/kiwi/zypper 

Now our builds are consistently passing.

Wednesday, November 7, 2012

A quick tutorial on building an appliance using Kiwi in a restricted environment

In this blog, I'll share step-by-step information on building an appliance using the openSUSE Kiwi tool in a restricted environment. Here "restricted environment" means a network where access to the internet is not available.

Step 1: Setting up build machine
     a ) Download OpenSuse iso file (like openSUSE-12.2-DVD-x86_64.iso)
     b) Install it either on a physical machine or virtual machine. Preferably allocate around 50 GB of disk space.
     c) Configure the network (you can use YaST). Also change the firewall configuration to open the ssh port, and set 'PasswordAuthentication yes' in /etc/ssh/sshd_config to allow ssh connections.
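The sshd change in step (c) can be scripted. This sketch runs against a scratch copy of the file; on the real build machine you would point it at /etc/ssh/sshd_config and restart sshd afterwards:

```shell
# Enable password authentication in an sshd_config (demonstrated on a temp copy)
CONF=$(mktemp)
printf '#PasswordAuthentication no\n' > "$CONF"
sed -i 's/^#*PasswordAuthentication.*/PasswordAuthentication yes/' "$CONF"
grep PasswordAuthentication "$CONF"
```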

Step 2: Configure Zypper repo
     a) Copy the openSUSE-12.2-DVD-x86_64.iso file to the newly setup build machine (in the location /root/tools/openSUSE-12.2-DVD-x86_64.iso)
     b) Mount it (create the mount point first)
         mkdir -p /mnt/openSUSE-12.2
         mount -o loop /root/tools/openSUSE-12.2-DVD-x86_64.iso /mnt/openSUSE-12.2
     c) Add this iso as zypper repo
         zypper ar -c -t yast2 "iso:/?iso=/root/tools/openSUSE-12.2-DVD-x86_64.iso" "openSuSE 12"

Step 3: Installing kiwi tool set
    a) You can use rpms from the mounted iso to install kiwi.
        cd /mnt/openSUSE-12.2/suse/x86_64
        zypper --no-remote install kiwi
        zypper --no-remote install kiwi-templates
        cd /mnt/openSUSE-12.2/suse/noarch/
        zypper --no-remote install kiwi-desc-oemboot-5.03.37-1.1.1.noarch.rpm kiwi-desc-netboot-5.03.37-1.1.1.noarch.rpm kiwi-pxeboot-5.03.37-1.1.1.noarch.rpm kiwi-desc-isoboot-5.03.37-1.1.1.noarch.rpm

       The above step installs all the available templates, which are used as base images when building appliances

Step 4: Building a JeOS iso image 
     a) mkdir /tmp/myjeos
     b) kiwi --set-repo /mnt/openSUSE-12.2 --build suse-12.1-JeOS --destdir /tmp/myjeos --type iso
      If there are no issues, you can find your appliance at /tmp/myjeos/LimeJeOS-openSUSE-12.1.x86_64-1.12.1.iso
     c) You can deploy it on VMWare ESX server to validate it.
     d) If deployed successfully, the login details are
          Login: root
          Passwd: linux

As shown above, we used the mounted openSUSE-12.2 iso as the repo and the Kiwi template suse-12.1-JeOS as the base template to build the appliance.

Tuesday, October 30, 2012

4 simple kiwi commands used to generate appliance in Suse Studio


Following are the 4 kiwi commands called by create_appliance.sh, which is provided by Suse Studio.

1) kiwi --prepare bootsource --root bootbuild/root --logfile boot-prepare.log
2) kiwi --create bootbuild/root -d bootimage/initrd --logfile boot-create.log
3) kiwi --prepare source --root build/root --logfile prepare.log
4) kiwi --create build/root -d image --prebuiltbootimage bootimage/initrd --logfile create.log

Following are the equivalent 4 kiwi commands run by Suse Studio internally
1) /usr/sbin/kiwi --gzip-cmd /usr/bin/pigz --prepare /studio/runner/bootsource/a29-0.0.8-vmx-x86_64
   --logfile /studio/runner/log/a29-0.0.8-vmx-x86_64/boot-prepare.log --root /studio/runner/bootbuild/a29-0.0.8-vmx-x86_64/root  --package-manager ensconce --ignore-repos
   --add-repo "http://127.0.0.1/repositories/SLES_11_SP1_x86_64" --add-repotype rpm-md
   --add-repo "http://127.0.0.1/repositories/SLE_11_SP1_SDK_x86_64" --add-repotype rpm-md
   --add-repo "http://127.0.0.1/repositories/SLES_11_SP1_Updates_x86_64" --add-repotype rpm-md
   --add-repo "http://127.0.0.1/repositories/SLE_11_SP1_SDK_Updates_x86_64" --add-repotype rpm-md

2) /usr/sbin/kiwi --gzip-cmd /usr/bin/pigz --create /studio/runner/bootbuild/a29-0.0.8-vmx-x86_64/root
  -d /studio/runner/bootimage/a29-0.0.8-vmx-x86_64/initrd --logfile /studio/runner/log/a29-0.0.8-vmx-x86_64/boot-create.log

3) /usr/sbin/kiwi --gzip-cmd /usr/bin/pigz --prepare /studio/runner/source/a29-0.0.8-vmx-x86_64 --root /studio/runner/build/a29-0.0.8-vmx-x86_64/root --ignore-repos
 --add-repo "http://127.0.0.1/repositories/SLES_11_SP1_x86_64" --add-repotype rpm-md
--add-repo "http://127.0.0.1/repositories/SLE_11_SP1_SDK_x86_64" --add-repotype rpm-md
--add-repo "http://127.0.0.1/repositories/SLES_11_SP1_Updates_x86_64" --add-repotype rpm-md
--add-repo "http://127.0.0.1/repositories/SLE_11_SP1_SDK_Updates_x86_64" --add-repotype rpm-md
--logfile /studio/runner/log/a29-0.0.8-vmx-x86_64/prepare.log 2>&1

4) /usr/sbin/kiwi --gzip-cmd /usr/bin/pigz --create /studio/runner/build/a29-0.0.8-vmx-x86_64/root -d /studio/runner/image/a29-0.0.8-vmx-x86_64 --logfile /studio/runner/log/a29-0.0.8-vmx-x86_64/create.log --prebuiltbootimage /studio/runner/bootimage/a29-0.0.8-vmx-x86_64/initrd 2>&1

Monday, October 29, 2012

Jenkins Failed to locate Cygwin installation. Is Cygwin installed?


You configured a new Windows (64-bit) node in Jenkins (running either as a Windows service, via JNLP, or over ssh), and when you try to execute a shell command on it, you may get the error below.

FATAL: command execution failed
hudson.util.IOException2: Failed to locate Cygwin installation. Is Cygwin installed?
	at hudson.plugins.cygpath.CygpathLauncherDecorator$GetCygpathTask.getCygwinRoot(CygpathLauncherDecorator.java:122)
	at hudson.plugins.cygpath.CygpathLauncherDecorator$GetCygpathTask.call(CygpathLauncherDecorator.java:127)
	at hudson.plugins.cygpath.CygpathLauncherDecorator$GetCygpathTask.call(CygpathLauncherDecorator.java:112)
	at hudson.remoting.UserRequest.perform(UserRequest.java:118)
	at hudson.remoting.UserRequest.perform(UserRequest.java:48)
	at hudson.remoting.Request$2.run(Request.java:287)
	at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
	at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
	at java.util.concurrent.FutureTask.run(Unknown Source)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.lang.Thread.run(Unknown Source)
Caused by: hudson.util.jna.JnaException: Win32 error: 2 - null
	at hudson.util.jna.RegistryKey.check(RegistryKey.java:124)
	at hudson.util.jna.RegistryKey.open(RegistryKey.java:223)
	at hudson.util.jna.RegistryKey.openReadonly(RegistryKey.java:218)
	at hudson.plugins.cygpath.CygpathLauncherDecorator$GetCygpathTask.getCygwinRoot(CygpathLauncherDecorator.java:115)
	... 11 more

You can try the fix provided in the link https://issues.jenkins-ci.org/browse/JENKINS-6992

If you are still getting the same issue, check whether the first line of your Jenkins shell build step is
#!/bin/bash
If it is, remove it. Your shell script will then work fine.

Monday, October 15, 2012

fortifyclient uploadFPR: An internal error has occurred

When you try to upload a .fpr file to a Fortify 360 server and you get the error below, this blog provides one root cause and its fix.

fortifyclient -url http://some-fortify-server:8282/f360 -authtoken xxxxxxx-xxxx-xxxx-xxxx-xx1231324 uploadFPR -file myproject.fpr -project myproject -version 3.1


An internal error has occurred.
(org.springframework.oxm.jaxb.JaxbUnmarshallingFailureException: JAXB unmarshalling exception: null; nested exception is javax.xml.bind.UnmarshalException
 - with linked exception:

One reason this error occurs is that the date/time on the machine where you run fortifyclient is too far ahead of, or behind, the server's.

Solution: Set the current date/time on the client machine.

Wednesday, October 10, 2012

Upgrading Virtual Machine hardware versions using Kiwi

In this blog, I'm providing information on upgrading the VMware virtual machine hardware version of a VM generated using the SuSE appliance build utility, Kiwi.

What do hardware versions in VMs mean?
Refer the VMware KB articles on virtual machine hardware versions.

How to determine the current hardware version of a virtual machine?
In the vSphere Client
  1) Click the virtual machine.
  2) Click the Summary tab.
  3) Find the hardware version in the VM Version field.

Upgrading hardware versions using VMWare utilities
 Refer the above mentioned links for this info.

Upgrading hardware versions in automated builds generated using Kiwi (Suse Studio)
KIWI is an application for making a wide variety of image sets for Linux-supported hardware platforms as well as virtualisation systems. 
We use Kiwi to build our virtual appliances. Kiwi builds appliances by referring to a configuration file, config.xml. To upgrade the VM hardware version, you just need to add an entry to this config.xml file (source/config.xml, not bootsource/config.xml).

Your config.xml file may have a "vmwareconfig" section as shown below (as in our case), which describes the properties of the VM it is going to generate. Add the desired HWversion attribute to this line as shown below (we upgraded from 4 to 7).


<vmwareconfig memory='4096' usb='true' arch='ix86' HWversion='7' guestOS='sles'>
    <vmwaredisk id='0' controller='scsi'/>
    <vmwarecdrom id='0' controller='ide'/>
    <vmwarenic mode='bridged' interface='0' driver='e1000'/>
 </vmwareconfig>

By default it was generating HW version 4 images. After adding HWversion='7', it generated version 7 images for us. This is what worked in our case.

If you don't find a "vmwareconfig" section in your config.xml, then you need to handle it under the "preferences" section as described in the Kiwi cookbook. 

<machine arch="arch" memory="MB" HWversion="number" guestOS="suse|sles" domain="dom0|domU">
   <vmconfig-entry>Entry_for_VM_config_file</vmconfig-entry>
   <vmconfig-entry .../>
   <vmnic driver="name" interface="number" mode="mode"/>
   <vmnic .../>
   <vmdisk controller="ide|scsi" id="number"/>
   <vmdvd controller="ide|scsi" id="number"/>
</machine>

Refer Kiwi cookbook for details.

Wednesday, September 26, 2012

Solution to Project Euler Problem 10 - Find the sum of all the primes below two million

http://projecteuler.net/problem=10

Problem
The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17.
Find the sum of all the primes below two million.

Answer:   142913828922

Solution:

#!/usr/bin/perl -w
use strict;

my @primes;
my $ulimit = 2000000;
generate_primes();

sub generate_primes {
        @primes = (2, 3, 5, 7);   # initial prime numbers
        my $sum = 17;             # initial sum = 2+3+5+7
        # 2 is the only even prime, hence skip all even numbers
        for (my $i = 9; $i < $ulimit; $i += 2) {
                my $prime = 1;
                # Trial-divide by all known primes up to the square root
                my $squareroot = sqrt($i);
                foreach my $j (@primes) {
                        last if $j > $squareroot;
                        if ($i % $j == 0) {
                                $prime = 0;
                                last;
                        }
                }
                if ($prime) {
                        # Remember the prime so later numbers are tested against it
                        push @primes, $i;
                        $sum += $i;
                }
        }
        print "Sum of all primes below $ulimit: $sum\n";
}

Friday, September 14, 2012

Building SuSe appliances locally using Kiwi (command line)

SuSE Studio is an online/standalone software appliance build tool. We typically use it to generate appliances based on the SLES OS. The online utility is free to use, but the standalone utility is licensed.
But SuSE Studio is just a web interface: the core of the appliance building is done by a command-line utility, Kiwi. Kiwi is part of openSUSE and is maintained by the open-source community (I guess it's funded by Novell).

Generating appliances online is good for initial exploration, but if we need to generate appliances frequently, can we do this manual task every day? No. We need to automate. This blog explains how to automate the appliance-building process using the Kiwi tool.

Step 1: Configure your appliance on SuSE Studio standalone (preferably), then build, deploy, and validate it.
Step 2: Export your appliance's Kiwi configuration for building the appliance locally.
          In the Build tab of SuSE Studio, at the bottom, you can find the link to download the Kiwi config files.
Step 3: Copy the Kiwi config files to a SLES build machine where Kiwi is installed. Try to use the same Kiwi version SuSE Studio used to build it. Refer to the README file for details.
Step 4: Compile it using the script create_appliance.sh, which comes along with the Kiwi config files when you export. It has two phases:
             1) Boot build - two sub-processes:
                                     a) Prepare (generates boot-prepare.log)
                                     b) Create (generates boot-create.log)
             2) Actual image build - two sub-processes:
                                      a) Prepare (generates prepare.log)
                                      b) Create (generates create.log)

             You can generate the appliance in ISO format, as a virtual appliance (.vmdk, OVF), or in some other format.
Step 5: Deploy it either on a physical machine or on a virtual machine (e.g. a VMware ESX server).
 If it gets deployed without any error, then you are lucky. Otherwise you may get an error like
    VFS: Cannot open root device "<NULL>" or unknown-block
    or some kernel panic message.

Actually, when we download the Kiwi configuration files from SuSE Studio, it does not export the same config files it used for the build; it provides a modified set of files. Is it fooling us?

Here are the steps to overcome it.
Step 6: Log in to the machine where you installed standalone SuSE Studio. If you don't have a standalone installation and got the config files from the online SuSE Studio, you may need to contact SuSE Studio for help.
Step 7: Go to the /studio/runner/ directory (cd /studio/runner/). You will notice two directories of interest: bootsource & source.
Step 8: When you start the build on SuSE Studio, the build log shows which config files were used for that build, e.g.
   Running Step 3: Setting up build environment
  '/studio/runner/bootsource/a24-0.0.3-vmx-x86_64/' 'bootsource'
  '/studio/runner/source/a24-0.0.3-vmx-x86_64/' 'source'

   From this output you can locate where on the Studio machine the actual config files for 'bootsource' and 'source' are stored.
  Copy these two folders to the build machine mentioned in Step 3.
Step 9: You may not be able to use the config files obtained in Step 8 directly. Instead, replace a few selected files in the exported config from Step 2 with the corresponding files obtained in Step 8.
   Some of the files we replaced/added to make it work are
        bootsource/config.xml
        bootsource/root/config.oempartition
        bootsource/root/dump
        bootsource/root/include
        bootsource/root/linuxrc
        bootsource/root/preinit
        bootsource/root/repart
        source/config.xml

 Not all of the files mentioned above may need to be replaced/added, but this is what worked for us.
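The file replacement in Step 9 can be scripted as a small helper. This is only a sketch; the function name and directory arguments are my own, and the file list is simply the one that worked for us:

```shell
# overlay_configs STUDIO_DIR EXPORT_DIR
# Copies the Studio-side config files (Step 8) over the exported Kiwi config (Step 2)
overlay_configs() {
    studio=$1
    export_dir=$2
    for f in config.xml root/config.oempartition root/dump root/include \
             root/linuxrc root/preinit root/repart; do
        cp -v "$studio/bootsource/$f" "$export_dir/bootsource/$f"
    done
    cp -v "$studio/source/config.xml" "$export_dir/source/config.xml"
}
```

Call it as, e.g., `overlay_configs studio-config exported-config` before re-running create_appliance.sh.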
Step 10: If you have a local repository, don't forget to mention it in both config.xml files (source & bootsource).
Step 11: Build and deploy it again (Steps 4 & 5).

Hopefully now your appliance should get deployed without any issues.

Friday, September 7, 2012

How to find source changelist number used to create a perforce branch?

Most Perforce admins will get this query at some point: "From which changelist number was branch //branch/B created off //branch/A?"
There is no single Perforce command to answer it, but you can still figure it out with the method below. Though not a 100% perfect answer, I'd bet it is right 99.99% of the time.

Scenario:
 Branch //release/dpm-3.1/KMS/... was created from //dev/rkm/KMS/... a year ago, and we didn't preserve the source branch (i.e. //dev/rkm/KMS/...) changelist number used to create the new //release branch.
 The standard practice is to record this changelist number in the description of the new branch's changelist while creating it.
 Anyway, if you missed it, no worries; you can still figure it out.

Solution:
Step 1: Find the first changelist created when the new branch //release/dpm-3.1/KMS/... was born. You can use the command below (p4 changes lists newest first, so 'tail -1' gives the oldest).
   $ p4 changes //release/dpm-3.1/KMS/...|tail -1
  Change 1270598 on 2011/11/22 by guruss1@sguru_VWINRSA2-46 'Creating //release/dpm-3.1/KMS '

  Note down this number 1270598

Step 2: Find the ancestral history of the new branch //release/dpm-3.1/KMS/... using 'p4 changes -i' and find the changelist that came just before the birth of the branch (i.e. the changelist before 1270598). The -i option traverses integration history and reports all parent-branch changelists integrated into this new branch.
You can use the command below; here I'm limiting the output to the 3 changelists before the branch took birth.
  $  p4 changes -i //release/dpm-3.1/KMS/...|grep -A3 '1270598'
Change 1270598 on 2011/11/22 by guruss1@sguru_VWINRSA2-46 'Creating //release/dpm-3.1/KMS '
Change 1254271 on 2011/09/20 by sguru@sguru_VWINRSA2-46 'Adding thirdpartylicenses.pdf f'
Change 1254094 on 2011/09/19 by pasuns@Sreeekanth_RKM 'coverage reduced to 70% to make'
Change 1253772 on 2011/09/19 by pasuns@Sreeekanth_RKM 'Fix for KMSRV-1798: Addressing '

From this output I learned that changelist 1254271, which is from the parent branch //dev/rkm/KMS/..., is the last changelist integrated, and hence it is the changelist used to create this branch. 
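The two steps can be chained into one shell pipeline. This is only a sketch: it assumes a configured p4 client (and skips itself when p4 is not installed), and reuses the branch path from the example above:

```shell
# Find the birth changelist of the branch, then show the 3 ancestor changelists
# that precede it; the first of those is the integration source
command -v p4 >/dev/null 2>&1 || exit 0   # skip when no p4 client is available
branch='//release/dpm-3.1/KMS/...'
birth=$(p4 changes "$branch" | tail -1 | awk '{print $2}')
p4 changes -i "$branch" | grep -A3 "Change $birth "
```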

Tuesday, September 4, 2012

Running perforce p4d as service on RHEL

Perforce does not provide a standard script to run the Perforce server p4d as a service on Linux systems. Different people configure it in different ways; here is my way of running it as a service on RHEL machines.
  • Create an account 'perforce' in your machine
  • Download p4d and place it in the directory designated as P4ROOT. Give it exec permission, and change its owner to the 'perforce' account.
         In my case P4ROOT=/home/perforce/server
  • Create a start-up script 'p4d' at /etc/init.d
cat /etc/init.d/p4d
#!/bin/sh
# chkconfig: 345 29 71
# description: This is a daemon which starts the Perforce server on reboot
PATH=/sbin:/bin

test -f /home/perforce/server/p4d || exit 0
export P4ROOT=/home/perforce/server
export P4PORT=1818
RUN_AS_USER=perforce

case "$1" in
start)
  su -m $RUN_AS_USER -c "/home/perforce/server/p4d -r $P4ROOT -J /home/perforce/logs/journal -L /home/perforce/logs/p4err -p tcp:$P4PORT &"
  ;;
stop)
  /usr/local/bin/p4 admin stop
  ;;
*)
  echo "Usage: /etc/init.d/p4d {start|stop}"
  exit 1
esac

exit 0

Note that we are not running p4d as root; instead we use the 'perforce' account.
  • Add a new service for management through chkconfig
          chkconfig --add p4d
  • Configure the run levels on which it should be on
          chkconfig --level 345 p4d on

        It creates soft-links like
           ls -ltr /etc/rc3.d/S29p4d
        lrwxrwxrwx 1 root root 13 Sep  4 16:36 /etc/rc3.d/S29p4d -> ../init.d/p4d

  • Verify it by running commands
            service p4d start
            service p4d stop

Monday, September 3, 2012

commons.ova deploy error on vmware esx - Unsupported hardware family 'virtualbox-2.2'

I downloaded the Perforce Commons OVA file commons.ova, and when I tried to deploy it on a VMware ESX server, it failed with the error
Error: OVF Package is not supported by target: - Line 25: Unsupported hardware family 'virtualbox-2.2'.

The issue is with the commons.ovf file inside commons.ova. It defines the virtual system type as virtualbox-2.2, which is an Oracle open-source virtualization product, so VMware doesn't understand it.
    <vssd:VirtualSystemType>virtualbox-2.2</vssd:VirtualSystemType>

We need to change it in that file from virtualbox-2.2 to vmx-04, a VMware format.

Here are the steps to modify the .ovf file inside the .ova on a Linux system

  • Copy .ova file (commons.ova) to a Linux system
  • Extract the ova file using the tar utility
            tar xvf commons.ova

       Now you get the following files
           commons-disk1.vmdk  commons.ovf
  • Edit the .ovf file (commons.ovf) and make the following changes 
                vi commons.ovf   (change the lines as shown)
             1) Replace virtualbox-2.2 with vmx-04
               <vssd:VirtualSystemType>vmx-04</vssd:VirtualSystemType>
             2) To fix the error "Line 66: OVF hardware element 'ResourceType' with instance ID '5': No support for the virtual hardware device type '2",
  change this item:

      <Item>
        <rasd:Address>0</rasd:Address>
        <rasd:Caption>sataController0</rasd:Caption>
        <rasd:Description>SATA Controller</rasd:Description>
        <rasd:ElementName>sataController0</rasd:ElementName>
        <rasd:InstanceID>5</rasd:InstanceID>
        <rasd:ResourceSubType>AHCI</rasd:ResourceSubType>
        <rasd:ResourceType>20</rasd:ResourceType>
      </Item>


into this item:

     <Item>
        <rasd:Address>0</rasd:Address>
        <rasd:Caption>SCSIController</rasd:Caption>
        <rasd:Description>SCSI Controller</rasd:Description>
        <rasd:ElementName>SCSIController</rasd:ElementName>
        <rasd:InstanceID>5</rasd:InstanceID>
        <rasd:ResourceSubType>lsilogic</rasd:ResourceSubType>
        <rasd:ResourceType>6</rasd:ResourceType>
      </Item>
           
  • Convert the ovf back to an ova. You need the 'ovftool' utility installed on your machine
           ovftool commons.ovf commons.ova

        Now deploy it on your ESX server; there shouldn't be any issue.
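The VirtualSystemType edit above can also be scripted instead of using vi. A sketch, run here against a one-line stand-in for commons.ovf (on the real file you would run the same sed on commons.ovf; the SCSI controller item still has to be edited by hand):

```shell
# Replace the VirtualBox system type with the VMware one in an .ovf file
OVF=$(mktemp)   # stand-in for commons.ovf in this demo
printf '<vssd:VirtualSystemType>virtualbox-2.2</vssd:VirtualSystemType>\n' > "$OVF"
sed -i 's/virtualbox-2.2/vmx-04/' "$OVF"
cat "$OVF"
```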

Sunday, September 2, 2012

The images and .mar files packed in war using maven are getting corrupted

We faced this strange issue: the images and .mar files packed into a war by Maven were getting corrupted.
The images in the local workspace before invoking 'mvn package' were fine, but the same images packed inside the war file by Maven came out corrupted.

The root cause of the issue was the 'filterset' task invoked as part of the 'copy' task from maven-antrun-plugin.
Here is the code from pom.xml which was causing corrupted images
<project>
.
.
  <build>
    <plugins>
      <plugin>
        <artifactId>maven-antrun-plugin</artifactId>
        <executions>
          <execution>
            <id>prepare-webapp</id>
            <phase>process-resources</phase>
            <goals>
              <goal>run</goal>
            </goals>
            <configuration>
              <tasks>
                <echo>copying the web-app folder</echo>

                <tstamp>
                  <format property="build.at" pattern="dd MMMM yyyy HH:mm Z" />
                </tstamp>

                <copy todir="${basedir}/target/main/webapp">
                  <fileset dir="${basedir}/../../../src/web/webapp" includes="**/*" />
                  <filterset>
                    <filter token="VERSION" value="${pom.version}" />
                    <filter token="BUILD_DATE" value="${build.at}" />
                  </filterset>
                </copy>
              </tasks>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>

Here we applied token replacement to all the files under the 'webapp' directory, so the 'filterset' corrupted binary files like images & .mar files.
We were supposed to perform token replacement only on web.xml files; the fix was to restrict it to those files.
Here is the code used to fix the issue.
<plugin>
  <artifactId>maven-antrun-plugin</artifactId>
  <dependencies>
    <dependency>
      <groupId>ant</groupId>
      <artifactId>ant-nodeps</artifactId>
      <version>1.6.5</version>
    </dependency>
  </dependencies>
  <executions>
    <execution>
      <id>prepare-webapp</id>
      <phase>process-resources</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <tasks>
          <tstamp>
            <format property="build.at" pattern="dd MMMM yyyy HH:mm Z" />
          </tstamp>
          <echo>Token replacement in web.xml</echo>
          <replaceregexp byline="true">
            <regexp pattern="@VERSION@" />
            <substitution expression="${pom.version}" />
            <fileset dir="${basedir}/../../../src/web/webapp" includes="**/web.xml" />
          </replaceregexp>
          <replaceregexp byline="true">
            <regexp pattern="@BUILD_DATE@" />
            <substitution expression="${build.at}" />
            <fileset dir="${basedir}/../../../src/web/webapp" includes="**/web.xml" />
          </replaceregexp>
          <echo>copying the web-app folder</echo>
          <copy todir="${basedir}/target/main/webapp">
            <fileset dir="${basedir}/../../../src/web/webapp" includes="**/*" />
          </copy>
        </tasks>
      </configuration>
    </execution>
  </executions>
</plugin>

Thursday, August 30, 2012

Security vulnerabilities in Nexus Pre-2.1 releases


Sonatype has posted security vulnerabilities in Nexus releases prior to 2.1 and recommends upgrading to the latest release (i.e. 2.1).
We are using Nexus 2.0.6 OSS. Considering this alert, I'm planning to upgrade soon.

Here is the quote from sonatype
Unless you want to risk exposing a secure credential, get hacked via some XML, or suffer a denial of service attack via our Artifactory bridge, you probably want to upgrade to Nexus 2.1 right now.


I hope they are not marketing Sonatype Insight with this alert :)

Tuesday, August 7, 2012

How to customize MANIFEST files in WAR using Maven?


A war file contains a MANIFEST file which is created by Maven. This post explains how to customize the manifest file to add our own values, which may be reflected when deploying on application servers like WebSphere.

The post How to create java war (web archive) file using Maven? explains in detail how to build a war file for a sample application using Maven. Refer to it to create your own simple webapp. This post is a continuation of it.

When you explode the war file created for the sample application code described in my previous post, it contains the directory structure below.
   simple-1.1-SNAPSHOT
       images/springsource.png
       jsp/hello.jsp
       META-INF
           maven/com.rsa.siddesh.simple/simple
              pom.properties
              pom.xml
           MANIFEST.MF
       WEB-INF
           web.xml
           classes
             examples/Hello.class
             images/springsource.png
           lib/servlet-api-2.5.jar
       index.html

The default MANIFEST.MF created by Maven looks like this:

MANIFEST.MF
   Manifest-Version: 1.0
   Archiver-Version: Plexus Archiver
   Created-By: Apache Maven
   Built-By: guruss1
   Build-Jdk: 1.6.0_16

We can add many values to it through Maven.
1) 

How to create java war (web archive) file using Maven?

Maven by default creates a JAR package, but we can easily build packages in other formats too. This blog explains how to generate a WAR package, customizing the MANIFEST and web.xml files within it, using a simple HelloWorld example.

Project structure
<proj-home>
        pom.xml
        src
          main
            java
               App.java
        target
           simple-1.1-SNAPSHOT.jar


First we create a Java file which prints Hello World.
  Maven expects Java source files under the src/main/java directory, so let's create our Hello World program App.java there.

The content of App.java is

Thursday, July 19, 2012

Upgrading Sonatype nexus from 1.5 to 1.6


Here are the steps I followed while upgrading a Sonatype Nexus server from version 1.5 to 1.6 on a Linux server (RHEL)

Monday, July 16, 2012

Jira workflows - Dealing with validator

Recently I got a request to make certain fields mandatory while marking an issue as 'Fix' (a step in our workflow). Initially I made the requested custom fields 'Required' in the field configuration, but that backfired: after the change, JIRA started asking for values for fields which are not even visible on the 'create' page.
Finally I learned it needs to be configured with a workflow validator.
I'm sharing my learnings here.

Introduction
Workflow is the place where we define the business process. A JIRA workflow is the set of steps (or statuses) and transitions that an issue goes through during its lifecycle.
The picture below represents the default JIRA workflow.
JIRA workflows consist of steps and transitions:
  A step represents a workflow's current status for an issue. An issue can exist in only one step at any point in time. In the diagram above, the rectangular boxes represent steps/statuses.
 A transition is a link between two steps. A transition allows an issue to move from one step to another. A transition is a one-way link, so if an issue needs to move back and forth between two steps, two transitions need to be created. In the diagram above, the arrows represent transitions.

Friday, July 6, 2012

jenkins nested views not displaying with read permissions after upgrade

Today I upgraded Jenkins from version 1.444 to 1.470 and, to my surprise, it stopped showing any nested views to anonymous users; only the default 'All' view appeared. After logging in to Jenkins, all nested views showed up again.
When I googled I found this relevant link https://issues.jenkins-ci.org/browse/JENKINS-13429
As suggested, I already had Jenkins version > 1.459 and Nested View Plugin 1.8, but I didn't have the Role-based Authorization Strategy plugin, so I even added that to my Jenkins.

Then I realized that this issue can be fixed without adding the Role-based Authorization Strategy.
We are already using the 'Project-based Matrix Authorization Strategy', and migrating to role-based authorization would mean additional work which I didn't want to do.

Anyway, the solution is to grant the 'Anonymous' account the 'Read' permission in the 'View' section of 'Project-based Matrix Authorization Strategy'.

Here is the detailed way of doing it

Login to Jenkins -> Manage Jenkins -> Configure System -> Access Control -> Authorization -> Project-based Matrix Authorization Strategy
For Anonymous user -> View section -> Click 'Read'





Wednesday, July 4, 2012

How to run dos commands from cygwin? File name too long - fix

Typically we write shell scripts and run them under the cygwin shell. Commands used in the scripts, such as rm and mv, come from the cygwin installation. But recently we encountered a tedious issue with the cygwin shell.
The 'rm' command run from cygwin fails with the message "File name too long" for some of the files, and it's causing build failures.

After a bit of research, I found there is a limit on file name length in cygwin (around 256 characters, I believe). Upgrading cygwin might fix the issue, but I didn't want to do that.

Fix: I found that the native DOS 'rmdir' command can delete these lengthy file names without any issue, so I decided to call native DOS commands from the shell scripts.

Here is a way to call native dos commands from cygwin shell
  cmd /C rmdir /S /Q <directory>


here
   cmd   -  Starts a new instance of the Windows command interpreter
       /C   - Carries out the command specified by string and then terminates

  rmdir - dos command to delete directory
     /S - recursive delete of files & sub-directories
     /Q  - Quiet


Just add this command to the shell script. You may need an if condition so that it runs only on the Windows platform.
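The OS check mentioned above could be sketched as below (remove_dir is a hypothetical helper name, not from our actual script):

```shell
# Dispatch to the native DOS rmdir on Windows/cygwin to avoid the
# "File name too long" limit; use plain rm -rf everywhere else.
remove_dir() {
    dir="$1"
    case "$(uname -s)" in
        CYGWIN*)
            # cmd /C runs one command and exits; rmdir /S /Q deletes the
            # whole tree quietly. cygpath -w converts the POSIX path to a
            # Windows path so cmd.exe understands it.
            cmd /C "rmdir /S /Q $(cygpath -w "$dir")"
            ;;
        *)
            rm -rf "$dir"
            ;;
    esac
}
```

On cygwin, `uname -s` reports something like `CYGWIN_NT-6.1`, so the `CYGWIN*` pattern catches all variants.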


Tuesday, June 26, 2012

rsync - Syncing soft-link's as soft-link

We had been running rsync between 2 repositories (created on Linux) for many days, but we had missed handling soft-links properly. The 'l' option of rsync updates the soft-links on the destination server accordingly. If this option is not provided, the directory pointed to by the source link is created in the destination under the soft-link's name, and it is never updated at all!


Here is a usage of rsync
/usr/bin/rsync -avuzl --stats /export/kits/dpm/builds/dev/rkm root@us-repo.org:/export/kits/dpm/builds/dev


Where the options
 -a, --archive               archive mode;
 -v, --verbose               increase verbosity
 -u, --update                skip files that are newer on the receiver
 -z, --compress              compress file data during the transfer
 -l, --links                 copy symlinks as symlinks

Friday, June 8, 2012

Deploying maven build artifacts to Nexus repository

To deploy the artifacts generated by your maven build to a nexus repository, you need to configure it in 2 files:
1) Parent pom.xml
2) settings.xml

and also, to avoid publishing the password in the settings.xml file, you need to configure a passwordless ssh connection between your build machine and the nexus repository host.


Pom.xml changes
Add a distributionManagement section to your parent pom.xml and provide the ID, name, and URL for the various delivery types like snapshots, releases, site, etc.
Here is an example from our project
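A generic sketch of such a section follows; all IDs, names, and hostnames are placeholders (not our project's actual values), and the scpexe:// URLs assume ssh-based deployment via the wagon-ssh-external extension, matching the passwordless-ssh note above:

```xml
<distributionManagement>
  <repository>
    <id>nexus-releases</id>
    <name>Internal Releases</name>
    <url>scpexe://nexus.example.com/export/nexus/releases</url>
  </repository>
  <snapshotRepository>
    <id>nexus-snapshots</id>
    <name>Internal Snapshots</name>
    <url>scpexe://nexus.example.com/export/nexus/snapshots</url>
  </snapshotRepository>
</distributionManagement>
```

Each repository id here must match a corresponding server entry in settings.xml, which is where credentials or ssh settings would normally go.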

Friday, May 25, 2012

Jira Administration - Custom Fields & Screens

Jira has complex administration concepts, and performing Jira admin tasks without understanding the complete flow leads to mistakes. In this post, I explain how custom fields are created and how to link them to project screens.

Creating custom fields
Let's assume you get a request to create a custom field "Severity". First, check whether this custom field already exists:

  • Login to Jira -> Administration -> Issue Fields -> Custom Fields
If it is not available, create it straight away

    • Login to Jira -> Administration -> Issue Fields -> Custom Fields -> Add custom Field -> Choose the field type -> Provide the field name, search template, applicable issue types, and applicable context.
   Custom Field Context
        The custom field we created can be associated with all issues by selecting "Global context", or with specific projects only. Depending on the intended use of the custom field, make it either global or project-specific.

If the custom field already exists, try to reuse it.

Screens
Screens group multiple issue fields. Using Screens, you can control which fields are displayed. You can also split fields on a Screen into multiple tabs.

Monday, May 21, 2012

Issue: Nexus is too slow in downloading pom files

Recently our open source nexus [1.5] instance became too slow in downloading .pom files. It had worked well for years, but it started taking more than 2 minutes to download a .pom file of just 422 bytes, though a few files still downloaded fast. Manually, we could download the same file with a browser, wget, or curl in a fraction of a second, but artifact downloads through maven were affected badly.

We knew about changes in our network: our nexus host machine had been moved to a restricted network where access to the internet was blocked. Finally we nailed the issue down: maven/nexus was trying to contact external public repositories for updates on each download, and that was slowing things down.


Issue resolution: Re-order group repositories.

By default with the nexus installation, we had "Maven Central" at the top of the public repositories.
Since Maven Central is blocked in our network, nexus was slowed down.
I moved our own hosted repositories to the top of the "Ordered Group Repositories".
Now every artifact search is matched in our local repository instead of going out to the external network, and downloads are fast again.


Login to Nexus -> Public Repositories -> 





Tuesday, May 15, 2012

ssh command run from Jenkins shell will take “/” as home directory

Issue:
ssh commands run from a Jenkins shell take "/" as the home directory and hence look for files like "/.ssh/known_hosts" and "/.ssh/id_rsa" in that directory. This typically happens in a Windows cygwin shell.

Environment:
 Jenkins 
 Windows Slave
 Cygwin
 ssh


Error:    
debug3: check_host_in_hostfile: filename /.ssh/known_hosts
debug3: check_host_in_hostfile: filename /etc/ssh_known_hosts
debug3: check_host_in_hostfile: filename /.ssh/known_hosts
debug3: check_host_in_hostfile: filename /etc/ssh_known_hosts
debug3: check_host_in_hostfile: filename /.ssh/known_hosts
debug3: check_host_in_hostfile: filename /etc/ssh_known_hosts
debug2: no key of type 0 for host library-blr.ap.rsa.net
debug3: check_host_in_hostfile: filename /.ssh/known_hosts2
debug3: check_host_in_hostfile: filename /etc/ssh_known_hosts2
debug3: check_host_in_hostfile: filename /.ssh/known_hosts
debug3: check_host_in_hostfile: filename /etc/ssh_known_hosts
debug2: no key of type 2 for host library-blr.ap.rsa.net
Build was aborted

Fix:

1) Run the Jenkins service as "Administrator". By default it runs as "Local user".
              Start -> run -> services.msc -> Jenkins Slave -> Right click -> Properties -> Log on -> This account ->
           Give Administrator & password.
        
        Then in Jenkins -> Manage Nodes -> Select your machine -> Disconnect -> Then reconnect.
        Make sure the slave is launched as a "Windows service".

        The above step should fix the issue.

     Note: Step 2 is optional. 

2) Run Cygwin-sshd as "Administrator". By default it will be running as "cyg_server".

Jenkins upgrade > 1.444
If you upgrade Jenkins to a recent version, ssh commands on Windows will again hang for the same reason mentioned above.
The fix is still not to run the Jenkins service as 'Local user'. You can now configure this through the Jenkins node configuration page.
Login to Jenkins -> Manage Jenkins -> Manage Nodes -> Click on your node -> Configure -> Run service as -> select 'Log on using a different account' ->
For ex: 
User name:   .\Administrator  
password:  ********

Note: the .\ prefix in the user name is a must; otherwise it fails with
ERROR: Failed to create a service: Status Invalid Service Account




That's it!!