Friday, November 27, 2009

Creating iso image using mkisofs

An ISO image is an archive file (disk image) of an optical disc in a conventional ISO (International Organization for Standardization) format. ISO image files typically have a file extension of .ISO. The name "ISO" is taken from the ISO 9660 file system used with CD-ROM media, but an ISO image can also contain a UDF file system, because UDF is backward compatible with ISO 9660.

mkisofs is effectively a pre-mastering program that generates an ISO 9660 filesystem: it takes a snapshot of a given directory tree and generates a binary image which will correspond to an ISO 9660 filesystem when written to a block device.

Here is an example of mkisofs usage:

mkisofs -r -J -l -d -allow-multidot -allow-leading-dots -hide-rr-moved -disable-deep-relocation -V "EPM-1.1-LINUX-x86_64" -o epm-linux-x86_64.iso Disk1/

where
-r, -rational-rock Generate rationalized Rock Ridge directory information
-J, -joliet Generate Joliet directory information
-l, -full-iso9660-filenames Allow full 31 character filenames for ISO9660 names
-d, -omit-period Omit trailing periods from filenames (violates ISO9660)
-allow-multidot Allow more than one dot in filenames (e.g. .tar.gz) (violates ISO9660)
-allow-leading-dots Allow ISO9660 filenames to start with '.' (violates ISO9660)
-hide-rr-moved Rename RR_MOVED to .rr_moved in Rock Ridge tree
-D, -disable-deep-relocation Disable deep directory relocation (violates ISO9660)
-V ID, -volid ID Set Volume ID
-o FILE, -output FILE Set output file name
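The invocation above can be tried end to end on a scratch directory. A minimal sketch, assuming either mkisofs or genisoimage (the name mkisofs commonly goes by on Linux) is installed; the volume ID, paths, and file contents here are made up:

```shell
#!/bin/sh
# Build a tiny directory tree and master it into an ISO image.
set -e
tree=$(mktemp -d)
echo "hello" > "$tree/readme.txt"
iso=$(mktemp -u).iso
# Use whichever pre-mastering tool is installed; both accept these options.
tool=$(command -v mkisofs || command -v genisoimage || true)
if [ -n "$tool" ]; then
  "$tool" -r -J -V "DEMO" -o "$iso" "$tree"
  echo "created $iso"
else
  echo "mkisofs/genisoimage not installed"
fi
```

The resulting image can then be loop-mounted as shown in the next section.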


How to mount an ISO image under linux?
mount -o loop -t iso9660 iso-name.iso /mountpoint

Tuesday, November 24, 2009

0509-036 Cannot load program ssh because of the following errors: Dependent module /usr/local/ssl/lib/libcrypto.a(libcrypto.so) could not be loaded.

I installed openssh without any problem, but when I invoked either ssh or sshd, it started giving this error:

> ssh
exec(): 0509-036 Cannot load program ssh because of the following errors:
0509-150 Dependent module /usr/local/ssl/lib/libcrypto.a(libcrypto.so) could not be loaded.
0509-152 Member libcrypto.so is not found in archive

Initially I thought it was an OpenSSL version mismatch and forcibly re-installed openssh from RPM, but that didn't solve the problem.

Later I decided to analyze the library /usr/local/ssl/lib/libcrypto.a.
Step 1) Copy the libcrypto.a to a temporary directory
Step 2) Extract files in the archive libcrypto.a
ar -xv ./libcrypto.a
Step 3) ls
libcrypto.a libcrypto.so.0 libcrypto.so.0.9.7
Here I found that there is no shared library member named libcrypto.so, so I decided to create one.
Step 4) cp libcrypto.so.0 libcrypto.so
Step 5) ls
libcrypto.a libcrypto.so libcrypto.so.0 libcrypto.so.0.9.7
Step 6) Append the newly created library libcrypto.so to archive libcrypto.a
ar -qv ./libcrypto.a libcrypto.so
Step 7) copy the newly created archive to desired location
cp ./libcrypto.a /usr/local/ssl/lib/libcrypto.a

That's it ... my problem was fixed:
> ssh
usage: ssh [-1246AaCfgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
[-D [bind_address:]port] [-e escape_char] [-F configfile]
[-i identity_file] [-L [bind_address:]port:host:hostport]
[-l login_name] [-m mac_spec] [-O ctl_cmd] [-o option] [-p port]
[-R [bind_address:]port:host:hostport] [-S ctl_path]
[-w local_tun[:remote_tun]] [user@]hostname [command]
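For reference, the archive surgery in steps 2-7 can be rehearsed on a scratch copy with generic ar commands; the member contents below are dummies standing in for the real shared objects:

```shell
#!/bin/sh
# Recreate the fix on a throwaway archive: the archive initially lacks a
# member named libcrypto.so, so we copy an existing member and append it.
set -e
tmp=$(mktemp -d)
cd "$tmp"
echo "dummy shared object" > libcrypto.so.0
ar -qv libcrypto.a libcrypto.so.0   # archive containing only the versioned name
cp libcrypto.so.0 libcrypto.so      # create the missing member (step 4)
ar -qv libcrypto.a libcrypto.so     # append it to the archive (step 6)
ar -t libcrypto.a                   # list members: both names are now present
```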

Thursday, November 5, 2009

Tweak single instance of tinderbox to work on two different perforce servers

Tinderbox is a tool developed by Mozilla which collects build logs and presents the logs and their results in a clear, concise way.

But one instance of Tinderbox works with only one SCM system. In our organization we have two different Perforce servers, but we don't want to maintain two separate Tinderbox installations for them. To make one instance of Tinderbox work with two different Perforce servers, the following tweak needs to be made to the Tinderbox code.

File 1: tb_code/lib/TinderDB/VC_Perforce.pm
Add the following lines

936a937,942
> if ($filespec =~ /RKM/) { #Here RKM is the depot name, whose p4 server is different
> $ENV{'P4PORT'} = $TinderConfig::RKM_PERFORCE_PORT;
> $ENV{'P4USER'} = $TinderConfig::RKM_PERFORCE_USER;
> $ENV{'P4PASSWD'} = $TinderConfig::RKM_PERFORCE_PASSWD;
> }
>


File 2: tb_code/local_conf/TinderConfig.pm
Add the following lines
62a63,66
> #RKM Perforce variables
> $RKM_PERFORCE_PORT="perforce:1777";
> $RKM_PERFORCE_USER="p4id";
> $RKM_PERFORCE_PASSWD="xxxxxx";

Basically, we define a separate set of Perforce variables in File 2, and File 1 uses them whenever the filespec matches a depot that lives on the alternative Perforce server.
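The gist of the tweak can be rendered as a shell sketch (the port and user values are the placeholders from the snippets above): choose the Perforce environment by matching the depot name in the filespec.

```shell
#!/bin/sh
# Pick Perforce connection settings based on the depot named in the filespec,
# mirroring the Perl conditional added to VC_Perforce.pm.
filespec="//RKM/mainline/..."
case "$filespec" in
  *RKM*) P4PORT="perforce:1777"; P4USER="p4id" ;;  # alternative server
  *)     P4PORT="perforce:1666"; P4USER="p4id" ;;  # default server
esac
export P4PORT P4USER
echo "P4PORT=$P4PORT"
```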

Wednesday, November 4, 2009

AIX: make: 1254-055 Dependency line needs colon or double colon

We get the compilation error "make: 1254-055 Dependency line needs colon or double colon" while compiling C/C++ code on AIX machines.

It occurs when the Makefile is written in GNU make (gmake) format and then compiled with the native AIX make.

It can be resolved by installing gmake on the AIX machine and then using it to compile.

We can verify whether the installed make is GNU make by running 'make -v'. GNU make prints its version information; native AIX make reports an unrecognized flag, as the examples below show.

-bash-3.00$ /usr/local/bin/make -v
GNU Make 3.80
Copyright (C) 2002 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.

This is GNU Make



-bash-3.00$ make -v
make: Not a recognized flag: v
usage: make [-eiknqrst] [-k|-S] [-d[A|adg[1|2]msv]] [-D variable] [-f makefile ] [-j [jobs]] [variable=value ...] [target ...]

This is not GNU make.
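The check can be scripted; here is a small sketch that probes whatever make is first in PATH:

```shell
#!/bin/sh
# GNU make answers -v with its version banner; native AIX make (or a
# missing make) produces no such banner, so the grep fails.
if make -v 2>/dev/null | grep -q 'GNU Make'; then
  result="GNU make"
else
  result="not GNU make"
fi
echo "$result"
```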

Monday, November 2, 2009

Posting a JIRA bug using Perl Mechanize

Perl provides modules that can be used as a command-line browser to automate tasks that depend on web pages. Among them, LWP and Mechanize are the important ones; Mechanize is the newer module, with more features than LWP.
Recently I wrote a Perl script to integrate a tool with the JIRA bug tracker using Mechanize, and I want to document Mechanize usage with JIRA here.
Basically, this Perl script posts a bug in JIRA after authenticating.

#!/usr/bin/perl -w
use WWW::Mechanize;
use HTTP::Cookies;
# Note: $jira_id, $jira_pass, $product, %formdata, etc. are defined elsewhere in the full tool.
$mech = WWW::Mechanize->new();

# Authenticate to Jira and get a cookie back for the subsequent post.
$root_uri = "http://your-jira-site.com";

$mech->cookie_jar(HTTP::Cookies->new()); # Don't write cookies to file!
$mech->get($root_uri);
#login to Jira
$mech->form_name('loginform');
$mech->field(os_username => $jira_id);
$mech->field(os_password => $jira_pass);
$mech->click();
my $response = $mech->content();
if ($response !~ m/Dashboard for (\w+) (\w+)/) {
print_error("Failed to add new bug: authentication failed. Below you might find a clue as to what happened.");
print_error("\n");
print_error($response);
return;
} else {
$username="$1 $2";
}



print "\n\ncreating new Jira bug ...\n\n\n";
my $show_uri = "$root_uri/browse";
# Go to Product page in Jira
$mech->follow_link(text => "$product", n => 1);
#Browse to create new issue form
$mech->follow_link(text => "Create a new issue in project $product", n => 1);
$mech->form_name('jiraform');
$mech->click();

#Create a new bug
$mech->form_name('jiraform');
$mech->field(summary => "$formdata{hotfix}: $formdata{bugtitle}");
$mech->field(components => "$components_map{\"$formdata{component}\"}");
$mech->field(customfield_10044 => "$formdata{platform}"); #OS/Platform
$mech->field(customfield_10054 => "moderate"); #Bug severity
$mech->field(assignee => "$jira_id");
$mech->field(description => "$comment");
$mech->field(customfield_10067 => "$_[0]"); #Found in Version
$mech->field(customfield_10007 => "All"); #Appserver
$mech->field(customfield_10060 => "Support request (CE_Assistance)"); #Type of defect
$mech->field(customfield_10020 => "CS - other"); #Discovered by function
$mech->field(customfield_10019 => "Use in production "); #Discovered by activity
$mech->click();
print "\n\n  posting bug ...\n";
$response = $mech->content();
my $bz_msg;
my $bug_number;
if ($response =~ m/Key:.*?browse\/(\w+)-(\w+)/s) {
$bug_number = "$1-$2";
print "done\n\n\n";
$bz_msg = "\n\nBug #$bug_number for version $_[0] has been posted to " . "Jira.\n\n\n";
print "$bz_msg";
$bz_donemsg .= $bz_msg;
} else {
$bz_donemsg .= "\n\nNo Jira bug was filed for version $_[0]. This will need to be done manually.\n\n\n";
print_error("Failed to add new bug (Jira output follows):\n$response");
}



Reference: http://www.ibm.com/developerworks/linux/library/wa-perlsecure.html

Friday, October 30, 2009

AIX install/uninstall software - ssh

In AIX, we can use smitty to install software.
To uninstall, we can use "smit remove"

I wanted to install ssh on an AIX machine, but somehow smit failed, so I tried compiling openssh from source.
We can download the openssh source code and compile it on our AIX box. OpenSSH needs zlib and OpenSSL to be installed first (I compiled both of these).
It looks for the zlib.h and zconf.h files in the /usr/include directory and the zlib library in the /usr/lib directory.

After that we need to add new service on the system. Refer http://blog.thilelli.net/post/2005/06/14/How-to-Add-a-New-sshd_adm-Service-on-AIX-5X for this.

How to start sshd?
"startsrc -s sshd" will start the daemon.
Use "lssrc -s sshd" to see if it's already running.

Another way to start is "/usr/sbin/sshd -de"

lslpp --> The lslpp command displays information about installed filesets or fileset updates.
Example
[root@re-aix02:/usr/local/etc] lslpp -l bos.rte.libc openssh.base.server
Fileset Level State Description
----------------------------------------------------------------------------
Path: /usr/lib/objrepos
bos.rte.libc 5.1.0.69 COMMITTED libc Library
openssh.base.server 4.1.0.1 COMMITTED Open Secure Shell Server


oslevel -> Reports the latest installed maintenance level of the system


Here is some more info about installing ssh using smit

Installing openSSH on 5.1, 5.2, and 5.3
At 5.1, 5.2, and 5.3, the installation of openssh itself is in installp format, but all the prerequisites (including openssl) can be installed using the same rpm -i commands (using the same 4.3.3 rpm packages). The installp-format package can be downloaded from the following site: http://sourceforge.net/projects/openssh-aix. First, install the prerequisites using the following commands:
1. rpm -i zlib-1.1.4-3.aix4.3.ppc.rpm
2. rpm -i prngd-0.9.23-3.aix4.3.ppc.rpm (AIX 5.2 uses /dev/urandom instead)
3. rpm -i openssl-0.9.7d-1.aix5.1.ppc.rpm
4. rpm -i openssl-devel-0.9.7d-1.aix5.1.ppc.rpm

Then use smitty installp to install the openssh filesets extracted from the tar file openssh-3.8.1p1_51.tar (for 5.1), openssh-3.8.1p1_52.tar (for 5.2), or openssh-3.8.1p1_53.tar (for 5.3). The following steps need to be followed to install openssh.


1. In the directory where the images are, run the command inutoc.
2. Run smitty install.
3. Select "Install and Update Software".
4. While in smitty do the following:
4.1 Select "Install Software".
4.2 Enter a dot (".") in the field for "INPUT device / directory for software" and press ENTER.
4.3 Enter openssh in the "SOFTWARE to install" field.
4.4 Scroll down to "Preview new LICENSE agreements?" and press tab key to change the field to yes. Read the license agreement.
4.5 Scroll down to "ACCEPT new license agreements?" and press tab to change the field to yes. Press ENTER to begin the software installation.

Where to get openssl?
The OpenSSH software is shipped on the AIX 5.3 Expansion Pack. This version of OpenSSH is compiled and packaged as installp packages using the openssh-3.8.p1 level of source code. The installp packages include the man pages and the translated message filesets. The OpenSSH program contained in the Expansion Pack CD-ROM media is licensed under the terms and conditions of the IBM® International Program License Agreement (IPLA) for Non-Warranted Programs.

Before installing the OpenSSH installp format packages, you must install the Open Secure Sockets Layer (OpenSSL) software that contains the encrypted library. OpenSSL is available in RPM packages on the AIX Toolbox for Linux® Applications CD, or you can also download the packages from the following AIX Toolbox for Linux Applications Web site:

http://www-1.ibm.com/servers/aix/products/aixos/linux/download.html

Because the OpenSSL package contains cryptographic content, you must register on the Web site to download the packages. You can download the packages by completing the following steps:

1. Click the AIX Toolbox Cryptographic Content link on the right side of the AIX Toolbox for Linux Applications Web site.
2. Click I have not registered before.
3. Fill in the required fields in the form.
4. Read the license and then click Accept License. The browser automatically redirects to the download page.
5. Scroll down the list of cryptographic content packages until you see openssl-0.9.6m-1.aix4.3.ppc.rpm under OpenSSL — SSL Cryptographic Libraries.
6. Click the Download Now! button for the openssl-0.9.6m-1.aix4.3.ppc.rpm.

After you download the OpenSSL package, you can install OpenSSL and OpenSSH.

Thursday, October 15, 2009

mount: server:path failed, reason given by server: Permission denied

We get this peculiar NFS mount problem for quite a few reasons. Recently we encountered it and scratched our heads debugging it. We tried restarting the NFS server (service nfs restart), re-running exportfs, and validating /etc/exports on the server. Nothing solved the issue.

engweb:/mnt/kits# mount -a
mount: library-hq:/export/kits/ failed, reason given by server: Permission denied
mount: library-hq:/export2/kits/ failed, reason given by server: Permission denied


The actual issue was that the root partition (/) was full on the machine hosting the NFS server. After clearing disk space, everything was resolved.

Wednesday, October 14, 2009

How to access CD from LINUX installed in VMWare Virtual machine?

In VMware Workstation click on VM -> Settings -> CD-ROM.
Make sure the check boxes "Connected" and "Connect at power on" are ticked.
Then restart the Linux machine available in the VM.

After this you need to mount the CD-ROM at a local location.
Find out where the CD-ROM device is available on your machine.
>>> dmesg|grep -i cdrom
hdc: Vmware Virtual IDE CDROM Drive, ATAPI CD/DVD-ROM drive

From the above output, on my machine it is available at /dev/hdc.

Now mount it at a local location, "/mnt/cdrom". Make sure this directory has already been created.

>>> mount /dev/hdc /mnt/cdrom
mount: block device /dev/hdc is write-protected, mounting read-only

That's it. Now you can access the CD contents from /mnt/cdrom.

Saturday, September 26, 2009

List of useful mysql commands

Let's start with mysql database administration commands.
1) /bin/mysql -h hostname -u root -p #To login
2) SHOW DATABASES; # List all databases
3) DROP TABLE [table name]; #To delete a table
4) To create a new user - Login as root, switch to mysql db, make the user, update privileges
mysql> USE mysql;
mysql> INSERT INTO user (Host, User, Password) VALUES ('%', 'username', PASSWORD('Password'));
mysql> FLUSH PRIVILEGES;
5) To change a password
/bin/mysqladmin -u username -h hostname -p password 'new-password'
or
mysql> SET PASSWORD FOR 'User'@'hostname'=PASSWORD('Passwd');
mysql> FLUSH PRIVILEGES;
6) To list available users - login as root:
mysql> USE mysql;
mysql> SELECT User FROM user;
mysql> SELECT User, Password FROM user; # To know whether a password is set for users

Here are some more useful commands

1) CREATE DATABASE 134a;
2) DROP DATABASE 134a;
3) USE 134a;
4) CREATE TABLE president (
last_name VARCHAR(15) NOT NULL,
first_name VARCHAR(15) NOT NULL,
state VARCHAR(2) NOT NULL,
city VARCHAR(20) NOT NULL,
birth DATE NOT NULL DEFAULT '0000-00-00',
death DATE NULL
);
5) SHOW TABLES; #list tables
6) DESCRIBE president; #to view structure of table
7) INSERT INTO president VALUES ('Washington','George','VA','New York','19320212','19991214');
8) SELECT * FROM president;
9) SELECT * FROM president WHERE state="VA"; #selecting rows by using WHERE clause
10) SELECT state,first_name,last_name FROM president; #selecting specific columns
11) DELETE FROM president WHERE first_name="George"; # Deleting selected row
12) UPDATE president SET state="CA" WHERE first_name="George"; #Modify entries
13) LOAD DATA LOCAL INFILE 'president_db' INTO TABLE president; #Loading your data from a file into a table. Also try "mysql -u USERNAME -p < my_mysql_file
14) SELECT * FROM president WHERE death IS NULL; # list presidents who are alive (note: death=NULL never matches any row; NULL must be tested with IS NULL)
15) SELECT last_name, birth FROM president WHERE birth < '1800-09-01';
16) SELECT last_name, birth FROM president ORDER BY birth ASC LIMIT 1; #select president who was born first
17) SELECT state, count(*) AS times FROM president GROUP BY state ORDER BY times DESC LIMIT 5; # Names of first 5 states in which the greatest number of presidents have been born
18) SELECT * FROM president WHERE (YEAR (now())-YEAR(birth)) < 60; #President who have been born in last 60 years
19) SELECT last_name, birth, death, FLOOR ((TO_DAYS(death)-TO_DAYS(birth))/365) AS age FROM president WHERE death IS NOT NULL ORDER BY age DESC LIMIT 10; #President who have died by their age in descending order
20) SELECT last_name, address, test_date, score FROM test, student WHERE test.ssn = student.ssn;

You can refer to mysql.com/documentation/index.html and
mysql.com/documentation/bychapter/manual_Introduction.html for more details.

Wednesday, September 16, 2009

How to redirect stdout & stderr to a file using tee?

ant -f build-release.xml dist 2>&1 |tee /tmp/tt
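A self-contained demonstration of why the 2>&1 must come before the pipe (the ant invocation above is just one example; the log path here is a throwaway):

```shell
#!/bin/sh
# 2>&1 duplicates stderr onto stdout before the pipe, so tee receives
# and records both streams.
{ echo "a line on stdout"; echo "a line on stderr" >&2; } 2>&1 | tee /tmp/both.log
```

After running this, /tmp/both.log contains both lines, and both also appear on the terminal.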

Sunday, August 23, 2009

Installing custom modules in Drupal

1) Download the module from drupal website which you are interested and place it in apps\drupal\htdocs\sites\all\modules.
For Ex: I wanted to install Pathauto module, I downloaded it from http://drupal.org/project/pathauto and placed it in C:\Program Files\BitNami Drupal 6 Stack\apps\drupal\htdocs\sites\all\modules.

2) Extract it and read README.txt and INSTALL.txt files once.

3) Enable the module in the administration tools in drupal(i.e Administer -> Modules -> tick the downloaded module -> Click save Configuration)

Thursday, July 9, 2009

How to use an external variable inside awk command?

Sometimes we need to pass external shell variables into an awk statement. Here is a way to do it.

Suppose I want to use $user in my awk command:
$ export user=uid #$user is defined

Pass it to awk command with -v option

$ cat /etc/passwd|tail -1|awk -v var=$user -F":" '{print "Your " var " is " $1}'

o/p: Your uid is guruss1
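The same -v technique on a fixed input line, so the output is predictable (the passwd entry is made up for the demo, and the extra cat/tail are dropped):

```shell
#!/bin/sh
user=uid
# -F':' splits fields on colons; -v copies the shell variable into awk's var.
printf 'guruss1:x:1000:1000::/home/guruss1:/bin/bash\n' |
  awk -v var="$user" -F':' '{print "Your " var " is " $1}'
```

This prints: Your uid is guruss1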

Monday, June 29, 2009

Dell laptop support

Dell provides very good service through its support site http://support.dell.com/.
My laptop's video driver got corrupted. To fix it, I just had to visit the support site, click on "Drivers & downloads", and provide my laptop's service tag; this lists the drivers already installed, and from there we can download the required drivers.

Premature end of script headers

This error appears when we execute a Perl/PHP/Python script from the Apache HTTP web server. It is a common error that gets reported for many different problems.
The actual errors reported in the Apache log file were:

Use of uninitialized value in concatenation (.) or string at HAT.pm line 887.
Use of uninitialized value in string eq at HAT.pm line 888.
[Mon Jun 29 04:07:36 2009] [error] (2)No such file or directory: exec of /var/www/main/CEandPorting/internal/hotfix/automation/cgi-bin/hat_branches.cgi failed
[Mon Jun 29 04:07:36 2009] [error] [client 128.222.51.107] Premature end of script headers: /var/www/main/CEandPorting/internal/hotfix/automation/cgi-bin/hat_branches.cgi

This error was resolved for me after converting the CGI script from Windows (CRLF) to Unix line endings (using Notepad++).

Tuesday, June 23, 2009

CVS commands for novices

1) Building your repository
export CVSROOT=/home/siddesh/cvs-repo
mkdir -p $CVSROOT
cvs init or (cvs -d $CVSROOT init)

# The above command sets up your chosen directory as a CVS repository. Setting up the repository produces a place to store projects and also adds the special CVSROOT directory. The CVSROOT directory contains configuration and metadata files. Projects are stored in subdirectories of the repository root directory, which is /home/siddesh/cvs-repo in our example.

2) Importing Projects
#Create your initial project directory structure, possibly in /tmp. Once the project is stored in CVS, this initial version can be removed.

mkdir ~/cvs-home #Create a temporary project directory
touch ~/cvs-home/lilprogram/source.c #create temp file

#Once you have your initial structure, add any initial files you want. Change into the root directory of the project. Then, from within that directory, import the project with the command:

cd ~/cvs-home/lilprogram
cvs import -m "Test" lilprogram loci lilprogram_0_0
(or cvs -d repository_path import name_of_project vendor_tag release_tag)

Example 2: Importing a project

/tmp$ mkdir example
/tmp$ touch example/file1
/tmp$ touch example/file2
/tmp$ cd example
/tmp/example$ cvs -d /var/lib/cvsroot import example example_project ver_0-1

#In the repository, the imported project is stored as a collection of RCS files.
ls -l /home/siddesh/cvs_repo/
total 8
drwxrwxr-x 3 siddesh siddesh 4096 Feb 9 18:55 CVSROOT
drwxrwxr-x 3 siddesh siddesh 4096 Feb 9 19:03 lilprogram

ls -l /home/siddesh/cvs_repo/lilprogram/
total 8
drwxrwxr-x 2 siddesh siddesh 4096 Feb 9 18:53 Attic
-r--r--r-- 1 siddesh siddesh 1060 Feb 9 18:49 source.c,v

3) Accessing Remote Repositories
There are several ways to access a remote repository. One among them is ext method with the SSH protocol. The ext and SSH approach uses the ext repository access method, with an SSH client as the program that performs the connection.

Your first step is to install SSH on the client machine. Make sure that the client-end SSH protocol matches the server's SSH protocol. Set up SSH keys or passwords and test the connection. Using SSH enables you to create a secure connection to the remote repository.

Next, if you're on a Unix or Linux system, set the CVS_RSH environment variable on your client machine to the name of your SSH program, usually ssh or ssh2.

On a remote machine, the repository path takes the form:
[:method:][[[user][:password]@]hostname[:[port]]]/path

Ex: :ext:cvs:/home/cvs
where "ext" is the method and "cvs" is the hostname.

4) Checkout
#CVS stores projects and files in a central repository, but you work from a working copy, called a sandbox , in your local directories. You create that sandbox with cvs checkout.

mkdir ~/cvs_home/client1
cd !$
cvs checkout lilprogram or (cvs -d repository_path checkout project_name)

Example 2:
$mkdir ~/cvs
$cd ~/cvs
$cvs -d /var/lib/cvsroot checkout example
cvs checkout: Updating example
U example/file1
U example/file2

#The checkout command puts a copy of the project's files and subdirectories into a directory named for the project, created in the current working directory. It also puts some administrative files of its own in a subdirectory of the project directory, called CVS.

#You can check out an individual file or subdirectory of a project by replacing project_name with the pathname to the file or directory, from the project's root directory. CVS stores the repository path as part of the sandbox, so you should never again need to use -d repository_path in commands executed within that sandbox.

Example 3: Remote repository checkout
cvs -d :ext:cvs:/home/cvs checkout cvsbook


5) Commit & Update
#Once you've checked out project files into a sandbox, you can edit those files with your preferred editor. Changes are not synchronized with the repository until you run the cvs commit command. This command is best run from the root directory of your sandbox, and it must be run from within the sandbox.

vi source.c -> make changes
cvs update
cvs commit source.c (give description)

#If a revision in the repository is more recent than the revision the sandbox was based on, cvs commit fails. Use the cvs update command to merge the changed files; then run cvs commit again.
Ex: cvs update -d
#As a command option to the update command, -d downloads new directories and files.

#As the update command runs, it generates a list of files that are modified. To the immediate left of each filename, you will see a single uppercase letter. Those letters report the status of each file listed, and they have the following meanings:

U filename
Updated successfully. A newer version in the repository has replaced your sandbox version.
A filename
Marked for addition but not yet added to the repository (need to run a cvs commit).
R filename
Marked for removal but not yet removed from the repository (need to run a cvs commit).
M filename
Modified in your working directory. The file in the sandbox is more recent than the repository version or the sandbox and the repository both had changes that the system could safely merge into your sandbox copy (need to run a cvs commit).
C filename
There was a conflict between the repository copy and your copy. The conflict requires human intervention.
? filename
The file is in your working directory but not in the repository. CVS doesn't know what to do with it.

The A, R, and M codes mean that your sandbox contains changes that are not in the repository and it would be a good idea to run a cvs commit.

If CVS can't merge a modified file successfully with the copy in the repository, it announces the conflict in the output of cvs update.

CVS automatically merges files when the changes are on different lines. If a line in the repository copy is different from the corresponding line in the sandbox copy, CVS reports a conflict and creates a file with the two revisions of the line surrounded by special marks, as shown below

<<<<<<<
This line came from the sandbox.
=======
This line came from the repository.
>>>>>>> 1.4

The contents of the original file are stored in .#file.revision in the file's working directory, and the results of the merge are stored as the original filename. Search for these marks in the file with the original name, edit the file, then commit the changed file to the repository.

6) Adding Files
To add a file to a project in the repository, first create the file in your sandbox. Be sure to consider your project's structure and place the file in the correct directory. Then, issue the following command from the sandbox directory containing the file:
cvs add filename

cvs add header.h; cvs commit

This command marks the new file for inclusion in the repository. Directories are added with the same command. Files within a directory can't be added until the directory itself is added. A file is only marked for addition when you run cvs add ; it is actually added to the repository when the next cvs commit is run. A directory is added to the repository immediately.

7) Removing Files
To remove a file from the repository, first remove the file from the sandbox directory; then run the following command from the sandbox directory that contained the file:
cvs remove filename

rm header.h; cvs remove header.h; cvs commit

The deletion does not take effect until the next cvs commit command is run; the file remains in the repository until then.

After the cvs commit is run, CVS doesn't remove the file entirely; it puts it in a special subdirectory in the repository called Attic. This saves the file history and enables the file to be returned to the repository later.

Use the -P flag to cvs checkout and cvs update to avoid empty directories in your sandbox.

8) Branching
cvs rtag -b lilprogram_0_1 lilprogram
cvs update -r lilprogram_0_1
cvs status source.c

9) Reverting to the main branch
cvs update -A
cvs status source.c

10) Tag as label
edit source.c; cvs commit
cvs tag liltagl_01_0
cvs update -r lilprogram_0_1; cat source.c

11) Merging branches
edit source.c(in -0-1 branch); cvs commit source.c
cvs update -A; cvs update -j lilprogram_0_1; cvs commit source.c

12) export
cvs export -r lilprogram_0_1 lilprogram

13) annotate
cvs annotate source.c

Friday, June 19, 2009

How to configure apache2.2 .x and php5.x?

As a newbie to PHP, I was trying to run a simple PHP file I had written in a web browser, but it displayed the PHP script contents instead of executing it. Then I realized I needed to configure the PHP interpreter for Apache. Here is a link which explains configuring Apache & PHP very well:
http://www.thesitewizard.com/php/install-php-5-apache-windows.shtml

In short, we need to copy the php.ini-recommended file to php.ini inside the PHP directory, modify the httpd.conf file as described in the link, and then restart Apache.
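For reference, the httpd.conf additions boil down to lines like these (a sketch only; the module DLL name matches the Apache 2.2 + PHP 5 combination, c:/php is a placeholder for your PHP directory, and the linked guide has the exact lines):

```apache
LoadModule php5_module "c:/php/php5apache2_2.dll"
AddHandler application/x-httpd-php .php
PHPIniDir "c:/php"
```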

Monday, June 1, 2009

How to set a login timeout for a perforce user?

Perforce supports two methods of authentication: password-based and ticket-based.

How does password-based authentication work?
Password-based authentication is stateless; once a password is correctly set, access is granted for indefinite time periods.

How does ticket-based authentication work?
Ticket-based authentication is based on time-limited tickets that enable users to connect to Perforce servers. Tickets are stored in the file specified by the P4TICKETS environment variable. If this variable is not set, tickets are stored in %USERPROFILE%\p4tickets.txt on Windows, and in $HOME/.p4tickets on UNIX and other operating systems. Tickets are managed automatically by 2004.2 and later Perforce client programs.

Tickets have a finite lifespan, after which they cease to be valid. By default, tickets are valid for 12 hours (43200 seconds). To set different ticket lifespans for groups of users, edit the Timeout: field in the p4 group form for each group. The timeout value for a user in multiple groups is the largest timeout value (including unlimited, but ignoring unset) for all groups of which a user is a member. To create a ticket that does not expire, set the Timeout: field to unlimited.
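For example, to give a group of users a 24-hour ticket lifespan (86400 seconds), the relevant fields of the p4 group form would look like this (the group and member names are made up):

```
Group:   nightly-builders
Timeout: 86400
Users:
        p4admin
```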

Although tickets are not passwords, Perforce servers accept valid tickets wherever users can specify Perforce passwords. This behavior provides the security advantages of ticket-based authentication with the ease of scripting afforded by password authentication.

Logging in to Perforce
To use ticket-based authentication, get a ticket by logging in with the p4 login command:

p4 login

You are prompted for your password, and a ticket is created for you in your ticket file. You can extend your ticket's lifespan by calling p4 login while already logged in. If you run p4 login while logged in, your ticket's lifespan is extended by 1/3 of its initial timeout setting, subject to a maximum of your initial timeout setting.

By default, Perforce tickets are valid for your IP address only. If you have a shared home directory that is used on more than one machine, you can log in to Perforce from both machines by using the command:

p4 login -a

to create a ticket in your home directory that is valid from all IP addresses.


Determining ticket status
To see if your current ticket (that is, for your IP address, user name, and P4PORT setting) is still valid, use the command:

p4 login -s

If your ticket is valid, the length of time for which it will remain valid is displayed.

To display all tickets you currently have, use the command:

p4 tickets

The contents of your ticket file are displayed.


Refer P4 Sys admin Guide for details

Wednesday, May 20, 2009

Perforce Remote depot creation

The Perforce remote depot allows access to files in a depot created on another server instance. For example, if you are connected to a server listening on port 1666 and want to access the contents of a depot created on a server listening on port 1667, a remote depot makes this possible. You can sync, diff, and integrate a remote depot with a local depot.

Example:
Server1=1666 : Branch-name=//users/sguru/...
Server2=1888 : Remote-depot=//remote-sguru/... (this is the new depot that will be created)
I need to make //users/sguru/... on Server1 available as a remote depot on Server2.

Step 1: Create a new depot on Server2 and make the following changes in the depot spec:
Depot: remote-sguru
Type: remote
Address: server1:1666
Map: //users/sguru/...

Step 2: Open the protection table on Server1 and add the following line:
read user remote * //users/sguru/...

Step 3: Grant read access to the //remote-sguru/... depot only to your build or integration managers. Open the protection table on Server2 and add the following:
list user * * -//remote-sguru/...
read user p4admin * //remote-sguru/...
This prevents users other than the admin from accessing the remote depot (Server1) directly through //remote-sguru/..., which minimizes network traffic. Instead, we create another branch as described in Step 4 and let other users access the remote code from there.

Step 4: Integrate //remote-sguru/... to a local location and let the rest of the users access the remote data from there. This reduces network traffic. On Server2, do the following:
p4 integ //remote-sguru/... //users/remote-sguru/...
Then add the following line to the protection table:
read user * * //users/remote-sguru/...
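Step 1's depot spec can also be created without opening an editor, by piping an edited spec form back into p4. A rough sketch using the same names as above (the sed edits assume the default field layout printed by "p4 depot -o"):

```shell
# On Server2: generate the default spec, rewrite the relevant fields,
# and feed the result back in with -i.
p4 depot -o remote-sguru |
  sed -e 's/^Type:.*/Type: remote/' \
      -e 's/^Address:.*/Address: server1:1666/' \
      -e 's|^Map:.*|Map: //users/sguru/...|' |
  p4 depot -i
```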

Tuesday, May 19, 2009

How to start sshd in AIX Machine?

- Login as root
- startsrc -s sshd #This will start sshd process
0513-059 The sshd Subsystem has been started. Subsystem PID is 27216.

- lssrc -s sshd #To check the status of sshd
Subsystem Group PID Status
sshd ssh 253954 active
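The matching SRC commands for stopping and restarting the subsystem later (standard stopsrc/startsrc usage, shown as a sketch):

```shell
stopsrc -s sshd     # stop the sshd subsystem
startsrc -s sshd    # start it again
lssrc -s sshd       # confirm the subsystem is active
```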


We can set environment variables in the file /etc/environment. Variables set in this file override variables defined in shell login scripts such as .bashrc when logging in via ssh.

Monday, May 11, 2009

Perforce spec depot

There is no need to manage Perforce specs (client specs, job specs, label specs, branch specs, the protection table, etc.) manually. Perforce (from the 2005.1 release) provides a special depot, called the spec depot, which manages these specs by automatically creating a new revision under //spec for each spec modification.
How to create spec depot?
- p4 depot spec (or any other name you wish)
- In the Type field, change it to spec (it defaults to local)
- Save it. This creates the //spec depot. From now on, any modifications to specs are stored under //spec. For example, client spec changes are stored in //spec/client/client-name.p4s

How to populate spec depot with change details for specs already created?
Run the command "p4 admin updatespecdepot -a" (available from the 2007.3 release).
To restrict population to a specific type of spec, use:
p4 admin updatespecdepot -s type
Ex: p4 admin updatespecdepot -s client
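Once the spec depot is populated, spec history can be browsed like any other depot file. A sketch (the client name my-client is just a placeholder):

```shell
# Show the revision history of a client spec.
p4 filelog //spec/client/my-client.p4s

# Print an older revision of the spec for comparison.
p4 print -q //spec/client/my-client.p4s#1

# Diff two revisions of the spec.
p4 diff2 //spec/client/my-client.p4s#1 //spec/client/my-client.p4s#2
```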

Monday, February 16, 2009

rsync: command not found error even though rsync installed in local & remote server (Solaris)

I was trying to back up some content from a Solaris machine to a Linux machine using rsync, but it gave a strange error:

> /usr/bin/rsync -avuz --stats someuser@remote-solaris-machine:/export/CVS-xcert/* /export/HCL-CVS
bash: rsync: command not found
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: remote command not found (code 127) at io.c(454) [sender=2.6.9]

Even though rsync is installed on both the local and remote machines, it complains that rsync is unavailable on the Solaris (remote) machine.

What happened here is that rsync could not be found in the standard PATH on the remote machine. The solution for this problem is:

/usr/bin/rsync -avuz --stats --rsync-path=/usr/local/bin/rsync someuser@remote-solaris-machine:/export/CVS-xcert/* /export/HCL-CVS

In this type of problem, we need to specify the remote machine's rsync path explicitly through the --rsync-path argument.
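To find out which path to pass to --rsync-path, you can ask the remote machine directly (assuming ssh access works):

```shell
# The PATH seen by rsync's non-interactive ssh session can differ from
# your login shell's PATH, so locate the remote binary explicitly.
ssh someuser@remote-solaris-machine 'which rsync'
# then pass the reported path via --rsync-path as described above
```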

Wednesday, February 4, 2009

MD5 and SHA1 checksums

A checksum (or hash sum) is a fixed-size value computed from an arbitrary block of digital data for the purpose of detecting accidental errors that may have been introduced during transmission or storage. The integrity of the data can be checked at any later time by recomputing the checksum and comparing it with the stored one. If the checksums do not match, the data was certainly altered.

A cryptographic hash function is a deterministic procedure that takes an arbitrary block of data and returns a fixed-size bit string, the hash value, such that an accidental or intentional change to the data will almost certainly change the hash value. In many contexts, the data to be encoded are often called the "message", and the hash value is also called the message digest or simply digest.

The ideal hash function has four main properties:
* it is easy to compute the hash for any given data,
* it is extremely difficult to construct a text that has a given hash,
* it is extremely difficult to modify a given text without changing its hash,
* it is extremely unlikely that two different messages will have the same hash.

In cryptography, MD5 (Message-Digest algorithm 5) is a widely used cryptographic hash function with a 128-bit hash value. MD5 has been employed in a wide variety of security applications, and is also commonly used to check the integrity of files. Collisions have been found in the MD5 algorithm, meaning that two different files can produce the same MD5 hash, so an MD5 hash is no longer guaranteed to be unique. Therefore, some downloads use SHA-1 for the integrity checksum instead; one example is the Fedora 7 DVD.

The SHA (Secure Hash Algorithm) hash functions are a set of cryptographic hash functions. The three SHA algorithms are structured differently and are distinguished as SHA-0, SHA-1, and SHA-2. No attacks have yet been reported on the SHA-2 variants. SHA-1 produces a 160-bit hash.

sha1sum is a computer program which calculates and verifies SHA-1 hashes. It is commonly used to verify the integrity of files. It (or a variant) is installed by default in most Unix-like operating systems, including Mac OS X. Variants include shasum, sha224sum, sha256sum, sha384sum, and sha512sum, most of which use the larger hash functions of the SHA-2 family. Versions for Microsoft Windows also exist. Some weaknesses have been found in SHA-1; however, sha1sum is still usable for general-purpose file checksumming and is widely considered more secure than MD5 or a CRC.

md5sum is a computer program that calculates and verifies 128-bit MD5 hashes. The MD5 hash (or checksum) functions as a compact digital fingerprint of a file. It is extremely unlikely that any two non-identical files existing in the real world will have the same MD5 hash. The md5sum program is installed by default in most Unix, Linux, and Unix-like operating systems or compatibility layers. BSD variants (including Mac OS X) have a similar utility called md5. Versions for Microsoft Windows do exist. Note that a cryptanalytic attack on the MD5 algorithm has been found, which means a method has been found to calculate a file that will have a given md5sum in less than the time required for a brute force attack. Although it would still be quite computationally expensive to construct such a file, md5sum should not be used in situations where security is important (such as cryptographic hashing). It is still useful for general-purpose file integrity verification, such as protecting against random bit flips.

How to create MD5 checksum?
Let's say you want to check the file getos.sh.
> md5sum getos.sh
02b0ca290739f9d50fa6591e3892d3dd getos.sh

This prints the 128-bit fingerprint string. Compare the string you obtained with the one provided; the provider computes the string in the same way and publishes it on the site.

Alternatively, if you have several files to verify, you can create a text file such as md5sum.txt:
283158c7da8c0ada74502794fa8745eb ubuntu-6.10-alternate-amd64.iso
549ef19097b10ac9237c08f6dc6084c6 ubuntu-6.10-alternate-i386.iso
5717dd795bfd74edc2e9e81d37394349 ubuntu-6.10-alternate-powerpc.iso
99c3a849f6e9a0d143f057433c7f4d84 ubuntu-6.10-desktop-amd64.iso
b950a4d7cf3151e5f213843e2ad77fe3 ubuntu-6.10-desktop-i386.iso
a3494ff33a3e5db83669df5268850a01 ubuntu-6.10-desktop-powerpc.iso
2f44a48a9f5b4f1dff36b63fc2115f40 ubuntu-6.10-server-amd64.iso
cd6c09ff8f9c72a19d0c3dced4b31b3a ubuntu-6.10-server-i386.iso
6f165f915c356264ecf56232c2abb7b5 ubuntu-6.10-server-powerpc.iso
4971edddbfc667e0effbc0f6b4f7e7e0 ubuntu-6.10-server-sparc.iso

The first column is the MD5 string and the second column is the path of the file. To check all of them against the file, do this:

> md5sum -c md5sum.txt

On success, the output will look like this:
...
ubuntu-6.10-desktop-amd64.iso: OK
ubuntu-6.10-desktop-i386.iso: OK
...
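To produce such a checksum file for your own files, simply redirect md5sum's output and verify with -c. A small self-contained sketch using throwaway text files in place of ISOs:

```shell
# Create sample files, record their checksums, then verify the whole set.
echo "alpha" > a.txt
echo "beta" > b.txt
md5sum a.txt b.txt > md5sum.txt
md5sum -c md5sum.txt    # prints: a.txt: OK / b.txt: OK
```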


How to perform sha1 checksum?
The lines below are the Fedora 7 DVD ISO's hash:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

96b13dbbc9f3bc569ddad9745f64b9cdb43ea9ae F-7-i386-DVD.iso

Performing a SHA-1 checksum works similarly to md5sum:
> sha1sum F-7-i386-DVD.iso | grep "96b13dbbc9f3bc569ddad9745f64b9cdb43ea9ae"

Copy the SHA-1 hash and paste it into grep after the pipe. If a line is returned, the ISO passes the checksum; otherwise, too bad :( , you have to download the ISO again.
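An alternative to grepping is to feed the published hash to sha1sum -c on stdin, which also gives a nonzero exit status on mismatch. Shown here on a small local file rather than the actual ISO:

```shell
# Verify a file against a known hash by piping "hash  filename" to -c.
echo "hello" > sample.txt
hash=$(sha1sum sample.txt | awk '{print $1}')
echo "$hash  sample.txt" | sha1sum -c -    # prints: sample.txt: OK
```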

Ref: Wikipedia & linux.byexamples.com

Wednesday, January 28, 2009

Perforce branch locking

You can lock a Perforce branch, depot, or any other folder by using either:
1) Protection table
2) Triggers

Using protection table:
You can lock a branch simply by adding a new rule to the protection table. In the example here, we lock a branch called //dev/envision/esi/... for group xxx by adding an exclusionary line:

read group xxx * -//dev/envision/esi/...


Here is a complex example.
The protection table below locks the branch //release/envision4.1sp1/... for everyone except the user xxxats2. It locks the branch for all the concerned groups: release-xxx-4.0, xxx, yyy-group, and yyy-all.

=write group release-xxx-4.0 * -//release/envision4.1sp1/...
=write group xxx * -//release/envision4.1sp1/...
=write group yyy-group * -//release/envision4.1sp1/...
=write group yyy-all * -//release/envision4.1sp1/...
write user xxxats2 * //release/envision4.1sp1/...

=write group ddds * -//user_information/enVision/4.1_SP1/output/...
=write group uuuu-info * -//user_information/enVision/4.1_SP1/output/...
=write group yyy-all * -//user_information/enVision/4.1_SP1/output/...
write user xxxats2 * //user_information/enVision/4.1_SP1/output/...



Points to remember:

  • The protection table is evaluated from bottom to top, so place new locking rules below the existing rules for them to take effect.
  • =write means only the write privilege is affected; other privileges (such as read and open) remain intact.
  • Use the "p4 protects //release/envision4.1SP1/..." command to find out which groups/users already have write privileges, then remove write privileges from all the concerned groups.


Using triggers:
You can also lock a branch by writing a changelist submission trigger of type change-submit. The trigger logic works something like this: when a changelist comes in for submission, check whether it touches files under the branch you want to lock, and check for a .lock file created in a location accessible to the server. If the .lock file is present, cancel the submission.
Cons: this will definitely slow down changelist submission.
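A minimal sketch of such a trigger script (all paths and the trigger name are hypothetical; the real entry in "p4 triggers" would be shaped like the comment below):

```shell
#!/bin/sh
# lock-check.sh -- cancel submissions to a locked branch while a lock
# file exists. Hypothetical trigger table entry:
#   lock-esi change-submit //dev/envision/esi/... "/p4/triggers/lock-check.sh %changelist%"

LOCKFILE=/p4/locks/esi.lock    # assumed location, readable by the server

if [ -f "$LOCKFILE" ]; then
    echo "Branch //dev/envision/esi/... is locked for submission."
    exit 1    # a non-zero exit cancels the changelist submission
fi
exit 0
```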

Thursday, January 22, 2009

Setting up NFS server (Fedora core10) and client on FC6

NFS means Network File System. NFS allows a system to share directories and files with others over a network. By using NFS, users and programs can access files on remote systems almost as if they were local files.
Benefits of NFS:
*) Local workstations use less disk space because commonly used data can be stored on a single machine and still remain accessible to others over the network.
*) There is no need for users to have separate home directories on every network machine. Home directories could be set up on the NFS server and made available throughout the network.

How does NFS work?
NFS consists of at least two main parts: a server and one or more clients. The client remotely accesses the data that is stored on the server machine. In order for this to function properly a few processes have to be configured and running.
The server has to be running the following daemons: portmap or rpc.portmap, rpc.nfsd, rpc.lockd, rpc.statd, rpc.mountd, rpc.rquotad.

Configuring NFS Server
Through the GUI: Red Hat and Fedora distributions provide a GUI for setting up an NFS server.
Just type "system-config-nfs" in a terminal to invoke the GUI setup, then click the "Add" button at the top left. Provide the directory you want to share and the allowed hosts. If you don't want to restrict hosts, type "*" in the hosts field. Select read-write or read-only permissions and click "OK".

Through command line:
Edit /etc/exports file. An entry in /etc/exports will typically look like this:
directory machine1(option11,option12) machine2(option21,option22)

Example: /opt *(rw,sync)

To make the setup more robust, you can configure the optional files /etc/hosts.allow and /etc/hosts.deny.

After this, make sure the portmapper and nfsd daemons are running on your system (the "rpcinfo -p" command will help here). Then run "exportfs -ra" to force nfsd to re-read the /etc/exports file.
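Putting the command-line steps together, a rough server-side sketch (run as root; the /opt export follows the example above):

```shell
# 1. Declare the export (append; do not clobber existing entries).
echo '/opt *(rw,sync)' >> /etc/exports

# 2. Re-read /etc/exports and list what is currently exported.
exportfs -ra
exportfs -v

# 3. Confirm the required RPC services are registered.
rpcinfo -p | grep -E 'portmapper|nfs|mountd'
```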

For more details refer http://www.linux.org/docs/ldp/howto/NFS-HOWTO/server.html


Configuring NFS Client:

Mounting remote directories:
Make sure the portmapper and, optionally, the rpc.statd and rpc.lockd daemons are running on your system. With these daemons running, you should be able to mount the remote directory from your server just the way you mount a local hard drive, with the mount command. For example:
mount master.foo.com:/opt /mnt/opt or
mount -t nfs master.foo.com:/opt /mnt/opt

The standard form of the mount command is
mount -t type device dir
where -t specifies the vfstype (file system type).

Windows mount: mount -t cifs //128.222.180.22/channel /mnt/envision -o username=,domain=corp

Getting NFS File systems to be mounted at boot time:
NFS file systems can be added to your /etc/fstab file the same way local file systems can, so that they mount when your system starts up. The only difference is that the file system type will be set to nfs and the dump and fsck order (the last two entries) will have to be set to zero. For example
cat /etc/fstab
# device mountpoint fs-type options dump fsckorder
10.31.251.75:/export/kits/envision /export/kits/envision nfs defaults 0 0

Then run the "mount -a" command (and umount if required).

Details: http://www.higs.net/85256C89006A03D2/web/PageLinuxNFSClientSetup. and also man mount

Wednesday, January 21, 2009

Knowing processor information from Solaris Box

Solaris provides a command called psrinfo which displays information about processors.

For example, to find out whether your system runs on an AMD or Intel processor, run the command with the following arguments:

bash-3.00# psrinfo -p -v
The physical processor has 1 virtual processor (0)
x86 (GenuineIntel family 15 model 4 step 8 clock 3391 MHz)
Intel(r) Xeon(tm) CPU 3.40GHz


As always, try out man psrinfo for detailed information.
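A few related invocations that are often handy (standard psrinfo options; exact output format varies by Solaris release):

```shell
psrinfo            # one line per virtual processor, with state and time
psrinfo -p         # number of physical processors
psrinfo | wc -l    # quick count of virtual processors
```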

How to expand vmware disk space?

You can expand the disk space of an existing virtual machine in VMware Workstation using a utility called vmware-vdiskmanager. This utility can be found in your VMware Workstation installation directory, i.e. C:\Program Files\VMware\VMware Workstation.

Suppose you want to increase your disk space from 8 GB to 15 GB; run the command as shown below:

vmware-vdiskmanager.exe -x 15Gb "Red Hat Enterprise Linux 4.vmdk"

Here, the last argument needs to be replaced with your own .vmdk file whose size you want to increase.

If everything goes well, the output from the above command will be:
-----------------------------------------------------------------------------
O/P:
Using log file C:\DOCUME~1\guruss1\LOCALS~1\Temp\vmware-guruss1\vdiskmanager.log
Grow: 100% done.
The old geometry C/H/S of the disk is: 1044/255/63
The new geometry C/H/S of the disk is: 1958/255/63
Disk expansion completed successfully.

WARNING: If the virtual disk is partitioned, you must use a third-party
utility in the virtual machine to expand the size of the
partitions. For more information, see:
http://www.vmware.com/support/kb/enduser/std_adp.php?p_faqid=1647

-----------------------------------------------------------------------------

Make sure the virtual machine is powered off when you execute this command.

For more help run "vmware-vdiskmanager -help"

Thursday, January 15, 2009

Use of .PHONY targets in Makefiles, an example

Targets that do not represent files are known as phony targets; standard examples are "clean" and "all". Declaring a target phony also makes it always out of date.

It is important to note that make cannot distinguish between a file target and a phony target. If by chance the name of a phony target exists as a file, make will associate the file with the phony target name in its dependency graph. If, for example, a file named clean happened to be created, running make clean would yield the confusing message:
$ make clean
make: `clean' is up to date.

Since most phony targets do not have prerequisites, the clean target would always be considered up to date and would never execute.

To avoid this problem, GNU make includes a special target, .PHONY, to tell make that a target is not a real file. Any target can be declared phony by including it as a prerequisite of .PHONY:
.PHONY: clean
clean:
	rm -f *.o

Now make will always execute the commands associated with clean even if a file named clean exists.
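The effect is easy to reproduce from the shell. A self-contained sketch (uses a throwaway directory under /tmp):

```shell
# Build a tiny Makefile with a phony clean target, create a file
# literally named "clean", and show that clean still runs.
mkdir -p /tmp/phony-demo && cd /tmp/phony-demo
printf '.PHONY: clean\nclean:\n\trm -f *.o\n' > Makefile
touch clean dummy.o
make clean                 # runs rm -f *.o despite the "clean" file
test -e dummy.o || echo "dummy.o removed"
```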

Here is a simple Makefile which demonstrates the problem and solution
------ Makefile --------------------------
all: print
print:
	cat clean
clean:
	rm *.o
--------------------------------------------
If you just call make, it outputs the contents of the "clean" file present in the current working directory.
For Ex:
[siddesh@jadoo phony]$ make
cat clean
This is a clean script

Suppose you want to run the clean target to remove .o files; then you need to run "make clean". But you wouldn't get the desired output. Let's see what it reports:

[siddesh@jadoo phony]$ make clean
make: `clean' is up to date.

Corrected Makefile
------------------------------------------
all: print
print:
	cat clean

.PHONY: clean
clean:
	rm *.o
--------------------------------------------
Now if you run "make clean", it will invoke the clean target to remove the .o files:
[siddesh@jadoo phony]$ make clean
rm *.o