Linux Sysadmin Blog

Install TrueCrypt on Fedora 10


TrueCrypt is an open source encryption application. It can create hidden encrypted containers and file systems/volumes, supports cascading ciphers, encrypts and decrypts files on the fly, and is portable and cross-platform. Be sure to read the FAQ and documentation before fully committing your files to TrueCrypt.

  • Install the build dependencies via yum:
sudo yum install fuse fuse-devel wxBase wxGTK wxGTK-devel
  • Download the source code package from http://www.truecrypt.org/downloads2 and extract it:
tar -zxvf TrueCrypt\ 6.2a\ Source.tar.gz
cd truecrypt-6.2a-source
  • Download the RSA Security Inc. PKCS #11 Cryptographic Token Interface header files:
wget ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v2-20/pkcs11.h
wget ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v2-20/pkcs11f.h
wget ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v2-20/pkcs11t.h
  • Build the package:
make
  • Copy the binary to /usr/bin:
cd Main
sudo chown root:root truecrypt && sudo cp truecrypt /usr/bin
  • Copy the icon files to the icon repository:
cd ../Resources/Icons
sudo chown root:root * && sudo cp * /usr/share/icons

One last order of business is to set up your sudoers file so that TrueCrypt does not complain about sudo requiring a tty when mounting encrypted volumes. There are two ways of doing that:

  1. The less secure way – disable requiretty globally by adding an exclamation mark in front of requiretty:
# Defaults specification
#
# Disable "ssh hostname sudo ", because it will show the password in clear.
#         You have to run "ssh -t hostname sudo ".
#
Defaults    !requiretty
  2. The more secure way, especially for multi-user environments – create a user alias called WHEELUSERS and assign users to it:
## User Aliases
## These aren't often necessary, as you can use regular groups
## (ie, from files, LDAP, NIS, etc) in this file - just use %groupname
## rather than USERALIAS
# User_Alias ADMINS = jsmith, mikem
User_Alias      WHEELUSERS = max

Then create a Defaults entry for the user alias that disables requiretty:

# Defaults specification
#
# Disable "ssh hostname sudo ", because it will show the password in clear.
#         You have to run "ssh -t hostname sudo ".
#
Defaults    requiretty
# added for truecrypt requiretty complaint
Defaults:WHEELUSERS     !requiretty
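
As the comments in the sudoers file note, a regular system group can be used in place of a User_Alias. A hedged variant of the same Defaults entry, assuming your TrueCrypt users are all in the wheel group:

```
Defaults:%wheel     !requiretty
```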

The video below walks through creating a TrueCrypt desktop shortcut and an encrypted container.

Ffmpeg Scratchy Sound


This issue was reported by one of our developers, who was having trouble finding the correct parameters for video conversion using ffmpeg. The problem was scratchy sound when the conversion was done on our server (CentOS). She tried it locally (Windows) using Pazera Converter, loaded the same parameters used on the server, and got the desired result.

We checked the ffmpeg versions: locally we had SVN-r15451 while the server had SVN-r14991. So we upgraded ffmpeg on the server to the latest (at this time SVN-r19313), which solved the problem. I’m not sure which version of ffmpeg has this fix, but at least SVN-r15451 and up should be fine. :)

You can tell the difference from the sample videos (first 5 seconds) below - both were encoded using the same parameters.

Using Ffmpeg SVN-r14991

Using Ffmpeg SVN-r15451 and SVN-r19313

Install Apache Solr Multicore for Drupal


Yesterday I received a new installation request from our developers to install the ApacheSolr module for Drupal. Check this link for more details on Apache Solr Search Integration. Since this was new to me, I spent some time searching and doing test installations. To keep it short, below is my setup on our shared hosting server running CentOS with cPanel.

Type: Multi-core (for possible use on other Drupal sites)
Java Servlet Container: Jetty (built in to Solr)
Drupal version: 6
Java: 1.6

I based this guide largely on this Drupal page, and made this summary for my own future reference.

Process:

You need to have Java installed first.

1) ApacheSolr Drupal Module

1.1) Download and install the ApacheSolr module to your site(s) - the traditional download, extract, and enable method.

1.2) Download SolrPhpClient (a PHP library) and extract the files inside your ApacheSolr Drupal module directory. Example:

sites/all/modules/apachesolr/SolrPhpClient

2) Solr:

2.1) Select a directory for your Solr files; it must not be accessible from the web. Example:

/home/solr

2.2) Download nightly build of Solr and extract to your selected directory. Example:

/home/solr/apache-solr-nightly

2.3) Copy the example directory to another directory, e.g. drupal. Example:

cp -r /home/solr/apache-solr-nightly/example /home/solr/apache-solr-nightly/drupal

2.4) Copy the schema.xml and solrconfig.xml files from your ApacheSolr Drupal module:

cp /path_to_site/sites/all/modules/apachesolr/schema.xml /home/solr/apache-solr-nightly/drupal/schema.xml
cp /path_to_site/sites/all/modules/apachesolr/solrconfig.xml /home/solr/apache-solr-nightly/drupal/solrconfig.xml

2.5) Copy /home/solr/apache-solr-nightly/drupal/multicore/solr.xml to /home/solr/apache-solr-nightly/drupal/solr/solr.xml.

2.6) Create a directory for each site that will use ApacheSolr inside /home/solr/apache-solr-nightly/drupal/solr, and copy /home/solr/apache-solr-nightly/drupal/conf into each of them. Example:

mkdir /home/solr/apache-solr-nightly/drupal/solr/site_drupalsite1
cp -r /home/solr/apache-solr-nightly/drupal/conf /home/solr/apache-solr-nightly/drupal/solr/site_drupalsite1/
mkdir /home/solr/apache-solr-nightly/drupal/solr/site_drupalsite2
cp -r /home/solr/apache-solr-nightly/drupal/conf /home/solr/apache-solr-nightly/drupal/solr/site_drupalsite2/

2.7) Edit /home/solr/apache-solr-nightly/drupal/solr/solr.xml (the file copied in step 2.5) and add the details/path of your site(s). Example:

<cores adminPath="/admin/cores">
<core name="drupalsite1" instanceDir="site_drupalsite1" />
<core name="drupalsite2" instanceDir="site_drupalsite2" />
</cores>
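
Steps 2.6 and 2.7 can be scripted for any number of sites. This is only a sketch: the site names are examples, and SOLR_HOME points at a scratch directory so it can be run safely - substitute your real /home/solr/apache-solr-nightly/drupal/solr path (and uncomment the conf copy).

```shell
#!/bin/sh
# create one core directory per site and generate a matching <cores> block
# (site names and SOLR_HOME are illustrative)
SOLR_HOME=/tmp/solr-demo          # your real path: .../drupal/solr
SITES="drupalsite1 drupalsite2"

for site in $SITES; do
  mkdir -p "$SOLR_HOME/site_$site"
  # cp -r /home/solr/apache-solr-nightly/drupal/conf "$SOLR_HOME/site_$site/"
done

{
  echo '<cores adminPath="/admin/cores">'
  for site in $SITES; do
    printf '<core name="%s" instanceDir="site_%s" />\n' "$site" "$site"
  done
  echo '</cores>'
} > "$SOLR_HOME/solr.xml"

cat "$SOLR_HOME/solr.xml"
```

Adding a site later is then just a matter of appending its name to SITES and re-running the script.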

2.8) Start the Jetty servlet container.

cd /home/solr/apache-solr-nightly/drupal/
java -jar start.jar

2.9) Finally, visit the Drupal admin settings for the ApacheSolr module to set the correct Solr path.
Example:

Drupal Site1:  /solr/drupalsite1
Drupal Site2:  /solr/drupalsite2

That’s it - we now have our complete ApacheSolr search integration. Check the ApacheSolr documentation for more details on using this module.

The Solr server was started manually here; to run it on start-up, or to be able to start/stop/restart the server, refer to this blog post.

To add new sites (new sites with ApacheSolr module) just repeat steps 1 and 2.6 - 2.9, and restart the Solr server.

My First Amazon EC2 Setup (CentOS AMI)


Here’s my first try working with Amazon Web Services. The tasks covered are the following:

  • getting familiar with AWS, especially EC2 and S3
  • working with an EC2 instance using a CentOS image - search, start/stop, and some customization of an instance
  • creating (private) AMIs and starting instances from them
  • S3 buckets - uploading files

I based my instructions on a previous post, Howto Get Started With Amazon EC2 API Tools, so I won’t give details on some steps. This post mainly covers the steps taken to complete the objectives above.

To start, I signed up for an account, enabled the EC2 and S3 services, and generated an X.509 certificate. Next, I selected a test server running CentOS 5.3 with cPanel and installed Java (OpenJDK 1.6, via yum) as a requirement.

Then download the EC2 API tools, extract them to your working directory (mine is /myhome/.ec2/), and upload your private key and X.509 certificate. Don’t forget to follow the filename format of cert-xxx.pem and pk-xxx.pem.

Export the shell variables (listed in the previous post), specifying the correct private key and X.509 paths. Then run source /myhome/.bashrc or open a new terminal to load the new environment variables.

Set up the EC2 keypair. At first I used the certificate from a different account and got the error below:

Client.AuthFailure: AWS was not able to validate the provided access credentials

I searched for this error and one suggestion was to chmod your certificate and key files to 600, but that didn’t help me. My problem was with our account: one of my teammates had changed our account password and probably generated new keys. Anyway, this is where I signed up for a new account and proceeded without issues.

Search for the AMIs to use. Following the steps listed in the instructions, I tried several AMIs (start/stop processes). I observed that some AMIs took longer to start compared to others, but I have no idea why :). Btw, you can also search for AMIs, and start/stop them, from the AWS Management Console (EC2 Dashboard).

My next task was to create my private AMIs, and here’s a good video tutorial on customizing an existing AMI and creating your own AMI from it. At this point I needed to set up my S3 bucket (directory) to store my AMI. There’s a Firefox addon called S3Fox that my friend suggested, but unfortunately I couldn’t install it on my Firefox due to some errors. I found and tried BucketExplorer for creating my S3 bucket. Btw, this one is commercial and you can try it for 30 days. I haven’t checked for other apps. :)

Back to creating my private AMI based on the above video: I ran into an issue with the ec2-bundle-vol command, as it is not included in the AMI that I used, so I searched for other AMIs that include the EC2 tools and found one from RightScale (CentOS5V1_10.img.manifest.xml).

After this I was able to complete my private AMI and start a new instance from it using the above steps without any issues.

Moving Drupal / Civicrm Sites


In this guide I will provide the steps for moving Drupal sites with CiviCRM - with Drupal and CiviCRM in one database or in separate databases. I will outline the steps and sample commands but won’t give much detail, so feel free to ask if you need any clarification. Also, refer to my previous guide, ”HowTo Duplicate or Copy Drupal Site”, for detailed instructions, commands, and sample shell scripts.

Moving Files:

  • Copy the Drupal files, preserving attributes (ownership, permissions, etc). Example: cp -rp drupal_source drupal_destination. Then review the permissions on sites/default/files, sites/default/files/civicrm, and other directories.

  • Update references to the Drupal URL, path, and database details (name, user, pass, and host). Sample commands using find and perl below:

find /path/to/drupal -type f -exec perl -pi -e "s/example.com/example2.com/g" {} \;
find /path/to/drupal -type f -exec perl -pi -e "s/public_html\/example/public_html\/example2/g" {} \;
find /path/to/drupal -type f -exec perl -pi -e "s/db_name/db_name2/g" {} \;
find /path/to/drupal -type f -exec perl -pi -e "s/db_user/db_user2/g" {} \;
find /path/to/drupal -type f -exec perl -pi -e "s/db_pass/db_pass2/g" {} \;

Moving Database/s:

Case 1: Combined CiviCRM and Drupal Database.

  • Create sql dump of source database.
    Example: mysqldump -Q -udb_user -pdbpass db_name > db_name.sql
  • Import to destination database. Example: mysql -udb_user2 -pdbpass2 db_name2 < db_name.sql
  • Update references to the Drupal URL, path, and database details (name, user, pass, and host) in the non-CiviCRM tables. You can use phpMyAdmin to export these tables, do the search/replace in your local editor, and upload the updated SQL. You can also dump the tables from the command line (though you’ll have a long list of tables), run the same replacements as above, and re-import the updated SQL file.
  • Update CiviCRM configurations from Drupal Admin section. You need to update the “Resource URLs” and “Directories”.
    CiviCRM Admin Settings: Administer Civicrm > Global Settings > Directories (or use the direct url: /civicrm/admin/setting/path?reset=1) CiviCRM Admin Settings: Administer Civicrm > Global Settings > Resource Urls (or use the direct url: /civicrm/admin/setting/url?reset=1)
  • Optional: You can empty Sessions and Cache tables if you want.

Case 2: Separate CiviCRM and Drupal Database (recommended install for CiviCRM).

The process for this setup is almost the same as Case 1; the difference is in the import process for the databases. The complete steps are below.

  • Create SQL dumps of the source databases. Examples:
    mysqldump -Q -udb_user -pdbpass db_name_drupal > db_name_drupal.sql
    mysqldump -Q -udb_user -pdbpass db_name_civicrm > db_name_civicrm.sql
  • Import the CiviCRM database directly.
    Example: mysql -udb_user2 -pdbpass2 db_name2_civicrm < db_name_civicrm.sql
  • Update CiviCRM configurations from Drupal Admin section. You need to update the “Resource URLs” and “Directories”.
    CiviCRM Admin Settings: Administer Civicrm > Global Settings > Directories (or use the direct url: /civicrm/admin/setting/path?reset=1) CiviCRM Admin Settings: Administer Civicrm > Global Settings > Resource Urls (or use the direct url: /civicrm/admin/setting/url?reset=1)
  • Update references to the Drupal URL, path, and database details (name, user, pass, and host) in the Drupal database dump:
    perl -pi -e "s/example.com/example2.com/g" db_name_drupal.sql
    perl -pi -e "s/public_html\/example/public_html\/example2/g" db_name_drupal.sql
  • Import the Drupal database. Example: mysql -udb_user2 -pdbpass2 db_name2_drupal < db_name_drupal.sql
  • Optional: You can empty Sessions and Cache tables if you want.

That’s All!

Cacti and MySQL Counters Problem


We recently came across a problem with Cacti and the MySQL counters. For those of you who don’t know how to integrate MySQL statistics into Cacti, have a look at this: http://code.google.com/p/mysql-cacti-templates/. These templates are a great way to gain some insight into how your MySQL database servers perform. The templates are actually PHP scripts that query the databases through a variety of commands like SHOW STATUS and SHOW ENGINE INNODB STATUS.

The issue that we encountered was that some statistics like the InnoDB buffer pool activity were not displaying anything for one server. Other servers were displaying it just fine and other statistics for that server were also fine.

Among other things, the SHOW ENGINE INNODB STATUS command shows deadlock information pertaining to the last deadlock the InnoDB engine encountered. In some cases this information is quite extensive, and that causes a problem: the output of this command is one giant text field with a limit of 64KB. If the deadlock information is very large, other information gets cut off, which means certain statistics are lost. The easy fix is to restart the database server, but if that is not an option you can always use the innotop utility to wipe the deadlock information by causing a small deadlock.

Tracing Memory Leaks With Pidstat


Finding application memory leaks is an important part of keeping systems stable, and leaks are often very hard to track down. Monitoring application memory consumption can be done in a few different ways; the easiest is a simple capture of ps output appended to a log file, triggered via cron at the desired interval. In this example we will track sshd memory usage via a shell script.

#!/bin/bash
# append sshd's memory usage (RSS and %MEM) to a log file
# replace "logname" with the path of your log file

PID=`cat /var/run/sshd.pid`
ps -p $PID -o pid -o rss -o %mem -o cmd >> logname
exit
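
The script above can be scheduled with cron. A sample crontab entry running it every 15 minutes (the script path is illustrative):

```
*/15 * * * * /usr/local/bin/sshd_memlog.sh
```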

The log file output will look like this:

PID RSS %MEM CMD
2607  1036  0.0 /usr/sbin/sshd

Note the RSS column: if the values keep increasing with use of the application, that would indicate a memory leak.
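
Checking the RSS column by eye gets tedious as the log grows. A hedged sketch that flags any sample where RSS grew over the previous one - the log lines below are fabricated to match the format above:

```shell
#!/bin/sh
# build a tiny demo log in the same PID/RSS/%MEM/CMD format
LOG=/tmp/sshd_rss_demo.log
printf 'PID RSS %%MEM CMD\n' > "$LOG"
printf '2607 1036 0.0 /usr/sbin/sshd\n' >> "$LOG"
printf '2607 1052 0.0 /usr/sbin/sshd\n' >> "$LOG"

# skip the header; warn whenever RSS is higher than the previous sample
awk 'NR > 1 { if (prev != "" && $2 > prev) print "RSS grew:", prev, "->", $2; prev = $2 }' "$LOG"
# prints: RSS grew: 1036 -> 1052
```

Pointed at the real log, the same awk one-liner gives a quick growth history instead of a wall of numbers.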

Another way to do the same thing is with pidstat, which is part of the sysstat package. The package can be installed via yum or aptitude but may not include pidstat; if so, download and install or build the latest version from here.

pidstat is a better way to track an application's resources because it has built-in polling and can combine a process's children's statistics into the total. See the pidstat man page for options. In the script below we use pidstat to track memory usage for sshd, polling 12 times at 5 minute intervals, then e-mailing a report and writing to a log file. This script can also be run via cron.

#!/bin/bash
# track memory usage of sshd using pidstat and send report
# http://www.linuxsysadminblog.com - MaxV

export PID=/var/run/sshduse.pid
export TIMESTAMP=`date +%Y%m%d_%H%M%S`
export LOGDIR=/var/log/
export SSHD_LOG="${LOGDIR}sshd_memUsage_${TIMESTAMP}"
export SSHD_PID=`cat /var/run/sshd.pid`
export MAILTO=user@domain.com

if [ ! -e ${PID} ]; then
    # create pid file
    echo $$ > ${PID}

    # log start of script to /var/log/messages
    /usr/bin/logger "Starting SSHD Memory Usage Tracker"

    # pidstat portion: poll 12 times, 5 minutes apart
    /usr/bin/pidstat -r -p ${SSHD_PID} 300 12 >> ${SSHD_LOG}

    # e-mail report
    mail -s "SSHD memory usage ${TIMESTAMP}" ${MAILTO} < ${SSHD_LOG}

    # clean up pid file
    if [ -f ${PID} ]; then
        rm -f ${PID}

        # log end of script to /var/log/messages
        /usr/bin/logger "Ending SSHD Memory Usage Tracker"
    fi
    exit 0
else
    exit 0
fi

The output of this script:

Linux 2.6.18-6-686 (hostname)       06/16/09        _i686_  (2 CPU)

14:21:26          PID  minflt/s  majflt/s     VSZ    RSS   %MEM  Command
14:21:56         2564      0.00      0.00    4932   1108   0.05  sshd
14:22:26         2564      0.00      0.00    4932   1108   0.05  sshd
14:22:56         2564      0.00      0.00    4932   1108   0.05  sshd
14:23:26         2564      0.00      0.00    4932   1108   0.05  sshd
14:23:56         2564      0.00      0.00    4932   1108   0.05  sshd
14:24:26         2564      0.00      0.00    4932   1108   0.05  sshd
14:24:56         2564      0.00      0.00    4932   1108   0.05  sshd
14:25:26         2564      0.00      0.00    4932   1108   0.05  sshd
14:25:56         2564      0.00      0.00    4932   1108   0.05  sshd
14:26:26         2564      0.00      0.00    4932   1108   0.05  sshd
14:26:56         2564      0.47      0.00    4932   1108   0.05  sshd
14:27:26         2564      0.00      0.00    4932   1108   0.05  sshd
Average:         2564      0.04      0.00    4932   1108   0.05  sshd

From the output we can see that the “RSS” values are not increasing as time progresses, which means sshd is not leaking memory here - a solid piece of coding.

HowTo: Get Started With Amazon EC2 Api Tools


This article is meant to be a quick guide introducing the things needed to get started with Amazon EC2. All this information can be found in the EC2 API docs; this is not meant to replace the documentation, just to show the essentials in a clear and short form.

Getting Started

First of all you will need an Amazon AWS account with the EC2 service enabled; in case you don’t have this already, now is the time to create your account. Once you do that you can safely return to this doc ;-)

Once you have your account working, while still on the AWS site, go and create a new X.509 certificate (under the AWS Access Identifiers page, in the X.509 certificate section near the bottom, click Create New). Once this is done, download the private key file and X.509 certificate locally.

EC2 API tools

Next you will have to download and install the Amazon EC2 API tools on the system (controlling machine) that will be used to start your EC2 army of servers and control their usage. You will want to use the latest version (2009-05-15 at this time), as it supports all the features Amazon offers for the EC2 service.

The only real dependency of the EC2 API tools is Java (at least version 1.5), so we will want to install that first. If you are running Debian you can easily do this by running (for lenny): aptitude install sun-java6-jre, while for etch you will have to use: aptitude install sun-java5-jre. For other distributions you can either use their packaging mechanism (in case they provide sun-java packages) or just download the binary from Sun and install it manually.

Extract the EC2 API tools (a zip archive called ec2-api-tools.zip) and move them under a folder of your preference. I like to use ~/.ec2 for this, but you can use any folder you prefer. Also copy the private key and X.509 certificate into the same directory. Those files will look like cert-xxx.pem and pk-xxx.pem.

Next we will have to export some shell variables. A good place to put this is in ~/.bashrc:

export EC2_HOME=~/.ec2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=$EC2_HOME/pk-xxx.pem
export EC2_CERT=$EC2_HOME/cert-xxx.pem
#Java home for debian default install path:
export JAVA_HOME=/usr
#add ec2 tools to default path
export PATH=~/.ec2/bin:$PATH

Finally source the file to have the changes active in your current shell session: source ~/.bashrc or just open a new shell before starting to use the API tools.
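
Before trying any ec2-* command, it is worth confirming the variables point at key files that actually exist. A throwaway check - the /tmp paths below only stand in for your real ~/.ec2 files:

```shell
#!/bin/sh
# stand-in for a real ~/.ec2 directory so the check can be run anywhere
EC2_HOME=/tmp/ec2-demo
mkdir -p "$EC2_HOME"
touch "$EC2_HOME/pk-xxx.pem" "$EC2_HOME/cert-xxx.pem"
export EC2_PRIVATE_KEY=$EC2_HOME/pk-xxx.pem
export EC2_CERT=$EC2_HOME/cert-xxx.pem

# report whether each exported path resolves to a file
for f in "$EC2_PRIVATE_KEY" "$EC2_CERT"; do
  if [ -f "$f" ]; then echo "ok: $f"; else echo "missing: $f"; fi
done
```

A "missing" line here usually means a typo in ~/.bashrc or a key file that was never copied over.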

EC2 Keypair

We will need to create a keypair that will be used to connect via ssh to the EC2 instances we will be using. We will use the ec2-add-keypair utility to create the key and register it with Amazon: ec2-add-keypair my-keypair. This will print out the private key, which we have to save in a file:

cat > ~/.ec2/id_rsa-my-keypair
(paste the private key content, then press Ctrl-D)

chmod 600 ~/.ec2/id_rsa-my-keypair

Running your first EC2 instance

Amazon EC2 uses the concept of AMIs (Amazon Machine Images). Any EC2 instance is started from an AMI. You can either use standard, public AMIs or create and customize your own private images. Creating or modifying AMIs is beyond the scope of this article, but as general information, this is done using special AMI tools. Also, before building your AMI, you will want to decide whether you want a ‘small’ type of image (i386 OS) or a ‘large’ type of instance (64-bit OS). These are described at http://aws.amazon.com/ec2/instance-types/

For the scope of this article we will find a standard public image and start one instance of it, to see that all is working properly with the EC2 API tools. You can see all the public images using: ec2-describe-images -a (over 2,300 images ;) ). You should probably grep the result to find anything useful. There are many good public images to use, for example the Alestic ones (for Debian and Ubuntu). Having the AMI id of the image we want to use, we are ready to start our first EC2 instance: ec2-run-instances ami-e348af8a -k my-keypair, which will start a small instance with a 32-bit Debian lenny server image from alestic.com.

ec2-describe-instances - this will describe the status of all the running instances, with their hostname, instance id, etc.

ec2-authorize default -p 22 - in order to connect to your instance you will need to customize the ‘default’ firewall rules for your account. The above rule will allow ssh on port 22 from anywhere. If you want to open http traffic you will have to add a rule like this: ec2-authorize default -p 80

Finally we can ssh to the ec2 instance using: ssh -i ~/.ec2/id_rsa-my-keypair root@ec2-XXX-XXX-XXX-XXX.z-2.compute-1.amazonaws.com where ec2-XXX-XXX-XXX-XXX.z-2.compute-1.amazonaws.com is the actual hostname of the instance as obtained from ec2-describe-instances.

Note: don’t forget to stop your instance when you no longer need it. EC2 is a pay-as-you-use service, so if you leave your instance running you will be billed for it ;-). You can do this by running shutdown inside the instance or by using: ec2-terminate-instances i-yourinstance, then verify with ec2-describe-instances that the instance is indeed stopped.

The next step is to create/customize your own EC2 AMI images based on your needs; this will be covered in a future article. Hopefully you found this article useful and it gets you on track quickly with the Amazon EC2 API tools.

Moving Magento Sites


This is my first guide on moving a Magento site to another site or server. I’ve completed a few Magento site transfers, as we have recently been developing and hosting Magento sites. I also encountered several issues in transferring sites and searched for different approaches on other blog/forum sites, such as installing a new Magento instance on the destination server. I checked the official Magento guide on moving a site, but I haven’t tried it yet since my current process works fine. This is similar to my procedure for moving other sites such as osCommerce, Drupal, Wordpress, etc. I assume you have ssh access to your server.

Database:

Export/dump the source database. You can do it via phpMyAdmin or with mysqldump from the command line.

mysqldump -udbuser -pdbpass dbname > filename.sql

Update your database dump file before importing it into the destination database. You need to change the references to the source url/domain, the path/home directory, and database details such as the database name, in case you have tables that reference other tables (foreign keys, etc). You can open the dump file in your editor and do the search and replace; I do this directly with perl/sed.

perl -pi -e "s/source.url/destination.url/g" filename.sql
perl -pi -e "s/path\/to\/source\/install/path\/to\/destination\/install/g" filename.sql

Import your modified database dump into the destination database using phpMyAdmin or the command line.

mysql -udbuser -pdbpass dbname < filename.sql

Files:

Copy all of the Magento files to your destination site or server, preserving attributes.

Empty contents of var/cache and var/session directories

As with the database, update references to the url, path, database details, and other variables that differ between your source and destination server (e.g. db host). Example:

find /path/to/destination/site -type f -exec perl -pi -e "s/source.url/destination.url/g" {} \;
find /path/to/destination/site -type f -exec perl -pi -e "s/path\/to\/source\/install/path\/to\/destination\/install/g" {} \;
find /path/to/destination/site -type f -exec perl -pi -e "s/source_dbname/destination_dbname/g" {} \;
find /path/to/destination/site -type f -exec perl -pi -e "s/source_dbuser/destination_dbuser/g" {} \;
find /path/to/destination/site -type f -exec perl -pi -e "s/source_dbpass/destination_dbpass/g" {} \;

Check that the var/ and media/ directories are world-writable (777).
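
That check can be scripted. A sketch using a scratch directory so it is safe to run as-is; point MAGENTO_ROOT at your real install instead (GNU stat assumed):

```shell
#!/bin/sh
# scratch Magento root so the check is safe to run as-is;
# substitute your real install path for MAGENTO_ROOT
MAGENTO_ROOT=/tmp/magento-demo
mkdir -p "$MAGENTO_ROOT/var" "$MAGENTO_ROOT/media"
chmod 777 "$MAGENTO_ROOT/var" "$MAGENTO_ROOT/media"

# print the octal mode of each directory Magento needs to write to
for d in var media; do
  echo "$d: $(stat -c %a "$MAGENTO_ROOT/$d")"
done
# prints:
# var: 777
# media: 777
```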

That’s all - check your new site, and watch out for links still pointing to your old url.