Linux Sysadmin Blog

CentOS 4.8 Finally Released!


After a long development time, CentOS 4.8 was finally released on August 21st. It is a good thing that, after the recent problems among the CentOS developers, they were able to pull this off and can now focus on the upcoming 5.4 release.

There are no major changes in this update, mostly bug fixes and security fixes, and it should be a quick and easy upgrade for most people still running the 4.x branch (you should really consider upgrading to 5.x ;) ).

Beyond 4GB Ram on 32bit Linux


Thinking of upgrading your 32-bit Linux machine beyond 4GB of memory? Consider the following:

Does your CPU support Physical Address Extension (PAE) to address beyond 4GB? Here is how to check:

grep pae /proc/cpuinfo
flags     : fpu vme de pse tsc msr **pae** mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe cid xtpr

If you see the “pae” flag returned, install a PAE kernel and reboot. On CentOS 5.x, as root:

yum install kernel-PAE.`uname -m` && reboot

Do you have a single process that may need to address more than 4GB of RAM? If so, you need a 64-bit kernel. Check whether your CPU supports 64-bit instructions:

grep lm /proc/cpuinfo
flags     : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx **lm** constant_tsc arch_perfmon pebs bts pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr lahf_lm ida

If you see the “lm” flag, your CPU supports 64-bit instructions. It is then possible to upgrade your current distro to 64-bit in place, which is normally not supported by most distros, or to reinstall the entire thing from scratch.
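
Putting the two checks together, a minimal shell sketch that reports what the CPU supports:

<code>#!/bin/bash
# Check /proc/cpuinfo flags for PAE and 64-bit (long mode) support.
if grep -qw pae /proc/cpuinfo; then
    echo "pae present: a PAE kernel can address more than 4GB of RAM"
fi
if grep -qw lm /proc/cpuinfo; then
    echo "lm present: the CPU can run a 64-bit kernel"
fi</code>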

Using Wildcards in Nginx Valid_referers


This quick post will show how to allow only certain HTTP referrers to access a location using nginx. This might be useful, for example, if you are using nginx as a static content provider and want to keep everyone but your own sites from hot-linking your images. Doing something like this in nginx is very simple, and we could start with a configuration like this (from nginx.conf):

<code>location /images {
    valid_referers   none  blocked  server_names  mydomain.com www.mydomain.com;
    if ($invalid_referer) {
        return   403;
    }
    # ... else serve the content
}</code>

This works fine in this simple case, but what if we need to list more subdomains, like images.mydomain.com and static.mydomain.com, etc.? It would be nice to be able to use a regexp for this, right? Fortunately nginx has support for this, and it can be done using a valid_referers line like: valid_referers none blocked server_names ~(mydomain.com)

And this will match all the subdomains *.mydomain.com. Going even further, you might want to allow Google as a referrer for your content. But Google has so many subdomains and even different domains (like google.com, google.de, etc.). For this we could add ~(google.) and have our final configuration look like this:

<code>location /images {
    valid_referers   none  blocked  server_names  ~(mydomain.com|google.);
    if ($invalid_referer) {
        return   403;
    }
    # ... else serve the content
}</code>
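
To verify the rule, you can request an image once with an allowed referrer and once with a foreign one (logo.png and the exact hostnames here are hypothetical; curl's -e flag sets the Referer header):

<code>curl -s -o /dev/null -w "%{http_code}\n" -e "http://www.mydomain.com/page" http://static.mydomain.com/images/logo.png  # expect 200
curl -s -o /dev/null -w "%{http_code}\n" -e "http://othersite.com/page" http://static.mydomain.com/images/logo.png      # expect 403</code>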

This simple example shows how powerful the configuration of nginx is and how easy it is to do things that are rather impossible with other similar software.

Using Svn+ssh With a Non-standard Ssh Port


Many people use Subversion over ssh as a simple and secure way to work on remotely hosted svn repositories. This is normally as simple as running: svn co svn+ssh://user@server/repo .

If the remote ssh server is not running on the default ssh port (tcp 22), this needs a little tweaking to get working. I expected that adding a custom entry for the svn server, with the appropriate port, to the /etc/ssh/ssh_config file would make this work on the fly without changing the command line; or, if not, that adding the ssh port in ‘telnet like’ fashion (server:port) would make a difference. Neither of those worked, and in order to get this going I had to dig into the Subversion documentation and find out how to define a special tunnel.

We can define a new tunnel in the svn configuration file (.subversion/config):

<code>[tunnels]
sshtunnel = ssh -p <port></code>

And after this we can use svn as usual, but with an svn+sshtunnel:// URL: svn co svn+sshtunnel://user@server/repo .
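
As a one-off alternative that needs no config editing, Subversion also honors the SVN_SSH environment variable (assuming port 2222 here):

<code>SVN_SSH="ssh -p 2222" svn co svn+ssh://user@server/repo .</code>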

System Admin Day - July 31st


We will celebrate Linux System Administrator Appreciation Day tomorrow. Yes, there is such a thing; if it's in Wikipedia (wiki: System Administrator Appreciation Day), then it must be true!

See what it's all about here:

System Administrator Appreciation Day, also known as Sysadmin Day, SysAdminDay or SAAD, was created by system administrator Ted Kekatos. Kekatos was inspired to create the special day by a Hewlett-Packard magazine advertisement in which a system administrator is presented with flowers and fruit-baskets by grateful co-workers as thanks for installing new printers.

Here is one of our own getting some appreciation from our colleagues.

Max, one of our own, gets nominated as a favorite Linux system administrator by some of his office fans

So don't forget to buy your system administrator a coffee, beer, donut, or whatever he prefers, and don't ask any stupid questions tomorrow.

The system administrator song

And here is a link to another system administrator song, from the UK:

http://www.ukuug.org/sysadminday/

Task on Amazon EBS on CentOS AMI


This is my second activity using AWS - this time, using EBS.

Objectives:

  1. Format a new 10GB EBS volume and mount it on a running instance of the private AMI (created in the first activity)
  2. Set up a MySQL server with the datastore on the EBS partition
  3. Set up the EBS partition to be attached at AMI boot time

Here I will elaborate on the steps (mostly commands) and some issues that I encountered along the way. I also include the script (below) that I used for attaching the EBS volume to the AMI at boot time. Reference here. I will add an indicator showing where I am running each command: either on the controlling machine or on the instance. For variables or values, I assume that you already know how to get them (ec2-describe-instances, ec2-describe-volumes, etc.). If the ec2 commands are not available on your system, make sure you have the EC2 API tools installed and your environment variables configured.

Objective #1: Format the EBS volume and mount it on a running instance

  • Run an instance of the private AMI and take note of the zone (default is us-east-1a - not sure :)).

    controlling machine$: ec2-run-instances -z us-east-1a --key YOURKEYPAIR ami-xxxxx

  • Create an EBS volume of 10GB. Note the use of the same zone so the volume can be attached to the instance above; check the EBS docs for more details on zones.

    controlling machine$: ec2-create-volume -z us-east-1a -s 10

  • Attach the volume to your instance, e.g. as /dev/sdh.

    controlling machine$: ec2-attach-volume -d /dev/sdh -i i-IIII1111 vol-VVVV1111

  • Log in to your instance and format the EBS drive on /dev/sdh. It's your choice what filesystem to use; for my activity I used xfs, as I was advised that it is easier/faster to grow or shrink an xfs filesystem compared to ext3 - and the reference above used xfs.

    controlling machine$: ssh -i ssh_key root@ec2.xxxxx.amazonaws.com   (the host may not be in this format; just refer to the details of your instance)
    instance$: yum install xfsprogs
    instance$: modprobe xfs
    instance$: mkfs.xfs /dev/sdh

  • Mount the EBS volume (the mount point must exist first).

    instance$: mkdir /ebs
    instance$: mount -t xfs /dev/sdh /ebs
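
To confirm the new filesystem is mounted with the expected size:

<code>instance$: df -h /ebs</code>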

Objective #2: Set up a MySQL server with the datastore on the EBS partition

  • Install MySQL on your running instance, edit /etc/my.cnf and set the value of datadir to /ebs (my example), and start MySQL (a minimal my.cnf sketch follows this list).

    instance$: yum install mysql-server
    instance$: vi /etc/my.cnf
    instance$: /etc/init.d/mysqld start

  • Create a sample database to test.

    instance$: mysql
    mysql> create database ebstest;
    mysql> quit
    instance$: ls /ebs/
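
A minimal sketch of the relevant /etc/my.cnf section (an assumption, not the exact file used here; keep whatever else your distribution's my.cnf already has):

<code>[mysqld]
# Point the datastore at the EBS mount instead of the default /var/lib/mysql.
# The datadir must be owned by the mysql user: chown -R mysql:mysql /ebs
datadir=/ebs</code>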

Objective #3: Set up the EBS partition to be attached at AMI boot time

  • I was advised to create an init script that attaches the EBS volume to my running instance, and I was given a sample script (for Debian) that I modified to my needs (for CentOS) and extended a bit. I encountered several issues here, as my init script failed to start correctly - environment variables not available, incorrect paths, etc. - and I ended up bundling four or five times. :) In short, the script (below) does the automation; I only need to add it to my startup. For the process, please check or continue reading the notes/comments in the script below. Btw, I just added the section that starts MySQL inside the init script, but of course you can separate them.

  • After creating an init script with the correct variables/filenames, bundle and register a new AMI. The commands below are a summary from a video tutorial - I forgot the link :) Run help for each command to get details on the options used, e.g. ‘ec2-bundle-vol -h’.

    instance$: cd /mnt
    instance$: mkdir ami
    instance$: ec2-bundle-vol -d /mnt/ami -k /root/.ec2/pk.xxx.pem -c /root/.ec2/cert.xxx.pem -u xxxx-xxxx-xxxx
    instance$: ec2-upload-bundle -b bucket1 -m /mnt/ami/image.manifest.xml -a XXXXXX -s xxxXXXXx
    controlling machine$: ec2-register bucket1/image.manifest.xml

  • Test your new AMI - run a new instance and check that your EBS volume is attached - good luck!

Init Script Here: mountebs
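
For reference, here is a minimal sketch of what such an init script can look like (all IDs, paths, and credentials are placeholders, and it assumes the EC2 API tools are installed on the instance; the linked mountebs script is the real one):

<code>#!/bin/bash
# mountebs - sketch of a CentOS init script that attaches and mounts the
# EBS volume, then starts MySQL.
# chkconfig: 345 85 15
# description: attach and mount the EBS datastore volume

VOLUME_ID="vol-VVVV1111"
DEVICE="/dev/sdh"
MOUNTPOINT="/ebs"
export EC2_PRIVATE_KEY=/root/.ec2/pk.xxx.pem
export EC2_CERT=/root/.ec2/cert.xxx.pem

case "$1" in
  start)
    # The instance can discover its own ID from the EC2 metadata service.
    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    ec2-attach-volume -d "$DEVICE" -i "$INSTANCE_ID" "$VOLUME_ID"
    # Wait until the kernel sees the attached device before mounting.
    while [ ! -e "$DEVICE" ]; do sleep 2; done
    mount -t xfs "$DEVICE" "$MOUNTPOINT"
    /etc/init.d/mysqld start
    ;;
  stop)
    /etc/init.d/mysqld stop
    umount "$MOUNTPOINT"
    ;;
esac</code>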

Enabling Allow_url_include Locally in cPanel


When using cPanel, the way to enable the allow_url_include directive locally (per user) is to create a Virtual Host include:

First create an include file:

/usr/local/apache/conf/userdata/std/2/username/domain.com/custom.conf

Add the directive to custom.conf:

php_admin_flag allow_url_include On

Then run the following to enable the include:

/scripts/ensure_vhost_includes --user=username --verbose
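
Putting the per-user steps together, a minimal sketch (username and domain.com are placeholders):

<code>mkdir -p /usr/local/apache/conf/userdata/std/2/username/domain.com
echo 'php_admin_flag allow_url_include On' > /usr/local/apache/conf/userdata/std/2/username/domain.com/custom.conf
/scripts/ensure_vhost_includes --user=username --verbose</code>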

Alternatively, enabling allow_url_include globally (server-wide) is done by editing /usr/local/lib/php.ini and adding the “allow_url_include = On” directive to the Fopen wrappers section:

;;;;;;;;;;;;;;;;;;
; Fopen wrappers ;
;;;;;;;;;;;;;;;;;;
;Whether to allow the treatment of URLs (like http:// or ftp://) as files.
allow_url_fopen = On
**allow_url_include = On**

and restarting Apache by issuing the service httpd restart command as root.

Waiting for SSH Login Prompt


Are you often waiting over a minute to get an ssh prompt? This can be caused by several things, but more often than not it is a missing PTR record for the server address combined with GSSAPIAuthentication enabled in ssh_config. GSSAPIAuthentication is a Kerberos 5 centralized authentication/authorization mechanism that relies on resolving the hostname for proper operation; when it cannot do so, it tries 3 times before falling back to the next authentication mechanism.

First you need to see where the login process gets hung up:

ssh -vvv server_address
debug1: Authentications that can continue: publickey,gssapi-with-mic,password
debug3: start over, passed a different list publickey,gssapi-with-mic,password
debug3: preferred gssapi-with-mic,publickey,keyboard-interactive,password
debug3: authmethod_lookup gssapi-with-mic
debug3: remaining preferred: publickey,keyboard-interactive,password
debug3: authmethod_is_enabled gssapi-with-mic
debug1: Next authentication method: gssapi-with-mic
debug3: Trying to reverse map address server_address.
debug1: Unspecified GSS failure.  Minor code may provide more information
No credentials cache found
debug1: Unspecified GSS failure.  Minor code may provide more information
No credentials cache found
debug1: Unspecified GSS failure.  Minor code may provide more information
debug2: we did not send a packet, disable method

and check if a PTR record exists:

[max@linux ~]$ dig -x server_address
; <<>> DiG 9.5.1-P2-RedHat-9.5.1-2.P2.fc10 <<>> -x server_address
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 20960
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;sserdda_revres.in-addr.arpa. IN  PTR

;; Query time: 87 msec

Here we see that we are in fact being hung up on the gssapi-with-mic method and that there is no PTR record for the host. The quickest and simplest way to resolve this is to disable the gssapi-with-mic authmethod globally on the client. In RedHat/Fedora Linux, edit /etc/ssh/ssh_config and make sure you have an uncommented “GSSAPIAuthentication no” line for Host *:

# Host *
#   ForwardAgent no
#   ForwardX11 no
#   RhostsRSAAuthentication no
#   RSAAuthentication yes
#   PasswordAuthentication yes
#   HostbasedAuthentication no
     GSSAPIAuthentication no
#   GSSAPIDelegateCredentials no

If you are using a per-host configuration, be sure to put this at the top of the file so it takes priority over the defaults below it:

Host server_name
HostName server_address
Port 22
User max
GSSAPIAuthentication no
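
As a quick sanity check before (or after) editing the config, you can also disable GSSAPI for a single connection from the command line:

<code>ssh -o GSSAPIAuthentication=no server_address</code>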

Syntax Error on MySQL Replication Slave (Error 1064)


Here's an interesting one: what if you have a MySQL replication setup and the slave stops replicating with a syntax error? The slave should be executing the exact same commands as the master, right? Well, as it turns out, yes and no. There is a bug in MySQL that was fixed in 5.0.56 according to the bug report. It's a long story and worth the read, but what happens is that a timeout in the network connection between the master and the slave can cause the master to resend part of a packet that it sent before. The slave handled the previous packet correctly, so it is not expecting a resend, and as a result it starts writing garbage to the relay log (which is where it stores the statements it will execute). The SQL command gets mangled in the process, and when the slave tries to execute it, voila, a syntax error.

To fix this you can use the CHANGE MASTER command to point the slave at the master binlog file and position shown in the SHOW SLAVE STATUS output. Make sure you use the Relay_Master_Log_File and Exec_Master_Log_Pos fields, since they indicate the position in the master binlog that the slave actually believed it was executing. Keep in mind that corruption and its effects are hard to predict; it will definitely be useful to compare the master and slave afterward using the Maatkit tools.
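
A minimal sketch of the fix (the file name and position are placeholders; take the real values from SHOW SLAVE STATUS on the broken slave):

<code>mysql> STOP SLAVE;
mysql> CHANGE MASTER TO
    ->   MASTER_LOG_FILE='mysql-bin.000123',  -- value of Relay_Master_Log_File
    ->   MASTER_LOG_POS=456789;               -- value of Exec_Master_Log_Pos
mysql> START SLAVE;</code>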

As some more background, the slave's error log will probably show an error like this, indicating there was a network error:

Error reading packet from server: Lost connection to MySQL server during query (server_errno=2013)

And finally, if you do read the entire bug thread, you will notice that the original developer of MySQL also has an opinion on this.