Linux Sysadmin Blog

Debian FTP Archive for Etch - archive.debian.org

Debian Etch has been discontinued for a while now, and in an ideal world everyone would have upgraded to lenny a long time ago. Still, this is not always possible, and there are some systems out there still running etch, some for a good reason, some just because their admins are lazy ;) . Recently we worked on a project involving such a system, and the devs on the team told us that everything was working perfectly but that they were no longer able to install new packages using the apt tools. Hmm… let’s see.

And indeed, running apt-get update was giving a 404 error, as the etch files are no longer present in the main ftp archive (ftp.debian.org):

apt-get update
Ign http://ftp.debian.org etch Release.gpg
Ign http://ftp.debian.org etch Release
Ign http://ftp.debian.org etch/main Packages
Ign http://ftp.debian.org etch/non-free Packages
Ign http://ftp.debian.org etch/contrib Packages
Err http://ftp.debian.org etch/main Packages
404 Not Found [IP: 130.89.149.226 80]
Err http://ftp.debian.org etch/non-free Packages
404 Not Found [IP: 130.89.149.226 80]
Err http://ftp.debian.org etch/contrib Packages
404 Not Found [IP: 130.89.149.226 80]
Fetched 4B in 1s (3B/s)
Reading package lists... Done
W: Couldn't stat source package list http://ftp.debian.org etch/main Packages (/var/lib/apt/lists/ftp.debian.org_debian_dists_etch_main_binary-i386_Packages) - stat (2 No such file or directory)
W: Couldn't stat source package list http://ftp.debian.org etch/non-free Packages (/var/lib/apt/lists/ftp.debian.org_debian_dists_etch_non-free_binary-i386_Packages) - stat (2 No such file or directory)
W: Couldn't stat source package list http://ftp.debian.org etch/contrib Packages (/var/lib/apt/lists/ftp.debian.org_debian_dists_etch_contrib_binary-i386_Packages) - stat (2 No such file or directory)
W: You may want to run apt-get update to correct these problems

and the apt sources line causing this error was (from /etc/apt/sources.list): deb http://ftp.debian.org/debian/ etch main non-free contrib

We could not upgrade the machine because of internal constraints, and that was not even in the scope of our project, but since we needed to install some new Debian packages we had to point the apt sources to a new place: archive.debian.org, which continues (and will continue) to host the etch files. Our new apt sources line became: deb http://archive.debian.org/debian/ etch main non-free contrib and this made it possible to complete our project and install the needed libraries. A few weeks after we finished, we were hired for a new project to perform the upgrade to lenny, but that is a different story.
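For reference, here is roughly what the change and the follow-up commands looked like (the package name below is just a placeholder):

<code># /etc/apt/sources.list - point etch at the archive mirror
deb http://archive.debian.org/debian/ etch main non-free contrib

# refresh the package lists and install what is needed
apt-get update
apt-get install some-needed-package   # placeholder name</code>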

I hope you found this post useful in case, for some reason, you are still running etch and need a working etch mirror to install new software. Of course, I would urge you to upgrade to lenny, or even to squeeze if possible, as etch is no longer supported and no longer receives security patches.

HowTo Remove a List of Files

Here is a quick tip on how to remove a list of files. Let’s say you have the list of files inside a file called files_to_remove. Usually I would do something like this:

LIST=`cat files_to_remove`

and then

ls -al $LIST

just to check what is in the list and if it looks good.

And finally:

rm -vf $LIST
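As a side note, if the list is long or contains filenames with spaces, something along these lines may be safer (a sketch assuming GNU xargs and one filename per line in files_to_remove):

<code># read one filename per line, so names containing spaces stay intact
xargs -d '\n' rm -vf < files_to_remove</code>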

Svnadmin: Can’t Open File ‘svn/db/fsfs.conf’: No Such File or Directory

While working on setting up a backup script for a Subversion repository I encountered an interesting problem. I’ve done this many times before, on different repos, and haven’t seen any issues, but in this case the backup command, which uses the built-in svnadmin hotcopy command, was failing with this error:

svnadmin hotcopy --clean-logs /svn/repo/ /backup/repo/
svnadmin: Can't open file '/svn/repo/db/fsfs.conf': No such file or directory

Hmm… looking at the respective path I could see that the command was not lying: the file fsfs.conf was indeed not present. I could find a fs-type file, but no fsfs.conf. So my assumption was that this was an older repository, created with an older svn version than the one currently running. Checking the installed svn version I got:

svn --version
svn, version 1.6.11 (r934486)
compiled Apr 20 2010, 00:24:22

and the fact that this repo was very old (~2009) made my assumption sound correct. OK, now what? In this situation my first thought was to use the svnadmin upgrade command; from the manual it looked like exactly what I needed to fix this issue:

From svnadmin help upgrade:

“upgrade: usage: svnadmin upgrade REPOS_PATH

Upgrade the repository located at REPOS_PATH to the latest supported schema version.

This functionality is provided as a convenience for repository administrators who wish to make use of new Subversion functionality without having to undertake a potentially costly full repository dump and load operation.  As such, the upgrade performs only the minimum amount of work needed to accomplish this while still maintaining the integrity of the repository.  It does not guarantee the most optimized repository state as a dump and subsequent load would.”

After I made a manual backup archive of the repo (a simple tar.gz of the repo folder) I ran the upgrade command, sure it was going to fix my issue:

svnadmin upgrade /svn/repo/

and after it completed, I verified that svn was still working as expected and checked for the fsfs.conf file. But it had not been created… Hmm… Let’s try the hotcopy command anyway:

svnadmin hotcopy --clean-logs /svn/repo/ /tmp/repo/
svnadmin: Can't open file '/svn/repo/db/fsfs.conf': No such file or directory

the exact same error.

To understand what the fsfs.conf file contains, I simply created a new repository to see if it would be generated. Indeed, my svn 1.6.11 created the file for the new repo, and after copying it into my existing repository (it is basically just an empty, fully commented-out file) my issue was fixed and the hotcopy command started working. Here is the content of the file as created by my svn version, which I copied into the older repo to fix this problem:

<code>cat fsfs.conf
### This file controls the configuration of the FSFS filesystem.

[memcached-servers]
### These options name memcached servers used to cache internal FSFS
### data.  See http://www.danga.com/memcached/ for more information on
### memcached.  To use memcached with FSFS, run one or more memcached
### servers, and specify each of them as an option like so:
# first-server = 127.0.0.1:11211
# remote-memcached = mymemcached.corp.example.com:11212
### The option name is ignored; the value is of the form HOST:PORT.
### memcached servers can be shared between multiple repositories;
### however, if you do this, you *must* ensure that repositories have
### distinct UUIDs and paths, or else cached data from one repository
### might be used by another accidentally.  Note also that memcached has
### no authentication for reads or writes, so you must ensure that your
### memcached servers are only accessible by trusted users.

[caches]
### When a cache-related error occurs, normally Subversion ignores it
### and continues, logging an error if the server is appropriately
### configured (and ignoring it with file:// access).  To make
### Subversion never ignore cache errors, uncomment this line.
# fail-stop = true

[rep-sharing]
### To conserve space, the filesystem can optionally avoid storing
### duplicate representations.  This comes at a slight cost in performance,
### as maintaining a database of shared representations can increase
### commit times.  The space savings are dependent upon the size of the
### repository, the number of objects it contains and the amount of
### duplication between them, usually a function of the branching and
### merging process.
###
### The following parameter enables rep-sharing in the repository.  It can
### be switched on and off at will, but for best space-saving results
### should be enabled consistently over the life of the repository.
# enable-rep-sharing = false</code>
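To sum up, the workaround boiled down to something like this (paths are the same placeholders used above; the temporary repository is only there to get a freshly generated fsfs.conf):

<code># create a throwaway repo with the current svnadmin to get a new-format fsfs.conf
svnadmin create /tmp/newrepo
cp /tmp/newrepo/db/fsfs.conf /svn/repo/db/fsfs.conf

# the hotcopy backup now runs without the error
svnadmin hotcopy --clean-logs /svn/repo/ /backup/repo/</code>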

Hopefully this will help others seeing the same issue I was experiencing.

Google 500 Error

Funny - for those of you who have not seen this yet, here is what you get. I got it while trying to unlink accounts on YouTube:

500 Internal Server Error

Sorry, something went wrong.

A team of highly trained monkeys has been dispatched to deal with this situation.

If you see them, show them this information: 9tE7ZMgxBSriv_77QysO1ltAPvPBKMi_nAq0HsNCM4reIvdRU5qCmvvn-uG9 x4XTgNjGQxteXotgBV7IeeugS_eQq17zaz5OoKdyub6gP7ilQgI3bUWbZQQ7 VX6fbW3FBjOrkkOmvC1ROJHEZkysx_o6a4Y8n_vXzFL3ymU_i87bXN8RBzuk l2JNYFAGdoNaLq_Q-bPXixNlQPoLwIAKECq7ZHGQPCDMe8tdkZCu6QE3Lz1v dF5jgpnm-qmwZTPj695svH5zsCh9QkmD9SMsUVj10fETMVqgHwV6HL1XXtfS w0uSnunW5hdVOMZ85VlQmH8w0lemC2k9C37y3yX69IyK_2NSeu1FE3F6nB90 XLHxG9BkSASeeo9z2BNKuDWoED_p9SJubi-xJ4vl1cPO5yXcwhoDWJ8icF-W 89IBGxsXYOoBshN-kULVBl0qCLx-OAL9b7cwIOpAi10NahEZzhjqsCLAYbh_ 55mSawO0JyeQOMGvDH1smY-rm7uyCwoW5gjGf04C_s0pi-gHLduRf1DSlHMj o11Lxme77acBzZG-gbW_Yvm9TClYQjHbbMEore2qmcRsGc7-PpcjG8I4fmjc UwXd1jc87ELIafKFhbEfKKSOAEAzu5OjTPLgsexW0oL9LuxEcxAYu6rugY6Q rlawjjaOloYkQmiKCmIKjKVa8TB3GLn1Q6U00GCpJ0U53s8YnFWFAK9y8XJy uuPQzDzxpMWKa11FYzxgiSKr_JyDRSDh1Dg65hXyRGcf-Hs0mcdckvOFOWwD K0dFbrcgxPqXVgsVpEjhxjVJnAftt3AKePsaPcKzW8k3BxlBGsyqWisk_gQ9 DuysZLb9jAI5zZVIGvDbN0uGM9pc6ARffEn3MkVHqdAnnzZ7WusTZ0bMT-vf dTGlcycgcjx1StBKEcNhTYc1F_1FCf_ONmrMQHWv91y3HIV4b1vf3mZSIsVO wm2uf0WjBFWr2uLWLuD2f504OptdEe4qxHhtVPGnbZeVRTJ39JX4nMNScDg1 K2hkc9-YOiAhjgImpMhh2WdZvlOMA3t_NsiUDNHbOW5NogoxxMUZDmG0mTtz oFvH8lVWv9BuHqsMHurzuo-yDFTj_90Z1ephEDor3sFPqX41u3mfRXxUJxCx kS6cwQ5HwS1xnojbAGMu5O5ES0tHzIdUcyZs73GW-6V21fHI2vSkyyrcrVf6 WYOBeI4EBMmYZhuh9vRwzorKmj3Bim2vf-GRhseqY2Blkbt6Wo2A0Un0Q7Pc Rev33zueEQz8IIVG6xVG8SMkyZSV1737Afc3amMSLtzTN1-i3rrGhLXk6rF6 p7PWdC_6UqNvFjenQ69EkE9iFYboyRMisoy2eE0_gfbSAlFnjqp4oaLwYO5y St1SKE_v8DHXcCmVQDEzubtmrJ9M6ARwTxvQFT_hJ7rXNsapPg9j_dnzHcT0 -5xpiJV9tfUDgq1iBndScPLluK0IYfwtfOtH5BLCiD_foBFBNq9b2vPO4_Fs GZKEVq4DCupWYCby3VIZAP1GYeixzpHtmBp1P7g5vTTfaq0ENfslcCiI4QQg 4AamEM14L6YMQbUr_C7hC375hzbyrQJ1ZXi7nCazu_USD5CnxdVaEb3NKKTl K-3CfrsO6QpwerA-LmYim0ys1fdG3xrSRy1ewyYXgBGcS4o5nyTceLae2_4c ool3SlAKOJl68q09ziQSubSjGpLaOj8Pxuo8QAwhl9dvLzJKQ6HneQiG7OdU mRZ_ONm9_7pH3iJdulfWR5sa-2uytUmzce4fNTKDU_JWHhUIZbsndxU5_NO3 jFtTJK_eysUBnx2WY7nSQlbfD6FkVpznQ0ZesbaZHK2gs903Bak_8y9jic01 gX1dhfkRuFgoB6mq6w4vPDGUNFj99AAWl1HiSQBtv75tONpwn8EF_qvy2417 sC_nzYgPuZzDz_phKWS-HO3Zw-DNrt9Eo8-RJ3b1bzFfU0PVp3vEAPpFHsWS G3k2m6KhrD0BDbPMWyYwTsW0F_Np1qFG6ulr48HqkgexnXKg_MrG2rLHd_FL -hwtu-0p-nMOSSPx1kdzve-rqiXVBbUqaj5QR2tSq24Flf0HkpDJ-tfGeVKc rtJ-U1eYWZJho9L0MIaqBlOXgCtqB8lVrHlq8_LuCKMZGKsbn5HowL58Ug-a zF-EAd7eMuZxjdoZqqiMOXJMK8A56MJFq5GQuvhw_P9tstCczq9688Qh6vUN OYBiZ39m7L_5FoeN_u3A1NX2FUrb5ocXh8DWC21qTHaL3i0yEKWxZN3RMEY7 BMwO4I9pqYmNVTNHceRw9kjoXyp6qh-Yrf_8yRrb-bf3buk7uldHQjOa4qDw yMFLTjejxiVpOJWYlymy5bGCGdCJDgf4_F6fYshPVUi6Rai7lF3DDuwJNquu OwhO6q3b0pubxrw7w64Z9eazfybD2ZrvyVZJnowRHV7O2Ixb0fK016BW2cXq _7V_fnM55BH5sr97xdJDSRX27WmduvDVqPnRqj2nicvWgaGnIvbsGAKJJXn2 7rICyih2gzF-U-d9YCBktgoCpTk9up_aXqT_oOqkTHwKnt4ksaDPkcwh3laj 9E-9rDEgBZ6Fbn8UpHJrxkxEyta1T1Mf115nJd3YjstAAtpdsZ2afGbyFzvS dUMkySkLUSDVHJuKnQCaqOZRSQ96zVBc0m8SD5QNKUh0gj0voIMriUAuz_yh Xl7KadRx4g8PiL8z94CSeBhsa8P_zwHZvAhWpnXqT1YDndz1HCmYy94gFD0F xG2SmJD7GwGp_8wBAsYMCvQOH-n8keovGuFulosU2UivnVKnmL51mI7XMrpY F7n6pOactiFAilO27zqXI9iIKYA8ELwbdRyhD3yWXedqSOArEIcSc7HakyNc

Joomla Site Error: “Mark( ‘afterLoad’ ) : Null; /** * CREATE the APPLICATION…”

I happened to set up a Joomla site moved from another host (zipped Joomla files plus a mysqldump), and after setting it up on our server I got the error below - not exactly an error, but the raw contents of the index.php file. I found a few Joomla-related discussions about this issue, but no solution. mark( 'afterLoad' ) : null; /** * CREATE THE APPLICATION * * NOTE : */ $mainframe =& JFactory::getApplication('site'); /** * INITIALISE THE APPLICATION * * NOTE : */ // set the ...... JDEBUG ? $_PROFILER->mark('afterRender') : null; $mainframe->triggerEvent('onAfterRender'); /** * RETURN THE RESPONSE */ echo JResponse::toString($mainframe->getCfg('gzip'));

I found the problem in the .htaccess file, as it contained the line AddType x-mapp-php5 .php. I commented out that line, since it is related to how PHP files are parsed, and this solved my problem. I searched for this directive and it is specific to servers with both PHP 4 and PHP 5 installed - details below from 1and1 hosting.

By default Apache uses PHP 4 for .php extension. If you don’t want to rename all your scripts to .php5 you can do the following: Create a .htaccess file and place the following line AddType x-mapp-php5 .php in it.

This will tell Apache to use PHP 5 instead of PHP 4 for the extension .php in the directory the .htaccess is placed and all sub-directories under it.

You can use AddHandler x-mapp-php5 .php as an alternative for or instead of AddType x-mapp-php5 .php
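For reference, the change in the migrated site’s .htaccess was essentially just this (the directive is shown commented out; your file will of course contain other rules as well):

<code># host-specific handler mapping copied over from the old (1and1) server;
# commenting it out lets the new server use its own default PHP handler
# AddType x-mapp-php5 .php</code>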

SSL Certificate Registration Now Includes “Www” Subdomain.

Good News! :)

I recently purchased an SSL certificate (GeoTrust QuickSSL Premium) for a client domain (example.com), and I was surprised that it also covered the “www” sub-domain (www.example.com). Previously, you needed to purchase two SSL certificates or configure your website to redirect to either the www or the non-www version.

I am not sure if this is available for other SSL types from GeoTrust, or from other SSL providers.
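If you want to check which hostnames a certificate actually covers, a quick way is to inspect its Subject Alternative Name list with openssl (www.example.com below is just a placeholder):

<code># fetch the certificate from the live server and list the names it covers
openssl s_client -connect www.example.com:443 < /dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"</code>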

Free Alternative to InnoDB Hot Backup

I recently found out that there is a free alternative to InnoDB Hot Backup. For those of you using MySQL with the InnoDB plugin you probably know that MySQL does not provide a tool for making online non-blocking backups. InnoBase Oy, the makers of InnoDB, do provide a tool but it’s not free. In fact they charge around $600 per year per server.

The tool that I’m talking about is XtraBackup by Percona. This tool is originally meant to accompany the XtraDB storage engine which in itself is a patched version of InnoDB. XtraBackup will create online non-blocking backups for both XtraDB and InnoDB databases and best of all, it’s free.

For those of you who are not that familiar with MySQL backups, the standard way of doing backups is with mysqldump. This can be done with the database online, but it locks the tables it is backing up, which is not acceptable for production environments. It also takes a good amount of time to restore a mysqldump, since it writes everything out as SQL statements which then have to be re-executed. A binary copy is much faster to restore but commonly requires the server to be stopped. The best alternative is to create an LVM snapshot of the binary files, but this requires LVM to be set up and enough disk space to hold the snapshot. All in all it’s nice to have a free alternative, although I have to add the footnote that I haven’t tested it on any decently sized database to check what the performance impact is.
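In case it helps, here is a rough sketch of what a backup run looks like with the innobackupex wrapper script that ships with XtraBackup (user, password and paths are placeholders, and exact options may vary between versions):

<code># take an online, non-blocking backup; innobackupex creates a timestamped
# subdirectory under the target path
innobackupex --user=backupuser --password=secret /backup/mysql/

# "prepare" the copied files so they are consistent and ready to restore
innobackupex --apply-log /backup/mysql/TIMESTAMP/</code>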

Google Will Use Site Performance and Page Load Speed in SERP Ranking - Sysadmin SEO Here We Come

Page load speed just became a lot more important - Google recently announced that it will use page speed in its SERP rankings. Here is a quote that will make the SEO and marketing folks knock on sysadmin doors:

We encourage you to start looking at your site’s speed (the tools above provide a great starting point) — not only to improve your ranking in search engines, but also to improve everyone’s experience on the Internet.

The post lists a number of tools everyone should already be using, such as YSlow, Google’s own Page Speed, an online waterfall-diagram speed test, and Webmaster Tools. Webmaster Tools recently added a beta feature which provides data about your site’s speed relative to other sites on the internet.

Here is a sample report:

[Image: Google Webmaster Tools page load speed chart]

Performance overview: On average, pages in your site take 6.3 seconds to load (updated on Apr 11, 2010). This is slower than 83% of sites. These estimates are of medium accuracy (between 100 and 1000 data points). The chart below shows how your site’s average page load time has changed over the last few months. For your reference, it also shows the 20th percentile value across all sites, separating slow and fast load times.

Linux Sysadmin Blog will have a series on page speed improvement. Stay tuned!

Enable/Disable APC on Virtual Host Level

APC (Alternative PHP Cache) is a free, open, and robust framework for caching and optimizing PHP intermediate code. APC is a great tool for speeding up a PHP-driven site, and I can’t think of a big site running on a PHP framework without an opcode cache (other good choices are eAccelerator or XCache). So why wouldn’t everyone want to use it? The reason it is not enabled by default everywhere is that in certain situations it can break things. Most people will never see any problems, but if you run a server with many clients sharing the same Apache service this can be an issue, since loading the apc module is a server-wide config. This post shows how to use APC globally while disabling it for certain vhosts (ones that might have a problem with APC), or the reverse: keep it disabled globally and enable it only on a specific vhost that needs it.

I’ll assume that you have installed APC already; if not, this will probably be something as simple as running pecl install apc, or downloading the archive from PECL and running: phpize; ./configure; make; make install

The APC extension needs to be enabled either in php.ini or in an included file, with a line like this: extension=apc.so. There are many other parameters that can be used to fine-tune APC (see the official docs for more info), but without any other change, this line alone enables APC on all the vhosts on the server.

Disabling APC for some vhosts - if we want to disable APC for a particular vhost we just have to add this to the vhost config or to .htaccess: php_flag apc.cache_by_default Off

Enabling APC only on some vhosts - if we want to have APC disabled by default globally we will have in php.ini:

<code>extension=apc.so
[apc]
apc.cache_by_default=0 ; disabled by default
... other apc settings...</code>

and then enable APC for a particular vhost, either in the vhost config or via .htaccess, using: php_flag apc.cache_by_default On
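As an illustration, here is roughly how that could look inside an Apache vhost definition (assuming mod_php; the vhost name and paths are made up):

<code><VirtualHost *:80>
    # hypothetical site that should get the opcode cache
    ServerName app.example.com
    DocumentRoot /var/www/app

    # turn on APC caching just for this vhost
    php_flag apc.cache_by_default On
</VirtualHost></code>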

Hopefully you found this post useful and this will give you a reason to use APC with more confidence knowing that you have the granularity to disable/enable it as needed in a shared environment.