Debian Etch has been discontinued for a while now, and in an ideal world everyone would have upgraded to lenny long ago. Still, this is not always possible and there are some systems out there still running etch, some for a good reason, some just because their admins are lazy ;) . Recently we worked on a project with such a system that was still running etch, and the devs on the team told us that everything was working perfectly, but they were no longer able to install new packages using the apt tools. Hmm… let’s see.
And indeed, running apt-get update was giving 404 errors, as the etch files are no longer on the main ftp archive (ftp.debian.org):
apt-get update
Ign http://ftp.debian.org etch Release.gpg
Ign http://ftp.debian.org etch Release
Ign http://ftp.debian.org etch/main Packages
Ign http://ftp.debian.org etch/non-free Packages
Ign http://ftp.debian.org etch/contrib Packages
Err http://ftp.debian.org etch/main Packages
404 Not Found [IP: 130.89.149.226 80]
Err http://ftp.debian.org etch/non-free Packages
404 Not Found [IP: 130.89.149.226 80]
Err http://ftp.debian.org etch/contrib Packages
404 Not Found [IP: 130.89.149.226 80]
Fetched 4B in 1s (3B/s)
Reading package lists... Done
W: Couldn't stat source package list http://ftp.debian.org etch/main Packages (/var/lib/apt/lists/ftp.debian.org_debian_dists_etch_main_binary-i386_Packages) - stat (2 No such file or directory)
W: Couldn't stat source package list http://ftp.debian.org etch/non-free Packages (/var/lib/apt/lists/ftp.debian.org_debian_dists_etch_non-free_binary-i386_Packages) - stat (2 No such file or directory)
W: Couldn't stat source package list http://ftp.debian.org etch/contrib Packages (/var/lib/apt/lists/ftp.debian.org_debian_dists_etch_contrib_binary-i386_Packages) - stat (2 No such file or directory)
W: You may want to run apt-get update to correct these problems
and the apt sources line causing this error was (from /etc/apt/sources.list):
deb http://ftp.debian.org/debian/ etch main non-free contrib
We could not upgrade the machine because of internal constraints, and that was not even within the scope of our project, but since we needed to install some new Debian packages we had to point the apt sources to a new place: archive.debian.org, which continues (and will continue) to host the etch files. Basically our new apt sources line became:
deb http://archive.debian.org/debian/ etch main non-free contrib
and this made it possible to complete our project and install the needed libraries. A few weeks after we finished the project we were hired for a new project to perform the upgrade to lenny, but that is a different story.
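For reference, here is a minimal sketch of making that change from the command line (it assumes the default /etc/apt/sources.list location and that only the ftp.debian.org entries need to be switched):
<code># keep a copy of the original sources.list, just in case
cp /etc/apt/sources.list /etc/apt/sources.list.bak
# point the etch entries at the archive mirror instead of the main one
sed -i 's|http://ftp.debian.org/debian/|http://archive.debian.org/debian/|g' /etc/apt/sources.list
apt-get update</code>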
I hope you found this post useful, in case for some reason you are still running etch and need to find a proper etch mirror to install new software as needed. Of course I would urge you to upgrade to lenny, or even to squeeze if possible, as etch is no longer supported and you no longer receive security patches, etc.
Here is a quick tip on how to remove a list of files. Let’s say you have the list of files inside a file called files_to_remove. Usually I would do something like this:
LIST=`cat files_to_remove`
and then
ls -al $LIST
just to check what is in the list and if it looks good.
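Once the listing looks right, the removal itself is just a matter of passing the same list to rm; a minimal sketch (assuming the file names contain no spaces or other characters that need quoting):
<code># after the ls output has been checked, remove the files
rm -v $LIST
# or do it in one go, without the intermediate variable:
xargs rm -v < files_to_remove</code>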
While working on setting up a backup script for a Subversion repository I encountered an interesting problem. I’ve done this before many times, on different repos, and haven’t seen any issues, but in this case the backup command, which uses the built-in svnadmin hotcopy command, was failing with this error:
svnadmin hotcopy --clean-logs /svn/repo/ /backup/repo/
svnadmin: Can't open file '/svn/repo/db/fsfs.conf': No such file or directory
Hmm… looking at the path in question I could see that the command was not lying and that the file fsfs.conf was indeed not present. I could find a file called fs-type, but not fsfs.conf. So my only assumption was that this was an older repository, created with an older svn version than the one we were currently running. Checking the existing svn version showed that we were running svn 1.6.11,
and the fact that this repo was very old (~2009) made my assumption sound correct. Ok, now what? Well, in this situation my first thought was to use the svnadmin upgrade command; from the manual it looked like this was exactly what I needed to fix the issue:
svnadmin help upgrade
upgrade: usage: svnadmin upgrade REPOS_PATH
Upgrade the repository located at REPOS_PATH to the latest supported
schema version.
This functionality is provided as a convenience for repository
administrators who wish to make use of new Subversion functionality
without having to undertake a potentially costly full repository dump
and load operation. As such, the upgrade performs only the minimum
amount of work needed to accomplish this while still maintaining the
integrity of the repository. It does not guarantee the most optimized
repository state as a dump and subsequent load would.
After I made a manual backup archive of the repo (a simple tar.gz of the repo folder), I ran the upgrade command, sure that this was going to fix my issue:
svnadmin upgrade /svn/repo/
and after it completed, I verified that svn was still working as expected and checked for the fsfs.conf file. But the file had not been created… Hmm… Let’s try the hotcopy command anyway:
svnadmin hotcopy --clean-logs /svn/repo/ /tmp/repo/
svnadmin: Can't open file '/svn/repo/db/fsfs.conf': No such file or directory
the exact same error.
Trying to understand what the fsfs.conf file contains, I simply created a new repository to see if it would get created there. Indeed, my svn 1.6.11 created the file for the new repo, and after copying it into the db directory of my existing repository (as it is basically just a commented-out template) my issue was fixed and the hotcopy command started working (the steps are sketched after the file listing below). Here is the content of the file as created by my svn version, which I copied into the older repo to fix this problem:
<code>cat fsfs.conf
### This file controls the configuration of the FSFS filesystem.
[memcached-servers]
### These options name memcached servers used to cache internal FSFS
### data. See http://www.danga.com/memcached/ for more information on
### memcached. To use memcached with FSFS, run one or more memcached
### servers, and specify each of them as an option like so:
# first-server = 127.0.0.1:11211
# remote-memcached = mymemcached.corp.example.com:11212
### The option name is ignored; the value is of the form HOST:PORT.
### memcached servers can be shared between multiple repositories;
### however, if you do this, you *must* ensure that repositories have
### distinct UUIDs and paths, or else cached data from one repository
### might be used by another accidentally. Note also that memcached has
### no authentication for reads or writes, so you must ensure that your
### memcached servers are only accessible by trusted users.
[caches]
### When a cache-related error occurs, normally Subversion ignores it
### and continues, logging an error if the server is appropriately
### configured (and ignoring it with file:// access). To make
### Subversion never ignore cache errors, uncomment this line.
# fail-stop = true
[rep-sharing]
### To conserve space, the filesystem can optionally avoid storing
### duplicate representations. This comes at a slight cost in performace,
### as maintaining a database of shared representations can increase
### commit times. The space savings are dependent upon the size of the
### repository, the number of objects it contains and the amount of
### duplication between them, usually a function of the branching and
### merging process.
###
### The following parameter enables rep-sharing in the repository. It can
### be switched on and off at will, but for best space-saving results
### should be enabled consistently over the life of the repository.
# enable-rep-sharing = false</code>
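For completeness, the steps I used were roughly the following (a sketch only; the paths are the ones from this setup, and a throwaway repository is created in a scratch directory):
<code># create a scratch repository just to get a pristine fsfs.conf
svnadmin create /tmp/newrepo
# copy the generated config into the old repository's db directory
cp /tmp/newrepo/db/fsfs.conf /svn/repo/db/fsfs.conf
# the hotcopy now works
svnadmin hotcopy --clean-logs /svn/repo/ /backup/repo/</code>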
Hopefully this will help others seeing the same issue I was experiencing.
I recently set up a Joomla site migrated from another host (zipped Joomla files and a mysqldump), and after setting it up on our server I got the error below - not exactly an error, but the contents of the “index.php” file displayed in the browser. I found a few Joomla-related discussions about this issue, but none with a solution.
mark( 'afterLoad' ) : null; /** * CREATE THE APPLICATION * * NOTE : */ $mainframe =& JFactory::getApplication('site'); /** * INITIALISE THE APPLICATION * * NOTE : */ // set the......JDEBUG ? $_PROFILER->mark('afterRender') : null; $mainframe->triggerEvent('onAfterRender'); /** * RETURN THE RESPONSE */ echo JResponse::toString($mainframe->getCfg('gzip'));
I found the problem in the “.htaccess” file, as it contained the line “AddType x-mapp-php5 .php”. I commented out that line, because I suspected it was related to how PHP files are parsed, and this solved my problem. I searched for this directive and it is related to servers that have both PHP 4 and PHP 5 installed - details below, from 1and1 hosting.
By default Apache uses PHP 4 for .php extension. If you don’t want to rename all your
scripts to .php5 you can do the following:
Create a .htaccess file and place the following line AddType x-mapp-php5 .php in it.
This will tell Apache to use PHP 5 instead of PHP 4 for the extension .php in the
directory the .htaccess is placed and all sub-directories under it.
You can use AddHandler x-mapp-php5 .php as an alternative for or instead of
AddType x-mapp-php5 .php
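Since our server is not a 1and1 machine, the fix was simply to leave the directive commented out in the site’s .htaccess:
<code># 1and1-specific handler, not valid on this server - commented out
# AddType x-mapp-php5 .php</code>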
I recently purchased an SSL certificate (GeoTrust QuickSSL Premium) for a client domain (example.com) and I was pleasantly surprised that it also covers the “www” sub-domain (www.example.com). Previously, you had to purchase a certificate for each, or configure your website to redirect to either the www or the non-www version.
I am not sure if this is available on other SSL certificate types from GeoTrust or from other SSL providers.
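If you want to check exactly which names a certificate covers, one way (assuming a standard openssl client is available) is to look at the Subject Alternative Name entries of the served certificate:
<code>echo | openssl s_client -connect www.example.com:443 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"</code>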
I recently found out that there is a free alternative to InnoDB Hot Backup. If you use MySQL with the InnoDB plugin, you probably know that MySQL itself does not provide a tool for making online, non-blocking backups. InnoBase Oy, the makers of InnoDB, do provide a tool, but it’s not free. In fact they charge around $600 per year per server.
The tool that I’m talking about is XtraBackup by Percona. This tool is originally meant to accompany the XtraDB storage engine which in itself is a patched version of InnoDB. XtraBackup will create online non-blocking backups for both XtraDB and InnoDB databases and best of all, it’s free.
For those of you who are not that familiar with MySQL backups, the standard way of doing backups is with mysqldump. This can be done with the database online, but it blocks the tables it is backing up, which is not acceptable for production environments. It also takes a good amount of time to restore a mysqldump, since it writes everything out as SQL statements which then have to be processed again. A binary copy is much faster to restore, but commonly requires the server to be stopped. The best alternative is to create an LVM snapshot of the binary files, but this requires LVM to be set up and enough free disk space to hold the snapshot. All in all it’s nice to have a free alternative, although I have to add the footnote that I haven’t tested it on any decently sized database to check what the performance impact is.
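For reference, a basic backup run with XtraBackup’s innobackupex wrapper looks roughly like the sketch below (the credentials and paths are only examples; check the Percona documentation for the exact options of your version):
<code># take an online, non-blocking backup of the running server
innobackupex --user=root --password=secret /backups/
# prepare the resulting timestamped directory so it is consistent and restorable
innobackupex --apply-log /backups/2010-04-20_12-00-00/</code>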
Page load speed just became a lot more important - Google recently announced that it will use page speed in its SERP rankings. Here is a quote that will make the SEO and marketing folks knock on sysadmin doors:
We encourage you to start looking at your site’s speed (the tools above provide a great starting point) — not only to improve your ranking in search engines, but also to improve everyone’s experience on the Internet.
The post lists a number of tools everyone should be using already, such as YSlow, Google’s own PageSpeed, an online speed waterfall diagram tool, and Webmaster Tools. Webmaster Tools recently added a beta feature which provides data about your site’s speed relative to other sites on the internet.
Here is a sample report:
Performance overview
On average, pages in your site take 6.3 seconds to load (updated on Apr 11, 2010). This is slower than 83% of sites. These estimates are of medium accuracy (between 100 and 1000 data points). The chart below shows how your site’s average page load time has changed over the last few months. For your reference, it also shows the 20th percentile value across all sites, separating slow and fast load times.
The Linux System Admin Blog will have a series of posts on page speed improvement. Stay tuned!
APC (Alternative PHP Cache) is a free, open, and robust framework for caching and optimizing PHP intermediate code. APC is a great tool to speed up a PHP driven site, and I can’t even think of a big site running on a PHP framework without an opcode cache (other good choices are eAccelerator or XCache). Why wouldn’t everyone want to use this? The reason it is not enabled by default everywhere is that in certain situations it can break things. Most people will not see any problems, but still, if you run a server with many clients sharing the same Apache service this might be a problem (as loading the apc module is a server-wide config). This post will show how we can use APC globally and disable it for some vhosts (that might have a problem with APC), or the reverse: enable it only on a specific vhost that needs it.
I’ll assume that you have installed APC already; if this is not the case, it will probably be something as simple as running
pecl install apc
or downloading the archive from pecl and running:
phpize; ./configure; make; make install
The APC extension needs to be enabled either in php.ini or in an included file, with a line like this:
extension=apc.so
there are many other parameters with which APC can be fine-tuned (see the official docs for more info), but without any other change, just with this line, APC will be enabled on all the vhosts on the server.
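As an illustration only, a minimal php.ini snippet with a couple of the commonly tuned settings might look like this (the values are examples, not recommendations; adjust them to your workload and APC version):
<code>extension=apc.so
[apc]
apc.enabled=1       ; opcode cache on for the whole server
apc.shm_size=64M    ; shared memory for the cache (plain "64" on older APC versions)
apc.ttl=7200        ; seconds an unused entry may stay in the cache</code>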
Disabling some vhosts from using APC
- if we want to disable APC for a particular vhost we just have to add to the vhost config or to .htaccess:
php_flag apc.cache_by_default Off
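For example, in a hypothetical per-client vhost (the names and paths below are made up) this would look like:
<code><VirtualHost *:80>
    ServerName client1.example.com
    DocumentRoot /var/www/client1
    # this particular site has problems with APC, so skip caching here only
    php_flag apc.cache_by_default Off
</VirtualHost></code>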
Enabling APC only on some vhosts
- if we want to have APC disabled by default globally we will have in php.ini:
<code>extension=apc.so
[apc]
apc.cache_by_default=0 ; disabled by default
... other apc settings...</code>
and we will enable APC for that particular vhost, in the vhost config or in .htaccess, using:
php_flag apc.cache_by_default On
Hopefully you found this post useful, and it will give you a reason to use APC with more confidence, knowing that you have the granularity to enable or disable it as needed in a shared environment.