Okay, I’ll make this short: we have several servers running FFmpeg, and we found out that one of them was having a problem with video conversion, as it did not add the video length to the output file. This issue was caused by an older version of Flvtool2, version 1.0.5 RC6 and below, so I upgraded my installation to the latest release, which is 1.0.6 final (the latest version at the time of this post).
You can check for the latest release at RubyForge.
The upgrade process is easy and the same as the installation process; if you need help, visit our detailed guide on “Installing ffmpeg and its components” and look for the Flvtool2 section.
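The steps are roughly the following; the tarball name here is an assumption, so use whatever the current release archive on RubyForge is called:
tar xzf flvtool2-1.0.6.tgz
cd flvtool2-1.0.6
ruby setup.rb config
ruby setup.rb setup
ruby setup.rb install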
If you’re still having problems after the Flvtool2 upgrade, check your Ruby installation, as you may need to update it as well. If you’re on a Cpanel server you can use Cpanel’s Ruby install script at /scripts/installruby.
Beer money and an open source ad network don’t really have anything to do with each other, other than the fact that the sys admins of Promet Host are counting on buying some beer this summer with the [Ad Bard] earnings. That’s right, all proceeds this sprint will be used to buy us some beer.
Here is what the good folks at [Ad Bard] are all about:
At Ad Bard, we believe that advertisements can be an effective way for FLOSS oriented websites to generate regular income while remaining useful, relevant, and non-obnoxious.
Our advertising community is entirely built with free/libre and open source software, with all involved algorithms and schemas freely available for public scrutiny.
It’s also worth noting that their site runs on the Drupal platform.
Linux System Admin Blog became an accepted member late last year and we plan on running some ads shortly. So when you see our ads, be encouraged that they are part of the FLOSS community and will do us some good by funding a badly needed happy hour. Here is one right now…
This is a slight departure from our regular programming. Instead of just concentrating on the sys admin side of things I want to show how to add a Nagios check to an existing application. In this case we have a Java application for which we want to monitor whether it is running or not. Later on we can make this more detailed by monitoring error codes in the application but for the moment let’s keep it simple.
Configuring Nagios
On the Nagios end of things we need to define a command to perform a check on a specific port of the server where the application is running. Add a line like this to the objects/commands.cfg file of your Nagios installation:
define command{
    command_name    check_your_application_name
    command_line    $USER1$/check_tcp -H $HOSTADDRESS$ -p $ARG1$ -e "This application is alive and well"
}
The -e parameter checks for specific text that should be returned by the application; we can use this later on to check for more detailed information. Next we need to add a service to Nagios that uses this command. We do this by adding the following lines to the objects/localhost.cfg file. To keep this short I left out some lines that configure the frequency of the checks and the types of alerts.
define service {
    use                  generic-service
    host_name            your_server_name
    service_description  your_service_name
    check_command        check_your_application_name!2222
}
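Before relying on the Nagios scheduler, you can exercise the check by hand with the check_tcp plugin once the application-side listener (covered in the next section) is in place; the plugin path here assumes a default Nagios install, so adjust it for yours:
/usr/local/nagios/libexec/check_tcp -H your_server_name -p 2222 -e "This application is alive and well"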
Creating a listener port in Java
In the second part I will show you the actual code to add to your application. Because this is a blog post I left out the package definition and the imports, but other than that the class itself is usable. To add the check to the Java app we need to add a listener thread to the application. We do this by creating a class derived from Thread. This listener will accept connections on a port specified by the main application and respond to any incoming data with a preset text. We really don’t care about the input on this end, so any input will be ignored:
public class NagiosChecker extends Thread {
    // Server socket
    private ServerSocket srv;

    // Flag that indicates whether the listener is running or not.
    private boolean isRunning = true;

    // Constructor.
    public NagiosChecker(ServerSocket srv) {
        this.isRunning = true;
        this.srv = srv;
    }

    // Method for terminating the listener
    public void terminate() {
        this.isRunning = false;
    }

    /**
     * This method is executed when the thread is started and performs all the operations.
     */
    public void run() {
        try {
            // Wait for connections from the client.
            while (isRunning) {
                Socket socket = srv.accept();
                // Open a reader to receive (and ignore) the input
                BufferedReader rd = new BufferedReader(new InputStreamReader(socket.getInputStream()));
                // Write the status message to the output stream
                try {
                    BufferedWriter wr = new BufferedWriter(new OutputStreamWriter(socket.getOutputStream()));
                    wr.write("This application is alive and well");
                    wr.flush();
                } catch (IOException e) {
                    System.out.println(e.getMessage());
                }
                // Close the connection since we really don't care about the input
                rd.close();
                socket.close();
            }
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}
In case you’re still reading this, you’re probably interested in how to call this class. The following code should be executed during the initialization of the application. It creates the actual server socket for port 2222 and starts the listener thread. After this the listener will run until the application terminates.
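A minimal sketch of that initialization, assuming the same hard-coded port 2222 used in the Nagios service definition, could look like this:
try {
    // Open the server socket the listener will accept connections on
    ServerSocket srv = new ServerSocket(2222);
    // Start the listener thread; it keeps running until the application exits
    new NagiosChecker(srv).start();
} catch (IOException e) {
    System.out.println(e.getMessage());
}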
Linux systems traditionally keep the definition of their timezone in /etc/localtime. This is a binary file with the timezone info, and if we want to change it we need to find the appropriate timezone file from /usr/share/zoneinfo and copy it over the one from /etc or just link to it. Once you change it, you will need to restart any daemons or applications that use the timezone as they might still use the old one.
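For example, to switch a box to US Central time (the same zone used in the /etc/sysconfig/clock example below), something along these lines should do it:
cp /usr/share/zoneinfo/America/Chicago /etc/localtime
# or, if you prefer a symlink:
# ln -sf /usr/share/zoneinfo/America/Chicago /etc/localtime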
Still, on RHEL/CentOS based systems this is not enough, and even if everything apparently works as expected, there might be some applications still using the old timezone. This happens if they read the timezone definition from the RHEL-specific file /etc/sysconfig/clock:
cat /etc/sysconfig/clock
ZONE="America/Chicago"
UTC=true
ARC=false
We also need to update the ZONE field in /etc/sysconfig/clock to be sure that all occurrences of the old timezone are changed and everything on the system will use the new setting.
Note: you don’t need to restart the system to activate this change, but you will have to restart the applications using the timezone so they can read the updated information.
It’s a bit of a long title for a blog post, but the point I want to make is that not every query optimization is aimed at making the query faster. As a case in point, we have a client that has a web shop, and their network traffic between the web servers and the database servers has been sizable to say the least. There was a good amount of old code that probably worked pretty well when the shop just started out and had small amounts of data. Now that the shop has grown, certain queries suddenly don’t perform so well anymore.
On any weekday around lunch time, network traffic between web and data servers was around 80Mb/s. On a 100Mb/s connection that is dangerously high, and to address this we’ve been in the process of modifying queries that do a SELECT *. There is hardly ever a reason to do a SELECT * except when you have very flexible code that automatically deals with extra columns. That was not the case with this application.
On Cyber Monday we found that traffic for this shop was touching 100Mb/s and web site performance really suffered. As a temporary measure we switched the database server to a 1Gb/s connection but both web servers stayed on 100Mb/s connections.
Looking at the slow query log and mtop revealed a ton of similar queries:
# User@Host: username[username] @ [10.0.0.123]
# Query_time: 2 Lock_time: 0 Rows_sent: 1948 Rows_examined: 2047
SELECT * FROM sites, sites_to_bundle WHERE sites_to_bundle.sites_id = sites.sites_id AND sites.site_name ='shop1';
In and by themselves these queries look pretty harmless. 2000 rows is not that much. Maybe more than is needed, but not enough to choke up the network either. The problem turns out to be that this query was executed several times for each page in the checkout process. The database server was not overloaded since it had the whole result set in its cache, but it had to transfer 2MB of data for each request. When a developer investigated, it turned out that only a single two-digit value of the entire result set was used.
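The fix was simply to select only the value the page actually uses. The column name below is hypothetical, but the rewrite was along these lines:
SELECT sites_to_bundle.bundle_id FROM sites, sites_to_bundle WHERE sites_to_bundle.sites_id = sites.sites_id AND sites.site_name = 'shop1';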
We rewrote the query and pushed it to production that same day. Yes, I know that a previous blog post tells us not to do that, but in this case I’m glad we did. Network traffic dropped below 10Mb/s and the web site was flying. Below is the Cacti graph that shows the difference. Shortly after 15:00 we implemented the optimization and traffic dropped dramatically.
While investigating some FTP transfer issues we realized that there was something wrong with the logs generated by vsftpd. The timestamps reported in the vsftpd log were wrong, and the fact that they were always 5 hours behind the actual time made us think this was caused by a timezone issue. The system running this was the latest CentOS 5.2 with the stock vsftpd package.
After further investigation we realized that vsftpd was not using the system timezone settings but was always logging its messages using GMT. Why would anyone want this? I have no idea, but in my opinion they should change the default value to use the system timezone, as that is what the majority of people would expect. In order to fix this, you just have to add the following line to vsftpd.conf:
use_localtime=YES
As we can see from the vsftpd manual page (man vsftpd.conf), if undefined this defaults to “NO”:
“use_localtime - If enabled, vsftpd will display directory listings with the time in your local time zone. The default is to display GMT. The times returned by the MDTM FTP command are also affected by this option.
Default: NO”
After changing this variable, as with any other vsftpd option, you have to restart the vsftpd service to activate the change.
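On a CentOS box that typically means:
service vsftpd restart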
When I started working in systems, one of my first clients was a major bank. Yes, this was back in the mainframe batch processing days. They never did any system updates while they ran the month-end, quarter-end and especially year-end processing.
I always thought that they just weren’t confident in their system folks and scoffed at this policy as it always made our deadlines shorter.
I think this story convinced me that doing production work on the busiest web days is not a good idea. Maybe Microsoft should have borrowed a page from the mainframe policy manual: don’t do system updates on Cyber Monday or Black Friday, as it may cause a system outage.
For Internet users, Black Friday was supposed to be about buying and cashing back, but Microsoft’s Live Search cashback machine apparently broke down just as customers “barged in” to make some early morning purchases.
According to a blog posting, the unexpected outage occurred due to a significant spike in traffic, which caused the system to go down for several hours. It took quite a while for it to come back to life, but apparently that was related to investigating the issue and rebuilding and deploying the databases and indexes that support Microsoft Live Search Cashback.
The Domain Name System (DNS) is responsible for translating host names to IP addresses (and vice versa) and is critical for the normal operation of internet-connected systems. DNS cache poisoning (sometimes referred to as cache pollution) is an attack technique that allows an attacker to introduce forged DNS information into the cache of a caching nameserver. DNS cache poisoning is not a new concept; in fact, there are published articles that describe a number of inherent deficiencies in the DNS protocol and defects in common DNS implementations that facilitate DNS cache poisoning.
Fixed source port for generating queries – in most DNS implementations the source port for outgoing queries is fixed at the traditionally assigned DNS server port number, 53/udp.
We can easily find out if our own DNS server is using a fixed source port for queries by looking into named.conf; if we see a line like:
query-source port 53;
this means that port 53/udp will be used for all outgoing DNS queries.
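A quick way to check is to grep for it (the named.conf location can differ depending on your distro and chroot setup):
grep query-source /etc/named.conf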
This can be tested externally (you can check your ISP’s resolvers, for example) with the dig command:
dig +short @<IP_DNS_SERVER> porttest.dns-oarc.net txt
Here is a sample output for a server not using source port randomization:
dig +short @192.168.0.1 porttest.dns-oarc.net txt
porttest.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net.
"192.168.0.1 is **POOR**: 26 queries in 2.0 seconds from 1 ports with std dev 0"
and also one for a server that does this:
dig +short @192.168.0.2 porttest.dns-oarc.net txt
porttest.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net.
"192.168.0.2 is **GREAT**: 26 queries in 1.2 seconds from 26 ports with std dev 5243"
As this shows, it is using random ports for each query.
Now, if you want to be safe from this vulnerability you should upgrade to the latest BIND version available (yum install bind if using RHEL/CentOS/etc., or apt-get install bind9 if you use Debian/Ubuntu) and also remove lines such as these from your named.conf:
query-source port 53;
query-source-v6 port 53;
and reload named afterwards:
rndc reload
Once you are done, check it again with dig and it should now show ’GREAT’ as expected.
E-Commerce Report: Several of the most popular online merchants have struggled to cope with the heavy holiday traffic, a survey shows.
Ok, that was the NYTimes in 2003 – what about now? This season there have already been several announcements of e-commerce snafus. The Sears.com site shut down for a couple of hours, and there have even been rumors of apple.com having problems.
Although Snopes.com busts the claim of Cyber Monday as the busiest shopping day dollar wise, over the next five weeks eCommerce sites will experience their heaviest loads.
One interesting phenomenon that has emerged in the last few years is a sort of blending of the Black Friday and Cyber Monday concepts: The day that produces the most web traffic to online retail sites is Thanksgiving Day itself, as avid shoppers use the Internet to plan their strategies for Black Friday weekend sales at brick and mortar stores:
Matt Tahtam, a spokesman for Hitwise, a company that tracks 100 of the largest online retailers, says there’s another trend that’s emerged over the last few holiday seasons: the greatest amount of online traffic (searching and visiting, though not necessarily buying) happens on turkey day itself.
What is the biggest E-Shopping Day in History? According to this Forbes article (http://www.forbes.com/2006/11/29/cyber-monday-shopping-record-biz-cx_tvr_1129shop.html), Cyber Monday, tagged as the unofficial start of the online holiday shopping season, saw a 26% gain in sales from the same day in 2005 to $608 million, according to industry tracker comScore Networks. The result, beating expectations, marked the single biggest shopping day in e-commerce history.
So what should a pragmatic system admin do about this?
We don’t oversell your servers
We don’t run bandwidth, CPU, or any other aspect of hosting at or near capacity
We monitor our servers and distribute load to make sure that when the big spike happens, we’re ready and your site stays up.
Recently, Cpanel implemented a standard way of adding custom changes to the virtualhost configuration so that those changes are preserved after an upgrade or Apache rebuild.
Here are the two common situations of adding custom changes:
1.) Changes added inside a <VirtualHost>
This is very simple as we only need to create a single file that will contain our changes. But we need to understand the correct location to place this file so that our changes will be read properly.
There are several cases for adding these changes; below are some samples and the corresponding directory to put them in:
Apache 1/2 - all SSL: /usr/local/apache/conf/userdata/ssl/<filename>.conf
Apache 1/2 - all Standard: /usr/local/apache/conf/userdata/std/<filename>.conf
If you need to put the above changes on a specific Apache version you can put them this way:
Apache 1 - all SSL: /usr/local/apache/conf/userdata/ssl/1/<filename>.conf
Apache 2 - all Standard: /usr/local/apache/conf/userdata/std/2/<filename>.conf
The same process is followed for subdomains. For example, in one of my implementations I added a custom virtualhost include for a subdomain to take effect on standard (http), so I put it in this directory:
/usr/local/apache/conf/userdata/ssl/2/gerold/mysubdomain.gerold.com/custom.conf
Take note that you also need to create the directories (ssl, std, 1, 2, or mysubdomain.gerold.com) in order to have the correct directory structure/path.
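As a quick illustration, a standard (non-SSL) include that applies to all Apache 2 virtualhosts could be created roughly like this; the directive in the file is just a placeholder, put your own changes there:
mkdir -p /usr/local/apache/conf/userdata/std/2
echo "ServerSignature Off" > /usr/local/apache/conf/userdata/std/2/custom.conf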
You can verify if your custom changes were added correctly using this command:
/scripts/verify_vhost_includes
Then, update the include files:
For changes concerning a single account/user you can use this command:
/scripts/ensure_vhost_includes --user=<username>
And for all users run:
/scripts/ensure_vhost_includes --all-users
And finally, restart Apache (/etc/init.d/httpd restart)
2.) Changes added outside a <VirtualHost>
Adding custom changes outside of a virtualhost can be done in different ways, like creating templates or using the Include Editor.
In my example, I will discuss using the Include Editor, as I usually use this on some of our client sites.
Cpanel has three places to put our custom changes using the Include Editor; these are:
Pre-Main Include - this is placed at the top of the httpd.conf file
Location: /etc/httpd/conf/includes/pre_main_1.conf
Pre-VirtualHost Include - directives in this file are added before the first virtualhost configuration
Location: /etc/httpd/conf/includes/pre_virtualhost_1.conf
Post-VirtualHost Include - directives in this file are added at the end of httpd.conf
Location: /etc/httpd/conf/includes/post_virtualhost_1.conf
So to add our changes we can go to WHM: Main >> Service Configuration >> Apache Setup >> Include Editor, and select where you want to place your custom changes (pre-main, pre-vhost, or post-vhost).
You can also directly edit the files (pre_main_1.conf, pre_virtualhost_1.conf, post_virtualhost_1.conf) located at /etc/httpd/conf/includes/.
Finally, restart Apache (/etc/init.d/httpd restart) for changes to take effect.
NOTE: For a complete reference please refer to the Cpanel docs.