I’ve been looking for ways to improve the performance of my websites — several of them using PHP (WordPress), a few using mod_perl, and all running under the same Apache httpd web server. The first thing I did was to sync all the static content to a Virtual Machine (VM) off-site, and redirect all requests for that content to the VM. Using redirects has several advantages that (to me) outweigh the small disadvantage that all requests for static content still come to my web server, which then responds with a permanent redirect. The upside (compared to using a Perl-based rewrite map, for example) is that any change to the static content is immediately apparent to new visitors — excluding the usual browser cache issues. ;-)
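The redirect itself can be a one-liner. A sketch with mod_alias, assuming the mirrored copy is served from a hypothetical static.example.com hostname on the VM (the hostname and /static/ path are placeholders, not my actual setup):

```apache
# Hypothetical hostname and path: permanently (301) redirect every
# request for static content to the mirrored copy on the off-site VM.
RedirectMatch permanent ^/static/(.*)$ http://static.example.com/static/$1
```

Because the browser follows the 301 and fetches the file from the VM, the origin server only pays for the tiny redirect response, not the file transfer.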
I also wrote a script to check the Apache httpd MPM config limits (prefork, worker, and event) and issue a warning or error message if those limits were set too high for the server’s memory. Writing this script made me realize just how much more memory each process was using since adding those PHP websites. Switching from the prefork MPM to the worker or event MPMs (where each process manages multiple threads/connections instead of just one) could have been one possible solution, but using PHP in a threaded environment is highly discouraged by the PHP team.
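For reference, a worker/event-style stanza looks something like this — the values are illustrative, not a tuning recommendation:

```apache
# Sketch of an event MPM stanza: each child process serves many
# connections via threads, so far fewer processes are needed.
# MaxClients = ServerLimit x ThreadsPerChild (8 x 25 = 200 here).
<IfModule mpm_event_module>
    ServerLimit          8
    ThreadsPerChild     25
    MaxClients         200
</IfModule>
```

With prefork, serving those same 200 concurrent connections would require 200 full processes — which is exactly where the memory math gets dangerous once PHP inflates each process.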
A little while ago I had to reboot a client’s VM because the web server forked too many processes. They were making use of PHP, but the web server had not been configured for the resulting larger process size. I searched for a tool that would analyze the size of running httpd processes, and project the impact of starting the maximum number of processes allowed by MaxClients or ServerLimit, but didn’t find anything, so ended up writing my own.
The following check_httpd_limits.pl script compares the size of running Apache httpd processes, the configured prefork/worker/event MPM limits, and the server’s available memory. The script exits with a warning or error message if the configured limits exceed the server’s available memory.
check_httpd_limits.pl does not use any third-party Perl modules, unless the
--save/days/max command-line options are used, in which case you will need to have the DBD::SQLite module installed. It should work on any UNIX server that provides /proc/meminfo, /proc/*/exe, /proc/*/stat, and /proc/*/statm files. You will probably have to run the script as root for it to read the /proc/*/exe symbolic links.
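The core projection is simple arithmetic. A minimal sketch of the idea — not the actual script — assuming you already know the largest httpd process size and the memory everything else is using (the numbers below are hypothetical):

```perl
#!/usr/bin/perl
# Minimal sketch of the check: project total memory use if Apache
# forked MaxClients processes, each as large as the largest httpd
# process observed. All sizes are in kB.
use strict;
use warnings;

# Projected total = non-httpd memory in use + (largest process x MaxClients).
sub projected_total_kb {
    my ($largest_proc_kb, $max_clients, $other_used_kb) = @_;
    return $other_used_kb + $largest_proc_kb * $max_clients;
}

# Hypothetical numbers: a 40 MB httpd process, MaxClients 256,
# 512 MB used by everything else, on a 2 GB server.
my $projected = projected_total_kb( 40_960, 256, 524_288 );
my $mem_total = 2_097_152;

printf "Projected: %d kB of %d kB total\n", $projected, $mem_total;
print "WARNING: MaxClients exceeds available memory\n"
    if $projected > $mem_total;
```

The real script gathers the inputs itself — MemTotal from /proc/meminfo, per-process resident sizes from /proc/*/statm — but the warning condition boils down to this comparison. In the hypothetical numbers above, 256 processes at 40 MB each would demand roughly 10 GB on a 2 GB server, which is exactly the kind of misconfiguration that forced the reboot.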
Sometimes I’ll work on something just to see what it looks like when it’s done. I guess this Apache rewrite might be something like that — I wanted to change the WordPress search query from the default /?s=value to
/s/value, just to make the URL look a little prettier. :) There are probably a few ways to do this, and if you’d like to share some alternatives, feel free to post a comment.
There are two parts to this problem. The first, executing a search query from an
/s/value URL, is easily addressed by a rewrite and proxy command. The second problem — how to rewrite a regular search query, but not a proxied search query — is a little trickier. I decided to add an htproxy hostname to my domain with an IP of 127.0.0.1. Then, in a rewrite condition, I check for the htproxy hostname and skip the rewrite if it’s a proxied request. The htproxy hostname must be included in the website’s Apache config as a ServerAlias.
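Putting the two parts together, the mod_rewrite rules look roughly like this — a sketch assuming placeholder hostnames (htproxy.example.com standing in for the htproxy alias, resolving to 127.0.0.1):

```apache
# Hypothetical hostnames; htproxy.example.com resolves to 127.0.0.1
# and is listed as a ServerAlias so the proxied request lands back
# in this same virtual host.
RewriteEngine On

# Part 1: serve the pretty /s/value URL by proxying the real
# WordPress search query behind the scenes.
RewriteRule ^/s/(.*)$ http://htproxy.example.com/?s=$1 [P]

# Part 2: redirect a regular ?s=value search to /s/value, but skip
# the rewrite when the request arrived via the htproxy hostname
# (i.e. it is the proxied request), which would otherwise loop.
RewriteCond %{HTTP_HOST} !^htproxy\. [NC]
RewriteCond %{QUERY_STRING} ^s=([^&]*)
RewriteRule ^/$ /s/%1? [R,L]
```

The trailing `?` on the final rule drops the original query string from the redirect, and `%1` carries over the search term captured by the last matching RewriteCond.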
Content Delivery Networks (CDNs) have become very popular in the past several years. They offer an easy way to save bandwidth and bring content physically closer to end-users. CDNs offer a variety of services, though pricing and features are usually tailored to larger content providers. As a smaller provider myself with only an ADSL line to host my personal websites — and as a SysAdmin who prefers to host his own content — I decided to mirror my static content, and redirect traffic as needed. The following describes a solution for keeping all of my content local, yet mirroring the static content for faster delivery.