I’ve been looking for ways to improve the performance of my websites — several of them using PHP (WordPress), a few using mod_perl, and all running under the same Apache Httpd web server. The first thing I did was to sync all the static content to a Virtual Machine (VM) off-site, and redirect all requests for that content to the VM. Using redirects has several advantages that (to me) outweigh the small disadvantage that all requests for static content still come to my web server, which then responds with a permanent redirect. The upside (compared to a Perl-based rewrite map, for example) is that any change to the static content is immediately apparent to new visitors — excluding the usual browser cache issues. ;-)
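A sketch of what such a permanent redirect looks like in Apache Httpd configuration — the `static.example.com` hostname and `/static/` prefix are placeholders, not the actual setup described here:

```apache
# Answer every request for static content with a 301 (permanent)
# redirect, so new visitors immediately pick up the off-site copy.
Redirect permanent /static/ http://static.example.com/static/
```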
I also wrote a script to check the Apache Httpd MPM config limits (prefork, worker, and event) and issue a warning or error message if those limits were set too high for the server’s memory. Writing this script made me realize just how much more memory each process was using since adding those PHP websites. Switching from the prefork MPM to the worker or event MPM (where each process manages multiple threads/connections instead of just one) could have been one possible solution, but using PHP in a threaded environment is highly discouraged by the PHP team.
The Alternative PHP Cache (APC)
After running some performance tests recently, I decided to install and configure PHP’s Alternative PHP Cache (APC) to improve the response time of my PHP-based websites. APC is a free and open opcode cache for PHP — instead of compiling a PHP script for each request, an opcode cache keeps the precompiled code in memory, making it available to all httpd processes as shared memory. And this is where things get interesting… Most of the information I’ve seen on PHP opcode caches focuses on the performance aspect — having the opcode precompiled and ready — but the other benefit I’ve found is that process sizes are actually reduced. This may not be apparent at first if you don’t consider the shared memory used by each process. For example, before activating APC, my httpd process size was about 100 MB in memory (RSS), with about 20 MB of that shared (SHR) between all httpd processes. This means the actual RAM used by each httpd process was about 80 MB, not 100 MB. After installing APC, the process size in memory (RSS) has increased to about 120-140 MB, but the shared part of that memory has increased even more — it’s now about 50-60% of the RSS size. This means the real process size has gone from 80 MB down to about 60 MB or less!
As an example, 50 processes of 80 MB each, plus the 20 MB of shared memory, make for a total of 4020 MB. After installing APC, the same 50 processes are about 60 MB each, plus the 80 MB of shared memory, for a total of 3080 MB.
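That arithmetic, restated as a quick shell check — the process counts and sizes are just the round numbers from the example above (100 MB RSS / 20 MB SHR before, 140 MB RSS / 80 MB SHR after):

```shell
# Real memory = N * (RSS - SHR) + SHR, using the example's round numbers.
before=$(( 50 * (100 - 20) + 20 ))   # 50 procs, 100 MB RSS, 20 MB shared
after=$((  50 * (140 - 80) + 80 ))   # 50 procs, 140 MB RSS, 80 MB shared
echo "before APC: ${before} MB, after APC: ${after} MB"
```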
These numbers reflect the general size of httpd processes on my own server — process sizes change over time, especially if you use MaxRequestsPerChild, and will be different on other web servers. With APC, I can now run about 20% more httpd processes on the same server.
To get an idea of the difference in memory usage with APC, calculate the real process size (RSS minus SHR) or run check_httpd_limits.pl (also available on Google Code) before and after enabling APC.
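For a rough version of that calculation without the script, on Linux you can read each process’s resident and shared page counts straight from /proc — the `httpd` process name is an assumption (Debian-based systems call it `apache2`):

```shell
#!/bin/sh
# Rough per-process "real" size (RSS minus shared pages) for httpd.
# /proc/<pid>/statm fields: size resident shared ... (all in pages).
pagesize_kb=$(( $(getconf PAGESIZE) / 1024 ))
for pid in $(pgrep httpd); do
    read -r _ resident shared _ < "/proc/$pid/statm"
    printf '%s: %d kB real\n' "$pid" $(( (resident - shared) * pagesize_kb ))
done
```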
I started with a 256 MB opcode cache, and installed the apc.php script (from the APC package) to monitor cache usage. The cache is currently 92% used, with a 99.9% hit rate (as the screenshot above shows). My APC settings live in /etc/php.ini.
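A php.ini fragment along those lines — the 256 MB shm_size comes from the text; the remaining directives and values are hypothetical, not my actual settings:

```ini
extension = apc.so
apc.enabled = 1
; 256 MB shared memory segment for the opcode cache
apc.shm_size = 256M
; stat each script on request so edits show up immediately
apc.stat = 1
```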
To keep an eye on my httpd process sizes, I run the following cronjob every hour:
# check httpd process sizes
0 * * * * /usr/local/bin/check_httpd_limits.pl --save --days=14 --maxavg --swappct=25 | grep -v '^OK:'
The check_httpd_limits.pl script saves the current RSS and SHR averages to an SQLite file, keeps a history of 14 days, bases its memory-use projections on the largest averages found (either current or in the SQLite database), and allows up to 25% swap usage before issuing a WARNING or ERROR message.
Since PHP doesn’t work well in a multi-threaded environment (like Apache Httpd’s worker or event MPM), FastCGI is often mentioned as a possible solution. By using FastCGI with PHP, mod_php can be removed from Apache Httpd, allowing it to run multi-threaded. A multi-threaded web server can handle many more requests (for the same number of running processes) than single-threaded (prefork) processes can. In theory this means having a smaller, faster, multi-threaded web server that can handle many more requests for static content, and using FastCGI only for dynamic PHP pages.
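As a sketch of that split on a current Apache Httpd (2.4+, using mod_proxy_fcgi and a PHP-FPM daemon — module names and the port are assumptions, not part of the setup described in this post):

```apache
# Load the threaded event MPM instead of prefork; mod_php is gone.
LoadModule mpm_event_module modules/mod_mpm_event.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so

# Hand .php requests to a PHP-FPM daemon listening on localhost:9000.
<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://127.0.0.1:9000"
</FilesMatch>
```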
Since all of my static content is hosted somewhere else, my Apache Httpd server is only responsible for redirects and delivering dynamic content (mod_php and mod_perl). I’m not convinced that moving PHP to another application would provide any noticeable performance or memory gains in my case.
Another method to off-load a web server is to run a lightweight “front-end” web server like nginx. It can deliver all the static content directly, and proxy the dynamic content requests to a “back-end” web server like Apache Httpd. Again, if you’re using a CDN for static content, the benefits might be minimal, unless the “front-end” web server can also handle redirects and cache the dynamic content. This might be something I will look into in the future.
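A minimal sketch of that front-end/back-end split in nginx — the document root, back-end port, and paths are hypothetical:

```nginx
server {
    listen 80;

    # Serve static files directly from disk...
    root /var/www/static;
    location / {
        try_files $uri @backend;
    }

    # ...and proxy everything else to the back-end Apache Httpd.
    location @backend {
        proxy_pass http://127.0.0.1:8080;
    }
}
```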
If you’re interested in FastCGI, Brandon Turner has written a fairly extensive article on using FastCGI with a PHP APC Opcode Cache.