Recently, I needed to sync several directories on a backup / fail-over server with the same directories on a production server. Rsync over SSH takes care of this, but to tighten security you should use the “command” restriction in the SSH authorized_keys file — this restricts the authenticated key to running a single command, with a specific set of arguments. For example, here’s a typical command that might be run from a backup server to rsync daily database dumps:
backup$ rsync -av --delete -e "ssh -i $HOME/.ssh/prod-rsync-key" \
    prod:/var/backups/db-dumps/ /backups/db-dumps/
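(The source and destination paths above are placeholders.) On the production side, the authorized_keys entry for that key can force every connection through a small wrapper script. Here’s a minimal sketch, assuming a hypothetical /usr/local/bin/rsync-only.sh; the exact "rsync --server" arguments vary with the client’s options and rsync version, so the wrapper matches on a prefix and suffix rather than the full string:

# In ~/.ssh/authorized_keys on the production server (a single line):
command="/usr/local/bin/rsync-only.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... backup-key

# /usr/local/bin/rsync-only.sh:
#!/bin/bash
# Only permit the rsync server command for the expected directory.
case "$SSH_ORIGINAL_COMMAND" in
    "rsync --server --sender "*" . /var/backups/db-dumps/")
        exec $SSH_ORIGINAL_COMMAND   # word splitting is intentional here
        ;;
    *)
        echo "Rejected: $SSH_ORIGINAL_COMMAND" >&2
        exit 1
        ;;
esac

With no-pty and the forced command in place, the key can’t be used for an interactive shell even if it’s stolen.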
A few weeks ago I mentioned the wesley.pl script from GitHub for optimizing images, and how I had modified it to keep (or discard) the EXIF / XMP information. Making sure images are as small as possible saves bandwidth and improves page load times (and Google rank), so I think it’s worth discussing my image optimization process in more detail.
To improve page load times (and Google ranking), you should make sure all JPEG, PNG, and GIF files are properly optimized. Instead of writing my own script around jpegtran, pngcrush, and gifsicle, I used Mike Brittain’s wesley.pl script on GitHub. It works great, though I did have to modify the “jpegtran -copy” parameter it uses — I need to keep the EXIF data on larger files and strip it from thumbnails. I posted the diff on the GitHub Issues page.
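The distinction comes down to jpegtran’s -copy switch; a quick illustration with hypothetical filenames:

# Keep EXIF / XMP markers on a full-size image:
jpegtran -copy all -optimize -outfile photo-opt.jpg photo.jpg

# Strip all metadata from a thumbnail:
jpegtran -copy none -optimize -outfile thumb-opt.jpg thumb.jpg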
Update 2012-12-31: In case Mike doesn’t merge my diff, with the addition of the --copy=[all|comments|none] command-line argument (see my comment below for more info), you can download the patched wesley.pl script here instead.
I wrote a bash script this morning to report the size of WordPress cache folders and the number of files they contain, read each file to prime the OS disk cache, and optionally flush the OS disk cache as well. You might run it to email a daily or weekly report of cache folder sizes, run it during or after a server boot to prime the OS disk cache, or run it on a regular schedule to make sure the OS cache is always primed. The script also has a “flush” argument to sync and drop the OS disk cache, which isn’t very useful (to me) except to see the difference in speed between a clean and a primed cache (about 11s vs 0.4s for all websites on my server).
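The core logic is short; here’s a minimal sketch, assuming the cache directories live under /var/www/*/wp-content/cache (adjust the glob for your layout):

#!/bin/bash
# Report size and file count for each cache folder, then read every
# file to pull it into the OS page cache.
for dir in /var/www/*/wp-content/cache; do
    [ -d "$dir" ] || continue
    size=$(du -sh "$dir" | cut -f1)
    count=$(find "$dir" -type f | wc -l)
    echo "$dir: $size in $count files"
    find "$dir" -type f -exec cat {} + > /dev/null
done

# Optional "flush" argument: sync and drop the OS disk cache
# (Linux-only, requires root).
if [ "$1" = "flush" ]; then
    sync
    echo 3 > /proc/sys/vm/drop_caches
fi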
I recently added some disk caching for MySQL queries, WordPress objects, PHP opcode, and PHP web pages on my server. There are several different caching techniques and applications available, and memcached seems like one of the more popular ones. Right or wrong, it appears to be the default go-to for many developers these days.
Since I’m a SysAdmin by profession (with maybe a penchant for scripting and integration), I tend to have a more “systems”-oriented approach — which led me first to consider, and then choose, disk caching over memcached. In this post, I’ll outline the reasons I chose disk caching, and why in most circumstances it might be superior to memcached.
I had to update several reverse zone files today, so I wrote a quick for-loop in bash to freeze and thaw all the zones. The script parsed the zone file names and reversed them into the proper d.c.b.a.in-addr.arpa format. Later I tweaked it with sed to make it more flexible (so I could pass it a full or partial IP address), but ended up using tac for the reversing part instead – that’s what it’s made for, after all. And if you’re wondering what tac stands for, just read cat backwards. :-)
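Something along these lines; a minimal sketch, assuming BIND’s rndc for the freeze/thaw and a hypothetical partial address:

# Reverse a full or partial IP address into in-addr.arpa form with tac:
ip="192.168.1"
zone="$(echo "$ip" | tr '.' '\n' | tac | paste -sd. -).in-addr.arpa"
echo "$zone"    # -> 1.168.192.in-addr.arpa

rndc freeze "$zone"
# ... edit the zone file, bump the serial ...
rndc thaw "$zone"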
I recently wrote a notification script for Centreon / Nagios to create and update tickets in OTRS. The ticket details and OTRS connection settings are all defined on the command line. The GenericTicketConnector.yml must first be installed in OTRS, and a user (aka “Agent”) created for the script. I used Perl’s taint mode, so I had to hard-code the various log file locations ($logfile, $csvfile, and $dbfile). The Log::Handler module allows the script to output and log different amounts of activity detail, and the DBD::SQLite module keeps a local database of the Ticket ID (from OTRS) and Problem ID (from Centreon / Nagios) associations — so the OTRS ticket can be updated with follow-up notifications from Centreon / Nagios for the same issue.
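That local database is just a small association table; here’s a sketch of the idea using the sqlite3 CLI (the table and column names are my illustration, not taken from the script itself):

# Create the Problem ID -> Ticket ID mapping (one time):
sqlite3 "$dbfile" 'CREATE TABLE IF NOT EXISTS ticket_map (
    problem_id TEXT PRIMARY KEY,  -- Problem ID from Centreon / Nagios
    ticket_id  TEXT NOT NULL      -- Ticket ID from OTRS
);'

# On a follow-up notification, look up which OTRS ticket to update:
sqlite3 "$dbfile" "SELECT ticket_id FROM ticket_map WHERE problem_id = '12345';"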
A few years ago I was supporting a very diverse environment of Solaris, AIX, and Linux servers; some with password logins, some with public/private key authentication, and several with SecurID passcodes. All accounts were local, passwords expired every three months, and the accounts locked after three failed logins — so you can imagine the mess this created if you didn’t go around to every server at least every three months. After I’d accumulated about half a dozen passwords, I wrote an Expect script to log in and change my password, and wrapped it with a bash script to try every old password I had. Since some servers needed a SecurID number to log in, the bash script would pause on those and prompt me for the token before continuing.
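The wrapper logic was simple; here’s a minimal sketch, where change-passwd.exp stands in for the Expect script (the file names, argument order, and SecurID handling are illustrative, not the original code):

#!/bin/bash
# Try each old password until the Expect script manages to log in
# and change it. OLD_PASSWORDS holds placeholder values.
OLD_PASSWORDS=("oldpass1" "oldpass2" "oldpass3")
HOST="$1"
NEW_PASS="$2"

# Pause for a SecurID token on servers that need one:
token=""
if grep -qx "$HOST" securid-hosts.txt; then
    read -rp "SecurID passcode for $HOST: " token
fi

for old in "${OLD_PASSWORDS[@]}"; do
    if ./change-passwd.exp "$HOST" "$old" "$NEW_PASS" "$token"; then
        echo "$HOST: password changed"
        exit 0
    fi
done
echo "$HOST: all old passwords failed" >&2
exit 1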