There are a few functions available to retrieve the URL and size of an image in the WordPress Media Library, but few people know that these functions will often lie about an image’s dimensions.
As an example, let’s define a custom image size of 1000 x 1000, cropped, and use
image_downsize() to retrieve the URL and dimensions for an attachment ID (the example works with
wp_get_attachment_image_src() as well). I’ll use
list() in the example, instead of an array variable for the return values, to keep the code more readable. ;-)
add_image_size( 'my-custom-size', 1000, 1000, true );
list( $img_url, $img_width, $img_height, $img_is_intermediate ) = image_downsize( $id, 'my-custom-size' );
WordPress will return a URL, and $img_width / $img_height may well be 1000 / 1000 – but not necessarily. Intermediate sizes are only generated when an image is uploaded, so if the attachment was uploaded before the size was registered (or the original is smaller than 1000 x 1000), the cropped file simply doesn’t exist: image_downsize() falls back to the full-size original and returns its constrained-to-fit dimensions instead.
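To find out what you actually got back, check the fourth return value and, if needed, the attachment metadata. A sketch using standard WordPress functions (with the ‘my-custom-size’ size from above):

```php
list( $img_url, $img_width, $img_height, $img_is_intermediate ) =
	image_downsize( $id, 'my-custom-size' );

if ( ! $img_is_intermediate ) {
	// No resized file exists for this size: $img_url points at the
	// full-size original, and $img_width / $img_height are merely
	// constrained-to-fit values, not the dimensions of a real file.
}

// The attachment metadata lists the sizes that really exist on disk:
$meta = wp_get_attachment_metadata( $id );
if ( isset( $meta['sizes']['my-custom-size'] ) ) {
	$real_width  = $meta['sizes']['my-custom-size']['width'];
	$real_height = $meta['sizes']['my-custom-size']['height'];
}
```

If the size is missing for older uploads, regenerating the thumbnails (re-running wp_generate_attachment_metadata, or using a regeneration plugin) will create it.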
I needed to assign the same featured image to the children and grandchildren of top-level pages, so I wrote this plugin to hook into the ‘get_post_metadata’ WordPress filter and assign them dynamically. The plugin should be available on WordPress.org shortly. Meanwhile, you can download it here.
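The core of the approach looks roughly like this (a simplified sketch, not the plugin’s actual code; the function name is made up). The filter short-circuits _thumbnail_id lookups and walks up the ancestor chain; note the remove_filter / add_filter dance to avoid infinite recursion when the filter callback itself calls get_post_meta():

```php
function inherit_featured_image( $value, $object_id, $meta_key, $single ) {
	if ( '_thumbnail_id' !== $meta_key ) {
		return $value; // null lets WordPress do the normal lookup
	}

	// Unhook ourselves while querying, or get_post_meta() would
	// re-trigger this filter and recurse forever.
	remove_filter( 'get_post_metadata', 'inherit_featured_image', 10 );

	$thumb_id = get_post_meta( $object_id, '_thumbnail_id', $single );
	if ( ! $thumb_id ) {
		// No featured image on this page: walk up the ancestors.
		foreach ( get_post_ancestors( $object_id ) as $ancestor_id ) {
			$thumb_id = get_post_meta( $ancestor_id, '_thumbnail_id', $single );
			if ( $thumb_id ) {
				break;
			}
		}
	}

	add_filter( 'get_post_metadata', 'inherit_featured_image', 10, 4 );

	// Returning non-null overrides the metadata lookup.
	return $thumb_id ? $thumb_id : $value;
}
add_filter( 'get_post_metadata', 'inherit_featured_image', 10, 4 );
```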
Continuing the earlier theme of Optimizing Images to Save Bandwidth and Speed Page Load, you can also encode small (background) images directly in your stylesheets. Each image or page element encoded within a stylesheet means one less HTTP connection for content, which in turn means pages finish loading faster. These images should be small and quick to download — what you want to save is the HTTP connection overhead, not the download time (both images and stylesheets are generally cached after downloading). The images should also be encoded within sourced stylesheet files, so the stylesheet files can be cached by the browser. If you encode images within your content (using
<style></style> tags, for example), the encoded image has to be downloaded on every page view, so although you’re saving HTTP connections, your page size has increased. By encoding images in sourced stylesheet files instead, the browser (and content delivery services) can cache the whole stylesheet, including the encoded image(s).
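The resulting CSS uses the data: URI scheme (RFC 2397); the selector and the truncated base64 payload below are just illustrative:

```css
/* The PNG lives inside the stylesheet itself: no extra HTTP request. */
.sidebar-icon {
	background: url("data:image/png;base64,iVBORw0KGgoAAAANSUhEUg...") no-repeat;
}
```

You can produce the base64 payload with something like `base64 -w0 icon.png` (GNU coreutils) and paste it in; keep an eye on the encoded size, since base64 inflates the data by roughly a third.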
A few weeks ago I mentioned the wesley.pl script from GitHub to optimize images, and how I had modified it to keep (or discard) the EXIF / XMP information. Making sure images are as small as possible saves bandwidth and improves page load times (and Google rank), so I think it’s worth discussing my image optimization process in more detail.
To improve page load times (and Google ranking), you should make sure all JPEG, PNG, and GIF files are properly optimized. Instead of writing my own wrapper script for jpegtran, pngcrush, and gifsicle, I used Mike Brittain’s wesley.pl script on GitHub. It works great, though I did have to modify the “jpegtran -copy” parameter it uses — I need to keep the EXIF on larger files and strip it from thumbnails. I posted the diff on the GitHub Issues page.
Update 2012-12-31: In case Mike doesn’t merge my diff, which adds a
--copy=[all|comments|none] command-line argument (see my comment below for more info), you can download the patched wesley.pl script here instead.
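For reference, a rough sketch of the lossless optimization commands a script like wesley.pl wraps (filenames are illustrative; the flags come from each tool’s manual, and the helper function is my own):

```shell
#!/bin/sh
# Losslessly recompress a JPEG. The third argument controls metadata:
# "all" keeps EXIF/comment markers, "none" strips them (for thumbnails).
optimize_jpeg() {
	in=$1; out=$2; exif=$3
	jpegtran -copy "$exif" -optimize -outfile "$out" "$in"
}

# optimize_jpeg photo.jpg photo-opt.jpg all    # large image: keep EXIF
# optimize_jpeg thumb.jpg thumb-opt.jpg none   # thumbnail: strip EXIF
# pngcrush -rem alla input.png output.png      # strip ancillary PNG chunks
# gifsicle -O2 input.gif -o output.gif         # re-optimize GIF frames
```

The point of `-copy none` on thumbnails is that a few kilobytes of EXIF can easily outweigh the thumbnail’s actual pixel data.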