Still need guidance getting your site ready for Gutenberg?
On Tuesday, June 5th at 10 am PT, I’m heading up a webinar with Pantheon to cover how you can get your site ready. More specifically, we’ll cover:
But wait, there’s more! The webinar is just one in a series of five, with sessions from Mel Choyce and Josh Pollock. You should sign up so Tessa and I aren’t terribly lonely on June 5th.
Musings on the WordPress REST API. I did a webinar (gag) with Pantheon covering real-world examples, writing custom endpoints, and including test coverage. Turned out surprisingly well, if I do say so myself.
Happy to announce the latest and greatest of WP LCache, [v0.5.0](https://github.com/lcache/wp-lcache/releases/tag/v0.5.0).
This release focuses predominantly on splitting WordPress’ alloptions cache into separate cache keys to mitigate cache pollution caused by race conditions. [Read #31245](https://core.trac.wordpress.org/ticket/31245) for all of the gory details. Thanks to [Ryan](https://twitter.com/rmccue) and [Joe](https://twitter.com/joe_hoyle) for paving the way.
If you haven’t thought about WP LCache since the [v0.1.0 announcement](https://handbuilt.co/2016/09/08/introducing-wp-lcache-v0-1-0/), you’re missing out. WP LCache is faster than other object cache implementations because:
* By using APCu, which lives in PHP memory, WP LCache uses the fastest possible persistent object cache backend and avoids costly network connections on every request. With a Memcached- or Redis-based persistent object cache where the cache server is on a different machine, the millisecond cost of each cache hit can add up to seconds of network transactions on every request.
* By incorporating a common L2 cache, WP LCache synchronizes cache data between multiple web nodes. Cache updates or deletes on one node are then applied to all other nodes. Without this synchronization behavior, APCu can’t be used in server configurations with multiple web nodes because the cache pool is local to the machine.
Still not convinced? WP LCache has a variety of features no one else has, including native handling of cache groups, meaning you can delete an entire group of keys with `wp_cache_delete_group( $group );`.
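To make the group-deletion semantics concrete, here’s a hedged sketch. The stub functions below are illustrative stand-ins so the example runs outside WordPress; in a real site, WP LCache provides these and `wp_cache_delete_group()` drops every key in a group in one call.

```php
<?php
// Illustrative stand-ins for the object cache API, so this sketch
// runs outside WordPress. Keys are bucketed by group so an entire
// group can be discarded at once.

$GLOBALS['cache'] = [];

function wp_cache_set( $key, $value, $group = 'default' ) {
    $GLOBALS['cache'][ $group ][ $key ] = $value;
    return true;
}

function wp_cache_get( $key, $group = 'default' ) {
    return $GLOBALS['cache'][ $group ][ $key ] ?? false;
}

function wp_cache_delete_group( $group ) {
    unset( $GLOBALS['cache'][ $group ] );
    return true;
}

// Two keys in the same (hypothetical) plugin group...
wp_cache_set( 'recent_posts', [ 1, 2, 3 ], 'my_plugin' );
wp_cache_set( 'post_count', 3, 'my_plugin' );

// ...and one call invalidates them both.
wp_cache_delete_group( 'my_plugin' );
```

Without native group handling, you’d have to track and delete every key in the group individually.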
Props again to [David Strauss](https://twitter.com/davidstrauss) for his ongoing work on the LCache library, and to [Pantheon](https://pantheon.io/) for sponsoring open source infrastructure.
Install, activate and enable with `wp plugin install --activate wp-lcache && wp lcache enable`.
WP Redis v0.6.0: new WP-CLI commands, precise hash groups. Install, activate and enable with `wp plugin install --activate wp-redis && wp redis enable`.
[WP LCache](https://wordpress.org/plugins/wp-lcache/) is a new persistent object cache drop-in for WordPress that I’ve been working on with the folks at [Pantheon](https://pantheon.io/).
“But why another object cache drop-in?” you might ask. Because it’s faster than [the others](https://handbuilt.co/2016/04/26/a-persistent-object-cache-is-not-a-panacea/), of course.
Under the hood, WP LCache uses [LCache](https://github.com/lcache/lcache), a library that applies the tiered caching model of multi-core processors (with local L1 and central L2 caches) to web applications. In this particular configuration, APCu is the L1 cache and the database is the L2 cache.
APCu is the fastest persistent cache backend you can use, because it exists in PHP memory. However, APCu traditionally can’t be used across multiple web nodes because each node represents a different cache pool. Because WP LCache has a database-backed L2 cache, a cache update or delete on one node is automatically synchronized to all other nodes.
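The tiered read path can be illustrated with a simplified sketch. Plain PHP arrays stand in for APCu (L1) and the database (L2), and the function names are illustrative, not the actual LCache API:

```php
<?php
// Simplified sketch of a tiered L1/L2 read-through cache.
// $l1 stands in for APCu (per-node, in PHP memory); $l2 for the
// shared database table that all web nodes can see.

$l1 = []; // local L1 cache (APCu in WP LCache)
$l2 = []; // shared L2 cache (the database in WP LCache)

function tiered_get( $key, array &$l1, array &$l2 ) {
    if ( array_key_exists( $key, $l1 ) ) {
        return $l1[ $key ]; // L1 hit: no network or DB round-trip
    }
    if ( array_key_exists( $key, $l2 ) ) {
        $l1[ $key ] = $l2[ $key ]; // warm L1 from L2
        return $l1[ $key ];
    }
    return false; // miss in both tiers
}

function tiered_set( $key, $value, array &$l1, array &$l2 ) {
    $l1[ $key ] = $value;
    $l2[ $key ] = $value; // other nodes pick the change up from L2
}

tiered_set( 'alloptions:siteurl', 'https://example.com', $l1, $l2 );
```

Because writes land in the shared L2, a node whose L1 is empty (or stale) still serves the current value and repopulates its local cache on the next read.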
Props to [David Strauss](https://twitter.com/davidstrauss) for his hard work on the LCache library. Thanks to [Steve Persch](https://twitter.com/stevector) and [Josh Koenig](https://twitter.com/outlandishjosh) for their help with the WordPress implementation.
Feeling adventurous? Install WP LCache from [WordPress.org](https://wordpress.org/plugins/wp-lcache/) or [Github](https://github.com/lcache/wp-lcache), and [send us feature requests (or bug reports)](https://github.com/lcache/wp-lcache/issues).
Testing WordPress Plugins on Pantheon with CircleCI. Great writeup by Steve Persch on how we’ve been using Behat and CircleCI for WordPress plugin integration tests.
“Like a fine wine with steak, persistent object caches are best paired with complex data generation.”
I’d have to say that 8-bit avatar is pretty awesome.
From Pantheon’s “Level Up: 5 Expert Tips for Optimizing WordPress Performance”
You absolutely need test coverage for your WP REST API project. I wrote a tutorial for Pantheon that I hope becomes the canonical reference point for everyone considering whether or not to write tests for their endpoints — please do.
WP Redis v0.5.0: Performance boost! Removes a redundant `exists` call from `wp_cache_get()`, which easily halves the number of Redis calls.
A quote I put together for an upcoming Pantheon whitepaper on scaling WordPress.
Used appropriately, a persistent object cache like Redis or Memcached can be an incredibly helpful tool for scaling your WordPress site.
Say, for instance, you have an unavoidable query that takes an entire second to run. Or you need to make an API request to a service that’s notoriously slow. In both cases, you can mitigate the performance impact by storing the result of the operation in the persistent object cache, then using the stored value when rendering your response.
Like a fine wine with steak, persistent object caches like Redis or Memcached are best paired with complex data generation. They provide a useful way to store computed data that was expensive to create, and too valuable to waste on a cheap meal.
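The cache-expensive-work pattern described above looks like this with the WordPress object cache API. Minimal in-memory stubs are included so the sketch runs outside WordPress (in a real site, a drop-in like WP Redis or WP LCache makes these calls persistent), and the “expensive” operation is a hypothetical stand-in:

```php
<?php
// Stubs so this runs outside WordPress; WordPress core provides
// wp_cache_get() / wp_cache_set(), and a persistent object cache
// drop-in makes them survive across requests.
if ( ! function_exists( 'wp_cache_get' ) ) {
    $GLOBALS['cache'] = [];
    function wp_cache_get( $key, $group = '' ) {
        return $GLOBALS['cache'][ "$group:$key" ] ?? false;
    }
    function wp_cache_set( $key, $value, $group = '', $expire = 0 ) {
        $GLOBALS['cache'][ "$group:$key" ] = $value;
        return true;
    }
}

function get_expensive_result() {
    // Check the cache first; false means a miss.
    $result = wp_cache_get( 'expensive_result', 'my_plugin' );
    if ( false === $result ) {
        // Stand-in for a slow query or a sluggish remote API request.
        $result = strrev( 'expensive computation' );
        // Store for five minutes so later requests skip the work.
        wp_cache_set( 'expensive_result', $result, 'my_plugin', 300 );
    }
    return $result;
}
```

Only the first call pays for the slow operation; subsequent calls within the expiration window read the stored value instead.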