Key Takeaways
- Object cache stores database query results in memory so WordPress doesn't re-query the database on every request
- Redis helps logged-in users, wp-admin, and WooCommerce dynamic pages that can't use page caching
- If your wp_options autoload data is over 2MB, Redis is caching bloat and amplifying the problem instead of fixing it
- A cache hit ratio below 80% means Redis isn't caching effectively and warrants investigation
- Clean your database before adding Redis. Caching dirty data gives you faster dirty data, not a fix
The advice everyone gives (without explaining it)
Every WordPress performance guide includes the same bullet point: "Install Redis for object caching." And that's where the advice stops.
Nobody explains what object caching actually does at the database level. Nobody mentions when it makes things worse. Nobody talks about monitoring hit ratios, setting the right eviction policy, or diagnosing a cache that's consuming memory without improving speed.
I've set up Redis on dozens of client sites. On most of them, it helped significantly. On some, it made no measurable difference. On a few, it made things worse because the underlying database was so bloated that Redis was caching megabytes of garbage data.
Here's what Redis actually does for WordPress, when to use it, and how to configure it so it works the way it should.
What object caching does
WordPress talks to the database constantly. Loading a single page can trigger 30-100+ database queries: fetching post content, loading plugin settings from wp_options, pulling user metadata, checking permissions, loading widget configurations, and reading theme settings.
Without object cache, every query hits MySQL/MariaDB. The database processes the query, returns the result, and WordPress uses it. On the next page load, the same queries run again, returning the same data.
With Redis as an object cache, the flow changes:
- WordPress needs data (e.g., get_option('siteurl'))
- WordPress checks Redis first
- If Redis has the value (cache hit), it returns immediately from memory. No database query.
- If Redis doesn't have the value (cache miss), WordPress queries the database, gets the result, stores it in Redis, and returns it
- Next time WordPress needs the same value, it's already in Redis
Database queries take 1-50ms depending on the query complexity and table size. Redis lookups take 0.1-0.5ms. On a page with 80 queries, if 70 of them hit Redis instead of the database, you save 70-3,500ms of database time per page load.
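The read-through flow above can be sketched in a few lines of shell. Everything here is a mock for illustration: cache_get/cache_set stand in for Redis using a temp directory, and db_query stands in for a slow MySQL lookup; none of these are real WordPress or redis-cli calls.

```shell
# Read-through cache flow, sketched with shell functions.
# cache_get/cache_set mock Redis; db_query mocks a slow database query.
CACHE_DIR=$(mktemp -d)
cache_get() { cat "$CACHE_DIR/$1" 2>/dev/null; }
cache_set() { printf '%s' "$2" > "$CACHE_DIR/$1"; }
db_query()  { sleep 0.05; printf 'https://example.com'; }   # stand-in for MySQL

get_option() {                        # mirrors the get_option() read path
  val=$(cache_get "$1")
  if [ -z "$val" ]; then              # cache miss: query the DB, populate the cache
    val=$(db_query "$1")
    cache_set "$1" "$val"
  fi
  printf '%s\n' "$val"
}

get_option siteurl    # first call: miss, hits the "database"
get_option siteurl    # second call: hit, served from the cache
```

The second call never touches db_query, which is exactly the savings described above: the slow path runs once, and every repeat read is a memory lookup.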
What it helps
- wp-admin: the WordPress admin panel is not page-cached. Every page load hits the database. Redis caches plugin settings, user preferences, and dashboard data so wp-admin feels faster.
- Logged-in users: page caching typically doesn't serve cached pages to logged-in users (the page content may be personalized). Redis caches the database queries those pages generate.
- WooCommerce: cart, checkout, and my-account pages can't be page-cached because they contain session-specific data. Redis caches the product data, settings, and configuration queries those pages need.
- REST API requests: if your site serves API requests (headless WordPress, mobile app backends, AJAX-heavy themes), each request hits the database. Redis reduces the per-request database load.
What it doesn't help
- Anonymous visitors on page-cached pages: if WP Rocket or another page cache is serving a cached HTML file, WordPress doesn't execute at all. No PHP, no database queries, no object cache lookups. Redis is irrelevant for these requests.
- Static assets: images, CSS, JavaScript, fonts. These are served by the web server directly. Redis has nothing to do with them.
- Slow queries caused by missing indexes: if a query takes 2 seconds because wp_postmeta has no meta_value index, Redis caches the result after the first slow execution. Subsequent requests are fast from cache, but the first request (and every cache miss after eviction) is still 2 seconds. Fix the query, don't mask it with cache.
When Redis makes things worse
This is the part nobody talks about.
Caching bloated wp_options data
WordPress loads all autoloaded options from wp_options on every request. If your autoload data is 8MB, WordPress pulls 8MB from the database, processes it, and if Redis is active, stores it in Redis.
Now Redis has 8MB of bloated option data in memory. Every request pulls 8MB from Redis instead of the database. It's faster (memory vs disk), but you're still loading 8MB into PHP memory per request. You've made the symptom less visible without fixing the problem.
Worse: that 8MB in Redis consumes memory that could be used for caching useful query results. If you gave Redis 128MB, 6% of your cache is one blob of option data that shouldn't be that large.
Fix the data first. Clean your wp_options table to get autoload data under 1MB. Then Redis caches the clean data efficiently.
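It's worth measuring that autoload size directly before enabling Redis. A quick check with WP-CLI, assuming the default wp_ table prefix (newer WordPress versions also use 'on' and 'auto' as autoload values, so the query covers those too):

```shell
# Total size of autoloaded options in MB; aim for well under 1MB.
# Assumes WP-CLI is installed and the table prefix is wp_.
wp db query "SELECT ROUND(SUM(LENGTH(option_value))/1024/1024, 2) AS autoload_mb
             FROM wp_options
             WHERE autoload IN ('yes', 'on', 'auto');"
```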
Low hit ratio from high data diversity
Some WordPress setups generate highly varied queries that don't repeat often. A site with 50,000 products where visitors browse different product combinations will have many unique meta queries. Redis caches result A, but the next request needs result B. By the time another visitor needs result A, it's been evicted.
If your cache hit ratio is below 60%, Redis is spending more time storing and evicting data than it saves on database queries. The memory and CPU overhead of running Redis outweigh the benefits.
Incorrect maxmemory-policy
If Redis runs out of memory and maxmemory-policy is set to noeviction, Redis starts returning errors instead of evicting old keys. WordPress interprets these as cache misses and falls back to the database, but the error handling adds overhead. Some plugins don't handle Redis errors gracefully and throw PHP warnings or break functionality.
Always use allkeys-lru for WordPress object caching. This evicts the least recently used keys when memory is full, which is the correct behavior for a cache.
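You can verify and correct the policy on a running instance with redis-cli. Note that CONFIG SET only changes the running process; persist the setting in redis.conf (or your Docker command flags) so it survives restarts.

```shell
# Check the current eviction policy, then fix it live if it's wrong.
redis-cli CONFIG GET maxmemory-policy
redis-cli CONFIG SET maxmemory-policy allkeys-lru
```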
Setting up Redis correctly
Install Redis
On Ubuntu/Debian:

```shell
sudo apt install redis-server
```

Or with Docker (from the production stack guide):

```yaml
redis:
  image: redis:7-alpine
  command: redis-server --maxmemory 128mb --maxmemory-policy allkeys-lru --save ""
```

Configure Redis for WordPress caching
Edit /etc/redis/redis.conf (or pass flags in Docker):
```conf
# Memory limit
maxmemory 128mb

# Eviction policy - LRU is correct for caching
maxmemory-policy allkeys-lru

# Disable disk persistence - this is a cache, not a database
save ""
appendonly no

# Unix socket (faster than TCP on same server)
unixsocket /var/run/redis/redis.sock
unixsocketperm 770

# Connection limits
maxclients 100
timeout 300
```

Connect WordPress to Redis
Install the Redis Object Cache plugin by Till Kruss.
Add to wp-config.php:
```php
// TCP connection (default)
define('WP_REDIS_HOST', '127.0.0.1');
define('WP_REDIS_PORT', 6379);

// OR Unix socket (faster, same-server only)
// define('WP_REDIS_SCHEME', 'unix');
// define('WP_REDIS_PATH', '/var/run/redis/redis.sock');

// Database index (use different numbers for different WordPress sites on the same Redis server)
define('WP_REDIS_DATABASE', 0);

// Optional: key prefix to avoid collisions in multi-site setups
define('WP_REDIS_PREFIX', 'wp_mysite_');
```

Activate the plugin, go to Settings > Redis, and click "Enable Object Cache." The status should show "Connected" with a green indicator.
Monitoring Redis performance
The metrics that matter
```shell
redis-cli INFO stats | grep -E "keyspace_hits|keyspace_misses"
```

Calculate the hit ratio:
hit_ratio = keyspace_hits / (keyspace_hits + keyspace_misses) * 100
| Hit ratio | Meaning |
|---|---|
| 90%+ | Excellent. Redis is working well. |
| 80-90% | Good. Normal for WooCommerce sites with diverse product queries. |
| 60-80% | Mediocre. Investigate what's causing misses. May need more memory or data cleanup. |
| Below 60% | Poor. Redis may not be beneficial for this workload. |
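The calculation above can be done in one pass over the INFO output. This assumes the standard key:value format of INFO stats (the tr strips Redis's trailing carriage returns):

```shell
# Compute the cache hit ratio from Redis INFO stats.
redis-cli INFO stats | tr -d '\r' | awk -F: '
  /^keyspace_hits:/   { hits = $2 }
  /^keyspace_misses:/ { misses = $2 }
  END {
    if (hits + misses > 0)
      printf "hit ratio: %.1f%%\n", hits * 100 / (hits + misses)
  }'
```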
Check memory usage
```shell
redis-cli INFO memory | grep -E "used_memory_human|maxmemory_human|mem_fragmentation"
```

- used_memory_human: how much memory Redis is actually using
- maxmemory_human: the limit you set
- mem_fragmentation_ratio: should be between 1.0 and 1.5. Above 1.5 means memory fragmentation (consider restarting Redis). Below 1.0 means the OS is swapping Redis data to disk (increase maxmemory or reduce data).
Check evictions
```shell
redis-cli INFO stats | grep evicted_keys
```

If evicted_keys is growing steadily, Redis is full and actively removing cached data to make room. This isn't necessarily bad (it's how LRU caching works), but if it's growing fast, your maxmemory is too low for your workload.
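"Growing fast" is easier to judge as a rate than a raw counter. Sampling the counter twice gives you evictions per minute (the 60-second interval is arbitrary):

```shell
# Evictions per minute: sample evicted_keys twice, 60 seconds apart.
before=$(redis-cli INFO stats | tr -d '\r' | awk -F: '/^evicted_keys:/{print $2}')
sleep 60
after=$(redis-cli INFO stats | tr -d '\r' | awk -F: '/^evicted_keys:/{print $2}')
echo "evictions/min: $((after - before))"
```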
Check what's in the cache
```shell
# Count keys by prefix
redis-cli --scan --pattern "wp_*" | awk -F: '{print $1}' | sort | uniq -c | sort -rn | head -20
```

This shows which WordPress components are using the most cache space. Common prefixes:
- wp_options:alloptions: the autoloaded options blob (should be small if your database is clean)
- wp_:posts:: cached post objects
- wp_:post_meta:: cached postmeta
- wp_:transient:: cached transients (these are also in wp_options; Redis serves them faster)
- wp_:term_relationships:: taxonomy data
If one prefix dominates (e.g., alloptions is 5MB), that's where to focus your optimization.
Tuning for WooCommerce
WooCommerce benefits more from Redis than a standard WordPress site because of the volume of uncacheable (by page cache) requests: cart interactions, checkout, AJAX add-to-cart, product filtering, and admin order management.
WooCommerce-specific Redis settings
```php
// In wp-config.php

// Increase the maximum TTL for WooCommerce data
define('WP_REDIS_MAXTTL', 86400); // 24 hours

// Disable the object cache entirely if needed (e.g., for debugging)
// define('WP_REDIS_DISABLED', true);
```

Session handling with Redis
By default, WooCommerce stores session data in the database (the wp_woocommerce_sessions table; older versions used wp_options, which bloated the autoload query). You can move PHP session storage to Redis:

```php
// In wp-config.php
define('WP_REDIS_SESSIONS', true);
```

This stores PHP sessions in Redis instead of the filesystem or database. Benefits: faster session reads, no session files on disk, and automatic expiration handled by Redis TTL.
Monitor WooCommerce-specific cache usage
After running your store with Redis for 24 hours, check which WooCommerce data is being cached:
```shell
redis-cli --scan --pattern "*wc_*" | wc -l
redis-cli --scan --pattern "*product*" | wc -l
```

If product-related keys are being evicted frequently (high evicted_keys count and product data is a major portion of the cache), increase maxmemory for Redis. Product data is what you want cached most on a WooCommerce store.
Troubleshooting
Redis is connected but no speed improvement
- Check the hit ratio. If it's below 60%, Redis isn't caching effectively for your workload.
- Check if page caching is already handling most requests. If 95% of your traffic is anonymous and WP Rocket serves them from HTML cache, Redis only helps the 5% of logged-in/dynamic requests.
- Check if the bottleneck is elsewhere. Redis speeds up database reads. If your slowness comes from slow PHP execution, large external API calls, or unoptimized images, Redis won't help.
Redis memory keeps growing
Redis should stabilize at or near your maxmemory setting. If used_memory exceeds maxmemory, your eviction policy isn't working. Check that maxmemory-policy is set to allkeys-lru and not noeviction.
"Connection refused" errors in WordPress
Common causes:
- Redis isn't running: check with systemctl status redis
- Wrong host/port in wp-config.php
- Redis is bound to localhost and WordPress is connecting from a different host (common in Docker setups; use the service name instead of 127.0.0.1)
- Redis maxclients reached: increase the limit in redis.conf
Cache data is stale after updating content
The Redis Object Cache plugin automatically invalidates cached data when WordPress updates posts, options, or metadata. If you're seeing stale data:
- Try flushing the cache: Settings > Redis > Flush Cache
- Check if a separate full-page cache (Cloudflare, Varnish) is serving stale HTML on top of Redis
- Check if a plugin is bypassing WordPress's cache invalidation hooks
The right order of operations
Based on the performance work I do across client sites, here's when Redis should be added relative to other optimizations:
- First: Clean the database. Remove dead data from wp_options, wp_postmeta, and Action Scheduler tables.
- Second: Add missing indexes. Fix slow queries at the database level.
- Third: Set up page caching. WP Rocket + Cloudflare handles anonymous visitor speed.
- Fourth: Add Redis. Now you're caching clean data with efficient queries behind a page cache.
- Fifth: Tune PHP-FPM. With database load reduced by Redis, each PHP worker finishes faster and you can handle more concurrent users.
Adding Redis at step 1 (before cleaning the database) caches garbage. Adding it at step 5 (after everything else) means you've already solved most of the performance problem and Redis adds diminishing returns.
Step 4 is the sweet spot. The database is clean, queries are indexed, page cache handles the bulk of traffic, and Redis speeds up everything the page cache can't serve.
Related reading
- How to Clean Your wp_options Table (The Right Way). Clean the data before you cache it. Redis caching a 10MB autoload blob doesn't fix the 10MB problem.
- WordPress on Docker: A Production-Ready Stack. The complete containerized stack including Redis configuration.
- PHP-FPM Tuning for WordPress: Stop Guessing, Start Measuring. Tune PHP-FPM after adding Redis, when the per-request database load is lower.
- WP Rocket + Cloudflare: The Correct Configuration. Page caching sits in front of Redis. Configure it first.

Written by
Barry van Biljon
Full-stack developer specializing in high-performance web applications with React, Next.js, and WordPress.