5 Redis Patterns for Real-Time Laravel Apps That Scale

Fred · AI Engineer & Developer Educator · 6 min read

After migrating from Redis to KeyDB and back again while processing billions in logistics transactions, I've learned which Redis patterns work at scale. Here are 5 battle-tested patterns you can implement today.

1. Pipeline Everything (10x Performance Boost)

Bad Pattern:

// Two commands per shipment — 2000 network round trips for 1000 records!
foreach ($shipments as $shipment) {
    Redis::set("shipment:{$shipment->id}", $shipment->toJson());
    Redis::expire("shipment:{$shipment->id}", 3600);
}

Production Pattern:

// One network round trip for the whole batch (wrap in MULTI if you also need atomicity)
Redis::pipeline(function ($pipe) use ($shipments) {
    foreach ($shipments as $shipment) {
        $pipe->setex(
            "shipment:{$shipment->id}",
            3600,
            $shipment->toJson()
        );
    }
});

Real Impact: Reduced our bulk import time from 45 seconds to 4 seconds for 10k records.
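The win is mostly latency math. Here's a back-of-envelope sketch — the 1 ms round-trip time is an assumption for illustration, not a measurement:

```php
<?php
// Rough latency math behind the pipeline speedup (1 ms RTT is an assumption).

$records = 10_000;
$rttMs = 1.0;

// Naive loop: one SET + one EXPIRE per record, each paying a full round trip.
$naiveMs = $records * 2 * $rttMs;

// Pipeline: every command rides in a single round trip
// (ignoring serialization and server execution time).
$pipelinedMs = 1 * $rttMs;

printf("naive: %.0f ms, pipelined: %.0f ms\n", $naiveMs, $pipelinedMs);
// naive: 20000 ms, pipelined: 1 ms
```

Even with optimistic 1 ms latency, the naive loop spends 20 seconds just waiting on the network — which is why pipelining dominates every other optimization here.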

2. Implement Sliding Window Rate Limiting

Forget fixed windows that cause thundering herds. Here's what we use in production:

class RateLimiter
{
    public static function attempt($key, $max = 60, $decay = 60)
    {
        $key = "rate_limit:{$key}";
        $now = microtime(true);
        $window = $now - $decay;

        // Redis::pipeline() returns an array with one result per queued command
        $results = Redis::pipeline(function ($pipe) use ($key, $now, $window, $decay) {
            // Remove entries that have slid out of the window
            $pipe->zremrangebyscore($key, 0, $window);

            // Add current request (unique member so simultaneous hits don't collapse)
            $pipe->zadd($key, $now, $now . ':' . uniqid('', true));

            // Count requests in window
            $pipe->zcard($key);

            // Set expiry so idle keys clean themselves up
            $pipe->expire($key, $decay + 1);
        });

        // Result of the third command (ZCARD): requests in the window
        $count = $results[2];

        return $count <= $max;
    }
}

// Usage in middleware
if (!RateLimiter::attempt("api:{$request->ip()}", 100, 60)) {
    return response('Rate limit exceeded', 429);
}

Why It Works: No spikes at window boundaries, fair distribution, self-cleaning.
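To see why the boundary matters, here's a self-contained toy simulation (plain PHP, no Redis) of the two counting strategies. A client bursts 100 requests just before a fixed window resets, then sends one more just after:

```php
<?php
// Toy comparison (not production code): how a fixed window and a sliding
// window count the same burst at a window boundary. Limit: 100 per 60 s.

function fixedWindowAllowed(array $timestamps, float $t, int $max): bool
{
    // Fixed window: only requests in the current 60 s bucket count.
    $bucket = floor($t / 60);
    $count = count(array_filter($timestamps, fn ($ts) => floor($ts / 60) === $bucket));
    return $count < $max;
}

function slidingWindowAllowed(array $timestamps, float $t, int $max): bool
{
    // Sliding window: every request in the trailing 60 s counts.
    $count = count(array_filter($timestamps, fn ($ts) => $ts > $t - 60));
    return $count < $max;
}

// 100 requests at t = 59.9 (end of bucket 0), one more at t = 60.1.
$history = array_fill(0, 100, 59.9);

// Fixed window resets at t = 60, so the client gets a fresh allowance...
var_dump(fixedWindowAllowed($history, 60.1, 100));   // bool(true)

// ...while the sliding window still sees all 100 recent requests.
var_dump(slidingWindowAllowed($history, 60.1, 100)); // bool(false)
```

The fixed window would happily admit a second full burst two-tenths of a second later — that's the thundering herd at the boundary.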

3. Cache Invalidation with Tags (The Right Way)

Laravel's cache tags are great until you need granular control. Here's our pattern:

class SmartCache
{
    public static function rememberWithDependencies($key, $tags, $ttl, $callback)
    {
        // Store the main cache
        $value = Cache::remember($key, $ttl, $callback);

        // Track dependencies in Redis sets
        Redis::pipeline(function ($pipe) use ($key, $tags) {
            foreach ($tags as $tag) {
                $pipe->sadd("cache_tag:{$tag}", $key);
                $pipe->expire("cache_tag:{$tag}", 86400); // 1 day
            }
        });

        return $value;
    }

    public static function invalidateTag($tag)
    {
        $keys = Redis::smembers("cache_tag:{$tag}");

        if (!empty($keys)) {
            // Delete all tagged keys in one pipeline
            Redis::pipeline(function ($pipe) use ($keys, $tag) {
                foreach ($keys as $key) {
                    $pipe->del($key);
                }
                $pipe->del("cache_tag:{$tag}");
            });
        }
    }
}

// Usage
$metrics = SmartCache::rememberWithDependencies(
    'dashboard.metrics',
    ['shipments', 'deliveries', "customer:{$customerId}"],
    300, // 5 minutes
    fn() => $this->calculateExpensiveMetrics()
);

// Invalidate all shipment-related caches
SmartCache::invalidateTag('shipments');

Production Win: Reduced cache misses by 73% and eliminated cascading invalidations.

4. Distributed Locks That Don't Deadlock

After losing $50k to a race condition, we implemented this bulletproof locking:

class DistributedLock
{
    public static function acquire($resource, $timeout = 10)
    {
        $lockKey = "lock:{$resource}";
        $identifier = uniqid(gethostname(), true);

        // Atomic "set if not exists" with expiry (predis argument order)
        $acquired = Redis::set(
            $lockKey,
            $identifier,
            'EX', // Expire after $timeout seconds
            $timeout,
            'NX'  // Only set if the key does not already exist
        );

        if ($acquired) {
            return $identifier;
        }

        // Check if lock is stale (backup mechanism)
        $lockHolder = Redis::get($lockKey);
        if (!$lockHolder) {
            // Lock expired between commands, try again
            return self::acquire($resource, $timeout);
        }

        return false;
    }

    public static function release($resource, $identifier)
    {
        $lockKey = "lock:{$resource}";

        // Lua script for atomic check and delete
        $script = "
            if redis.call('get', KEYS[1]) == ARGV[1] then
                return redis.call('del', KEYS[1])
            else
                return 0
            end
        ";

        return Redis::eval($script, 1, $lockKey, $identifier);
    }

    public static function withLock($resource, $callback, $timeout = 10)
    {
        $identifier = self::acquire($resource, $timeout);

        if (!$identifier) {
            throw new LockTimeoutException("Could not acquire lock for {$resource}");
        }

        try {
            return $callback();
        } finally {
            self::release($resource, $identifier);
        }
    }
}

// Usage for payment processing
DistributedLock::withLock("payment:{$invoice->id}", function () use ($invoice) {
    // Process payment safely
    if ($invoice->isPaid()) {
        return; // Idempotent check
    }

    $invoice->processPayment();
    $invoice->markAsPaid();
});

Critical: The Lua script ensures we only release OUR lock, preventing accidental release of someone else's lock.
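To make the failure mode concrete, here's a toy in-memory sketch (a plain array standing in for Redis) of exactly what the guarded release prevents:

```php
<?php
// Toy illustration (no Redis): worker A's lock TTL expired, worker B now
// holds the lock. A blind DEL from A would release B's lock; the guarded
// release — what the Lua script does atomically — leaves it alone.

$store = [];

// B acquired the lock after A's TTL expired.
$store['lock:payment:42'] = 'worker-B';

// A's naive release would be: unset($store['lock:payment:42']);
// and B would silently lose its lock.

// The guarded release A should use instead:
$mine = 'worker-A';
if (($store['lock:payment:42'] ?? null) === $mine) {
    unset($store['lock:payment:42']);
}

var_dump(isset($store['lock:payment:42'])); // bool(true) — B's lock survives
```

In real Redis the compare and the delete must happen in one atomic step — a GET followed by a separate DEL reopens the same race — which is why the check lives in a Lua script rather than in PHP.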

5. Real-Time Metrics with HyperLogLog

Track unique visitors/events without memory explosion:

class MetricsTracker
{
    public static function trackUnique($metric, $identifier, $window = 3600)
    {
        $key = "metric:{$metric}:" . floor(time() / $window);

        // HyperLogLog counts unique items in constant (~12 KB max) space
        Redis::pfadd($key, $identifier);
        Redis::expire($key, $window * 2); // Keep 2 windows

        return Redis::pfcount($key);
    }

    public static function getCardinality($metric, $windows = 1, $windowSize = 3600)
    {
        $keys = [];
        $now = time();

        for ($i = 0; $i < $windows; $i++) {
            $keys[] = "metric:{$metric}:" . floor(($now - ($i * $windowSize)) / $windowSize);
        }

        // Merge multiple HyperLogLogs
        return Redis::pfcount($keys);
    }

    public static function trackAndBroadcast($event, $userId)
    {
        // Track unique event
        $count = self::trackUnique("event:{$event}:users", $userId);

        // Track per-minute rate
        $rate = self::trackUnique("event:{$event}:rate", uniqid(), 60);

        // Broadcast if threshold reached
        if ($rate > 1000) {
            broadcast(new HighTrafficAlert($event, $rate));
        }

        return $count;
    }
}

// Usage
$uniqueVisitors = MetricsTracker::trackUnique('page.visits', $request->ip());
$dailyActive = MetricsTracker::getCardinality('user.active', 24, 3600);

// Track API usage without memory issues
MetricsTracker::trackAndBroadcast('api.call', $user->id);

Memory Savings: Tracking 10M unique users takes ~12KB instead of 40MB with traditional sets.
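One subtlety worth calling out: the window bucketing in trackUnique relies on floor(time / window) producing the same key suffix for every second inside a window, and rolling over exactly at the boundary. A quick self-contained illustration:

```php
<?php
// How floor(time / window) buckets timestamps into hourly HLL keys.

$window = 3600;

$t1 = 472223 * $window;  // exact start of an hour bucket
$t2 = $t1 + 3599;        // 59m59s later — still the same bucket
$t3 = $t1 + 3600;        // one second after that — next bucket

echo floor($t1 / $window), "\n"; // 472223
echo floor($t2 / $window), "\n"; // 472223 (same key -> same HyperLogLog)
echo floor($t3 / $window), "\n"; // 472224 (rolls to the next window's key)
```

Every event in the same hour lands in the same HyperLogLog key, and getCardinality can merge several consecutive bucket keys with a single PFCOUNT.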

Bonus: Redis Memory Optimization Checklist

From our production playbook:

// 1. Use compression for large values
Redis::setex(
    "large:{$id}",
    3600,
    gzcompress(json_encode($data), 9)
);

// 2. Use hashes for small objects (90% memory saving!)
Redis::hmset("user:{$id}", [
    'name' => $user->name,
    'email' => $user->email,
    'status' => $user->status
]);

// 3. Set aggressive expiries
Redis::setex($key, 300, $value); // 5 min default, not 1 hour

// 4. Use SCAN instead of KEYS (KEYS blocks the server)
$cursor = 0;
do {
    [$cursor, $keys] = Redis::scan($cursor, 'MATCH', 'shipment:*', 'COUNT', 100);
    // Process $keys
} while ((int) $cursor !== 0); // the cursor comes back as a string

// 5. Monitor memory usage
$info = Redis::info('memory');
if ($info['used_memory'] > 1073741824) { // 1GB
    alert("Redis memory critical: {$info['used_memory_human']}");
}

The Expensive Lessons

  1. Always set expiries - One missing TTL consumed 8GB of RAM
  2. Pipeline or perish - Network latency adds up fast
  3. Use the right data structure - HyperLogLog saved us $2k/month in memory
  4. Lock properly - Race conditions in financial systems = lawsuits
  5. Monitor everything - You can't fix what you don't measure

These patterns handle 50k requests/second in production. Start with pipelines and proper locking—they'll solve 80% of your Redis performance issues.

Master Laravel with Real Projects

Want to implement these Redis patterns in a real application? Build production-ready Laravel projects from scratch.

Each tutorial includes AI-assisted prompts to guide you through building scalable applications that use Redis effectively.

Questions about implementing these patterns? Drop a comment - I've probably debugged that issue at 3 AM.

Fred

AUTHOR

Full-stack developer with 10+ years building production applications. I write about cloud deployment, DevOps, and modern web development from real-world experience.