It was 2 AM when PagerDuty woke me up. Our WebSocket servers were down, 3,000 active users disconnected, and our logistics dashboard was blind. The culprit? One customer's "helpful" Python script sending 50,000 location updates per second. Here's the rate limiting system we built to never let that happen again.
The Multi-Layer Defense System
Layer 1: Connection-Level Throttling
Limit connections per IP before they even authenticate:
// app/Websocket/Middleware/ConnectionThrottle.php
namespace App\Websocket\Middleware;
use Ratchet\ConnectionInterface;
use Illuminate\Support\Facades\Redis;
class ConnectionThrottle
{
private $maxConnectionsPerIp = 10;
private $maxConnectionsPerUser = 5; // enforced after authentication, once userId is known
private $connectionWindow = 60; // seconds
public function onOpen(ConnectionInterface $conn)
{
$ip = $conn->remoteAddress;
// Check IP-based limits
$ipKey = "ws:connections:ip:{$ip}";
$ipCount = Redis::incr($ipKey);
if ($ipCount === 1) {
Redis::expire($ipKey, $this->connectionWindow);
}
if ($ipCount > $this->maxConnectionsPerIp) {
$conn->send(json_encode([
'error' => 'Too many connections from your IP',
'retry_after' => $this->connectionWindow
]));
$conn->close();
Redis::decr($ipKey);
return false;
}
// Store connection metadata
$conn->metadata = [
'ip' => $ip,
'connected_at' => microtime(true),
'message_count' => 0,
'last_message_at' => null,
'rate_limit_hits' => 0
];
return true;
}
public function onClose(ConnectionInterface $conn)
{
// Clean up connection counts
$ip = $conn->metadata['ip'] ?? null;
if ($ip) {
Redis::decr("ws:connections:ip:{$ip}");
}
if (isset($conn->userId)) {
Redis::decr("ws:connections:user:{$conn->userId}");
}
}
}
Layer 2: Message Rate Limiting (Token Bucket Algorithm)
This is the gold standard for smooth rate limiting without hard cutoffs:
// app/Websocket/RateLimiter/TokenBucket.php
namespace App\Websocket\RateLimiter;
use Illuminate\Support\Facades\Redis;
class TokenBucket
{
private $capacity; // Maximum tokens (burst size)
private $refillRate; // Tokens added per second
public function __construct($capacity = 100, $refillRate = 10)
{
$this->capacity = $capacity;
$this->refillRate = $refillRate;
}
public function consume($key, $tokens = 1)
{
$bucketKey = "ws:bucket:{$key}";
// Lua script for atomic token bucket
$script = '
local key = KEYS[1]
local capacity = tonumber(ARGV[1])
local refill_rate = tonumber(ARGV[2])
local tokens_requested = tonumber(ARGV[3])
local now = tonumber(ARGV[4])
-- Get current bucket state
local bucket = redis.call("HMGET", key, "tokens", "last_refill")
local current_tokens = tonumber(bucket[1]) or capacity
local last_refill = tonumber(bucket[2]) or now
-- Refill continuously based on time passed. Flooring the refill here
-- would stall the bucket under frequent calls: last_refill advances on
-- every call, so fractional progress toward the next token would be
-- thrown away each time and the bucket could never refill.
local time_passed = now - last_refill
local tokens_to_add = time_passed * refill_rate
-- Refill bucket (up to capacity)
current_tokens = math.min(capacity, current_tokens + tokens_to_add)
-- Check if we can consume requested tokens
if current_tokens >= tokens_requested then
-- Consume tokens
current_tokens = current_tokens - tokens_requested
-- Update bucket state
redis.call("HMSET", key,
"tokens", current_tokens,
"last_refill", now
)
redis.call("EXPIRE", key, 3600)
return {1, current_tokens} -- Success
else
-- Not enough tokens
redis.call("HMSET", key,
"tokens", current_tokens,
"last_refill", now
)
redis.call("EXPIRE", key, 3600)
local wait_time = (tokens_requested - current_tokens) / refill_rate
-- Redis truncates Lua numbers to integers on return, which would turn
-- sub-second waits into 0 - return the wait time as a string instead
return {0, tostring(wait_time)} -- Failure with wait time
end
';
$result = Redis::eval(
$script,
1,
$bucketKey,
$this->capacity,
$this->refillRate,
$tokens,
microtime(true)
);
return [
'allowed' => $result[0] === 1,
'remaining' => $result[0] === 1 ? (int) $result[1] : 0,
'retry_after' => $result[0] === 0 ? (float) $result[1] : 0
];
}
}
// Usage in WebSocket handler
class MessageHandler
{
private $globalBucket;
private $userBuckets = [];
public function __construct()
{
// Global limit: burst of 1,000 messages, refilling 100/sec, shared across all users
$this->globalBucket = new TokenBucket(1000, 100);
}
public function onMessage(ConnectionInterface $conn, $msg)
{
// User-specific bucket (lazy initialization)
if (!isset($this->userBuckets[$conn->userId])) {
$this->userBuckets[$conn->userId] = new TokenBucket(
capacity: 60, // 60 messages burst
refillRate: 1 // 1 message per second sustained
);
}
// Check global rate limit first
$globalResult = $this->globalBucket->consume('global', 1);
if (!$globalResult['allowed']) {
return $this->throttleResponse($conn, $globalResult['retry_after']);
}
// Check user rate limit
$userResult = $this->userBuckets[$conn->userId]->consume("user:{$conn->userId}", 1);
if (!$userResult['allowed']) {
return $this->throttleResponse($conn, $userResult['retry_after']);
}
// Check message-type specific limits
$msgData = json_decode($msg, true);
if (isset($msgData['type'])) {
$typeResult = $this->checkTypeLimit($conn->userId, $msgData['type']);
if (!$typeResult['allowed']) {
return $this->throttleResponse($conn, $typeResult['retry_after']);
}
}
// Process message normally
$this->processMessage($conn, $msg);
}
private function checkTypeLimit($userId, $type)
{
// Different limits for different message types
$limits = [
'location_update' => ['capacity' => 10, 'rate' => 0.5], // 10 burst, 1 per 2 sec
'status_change' => ['capacity' => 5, 'rate' => 0.2], // 5 burst, 1 per 5 sec
'bulk_update' => ['capacity' => 2, 'rate' => 0.033], // 2 burst, 1 per 30 sec
'file_upload' => ['capacity' => 1, 'rate' => 0.0167], // 1 burst, 1 per minute
];
$config = $limits[$type] ?? ['capacity' => 30, 'rate' => 1];
$bucket = new TokenBucket($config['capacity'], $config['rate']);
return $bucket->consume("user:{$userId}:type:{$type}");
}
private function throttleResponse($conn, $retryAfter)
{
$conn->metadata['rate_limit_hits']++;
// Scale up the wait time for repeat offenders (capped at 5x)
if ($conn->metadata['rate_limit_hits'] > 10) {
$retryAfter *= min($conn->metadata['rate_limit_hits'] / 10, 5);
}
$conn->send(json_encode([
'type' => 'rate_limit',
'retry_after' => round($retryAfter, 2),
'message' => 'Slow down! Too many messages.'
]));
// Auto-disconnect abusive connections
if ($conn->metadata['rate_limit_hits'] > 100) {
$this->banUser($conn->userId, 3600); // 1 hour ban
$conn->close();
}
}
}
Layer 3: Intelligent Circuit Breaker
Protect your backend from WebSocket message floods:
// app/Websocket/CircuitBreaker.php
namespace App\Websocket;
use Illuminate\Support\Facades\Redis;
class CircuitBreaker
{
private $thresholds = [
'error_rate' => 0.5, // 50% errors
'volume_threshold' => 20, // Minimum requests to evaluate
'timeout_duration' => 30, // Seconds to wait when open
'success_threshold' => 5 // Successes needed to close
];
public function call($service, callable $callback)
{
$state = $this->getState($service);
if ($state === 'open') {
$lastOpen = Redis::get("circuit:{$service}:opened_at");
if (time() - $lastOpen < $this->thresholds['timeout_duration']) {
throw new CircuitOpenException("Circuit breaker is OPEN for {$service}");
}
// Timeout elapsed - try half-open state
$this->setState($service, 'half-open');
$state = 'half-open'; // keep the local copy in sync for the success check below
}
try {
$result = $callback();
$this->recordSuccess($service);
if ($state === 'half-open') {
$successes = Redis::incr("circuit:{$service}:successes");
if ($successes >= $this->thresholds['success_threshold']) {
$this->setState($service, 'closed');
}
}
return $result;
} catch (\Exception $e) {
$this->recordFailure($service);
if ($this->shouldOpen($service)) {
$this->setState($service, 'open');
Redis::set("circuit:{$service}:opened_at", time());
}
throw $e;
}
}
private function shouldOpen($service)
{
$window = 60; // 1 minute window
$now = time();
$windowStart = $now - $window;
// Count recent requests
$total = Redis::zcount("circuit:{$service}:requests", $windowStart, $now);
$errors = Redis::zcount("circuit:{$service}:errors", $windowStart, $now);
if ($total < $this->thresholds['volume_threshold']) {
return false;
}
$errorRate = $errors / $total;
return $errorRate >= $this->thresholds['error_rate'];
}
}
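The `call()` method leans on four helpers (`getState`, `setState`, `recordSuccess`, `recordFailure`) that the listing above elides. Here is one possible sketch of them, backed by the same Redis keys that `shouldOpen()` and the metrics dashboard read; the member values and the half-open reset logic are assumptions, not the original implementation:

```php
// Hypothetical helper methods - these belong inside the CircuitBreaker class
// and assume the Illuminate\Support\Facades\Redis facade is imported.
private function getState($service)
{
    // Same key the metrics dashboard reads: circuit:{service}:state
    return Redis::get("circuit:{$service}:state") ?: 'closed';
}

private function setState($service, $state)
{
    Redis::set("circuit:{$service}:state", $state);
    if ($state === 'half-open') {
        Redis::del("circuit:{$service}:successes"); // restart the success count
    }
}

private function recordSuccess($service)
{
    // Sorted sets scored by timestamp, so shouldOpen() can ZCOUNT a
    // sliding window (trim old members periodically with ZREMRANGEBYSCORE)
    Redis::zadd("circuit:{$service}:requests", time(), uniqid('req:', true));
}

private function recordFailure($service)
{
    $now = time();
    Redis::zadd("circuit:{$service}:requests", $now, uniqid('req:', true));
    Redis::zadd("circuit:{$service}:errors", $now, uniqid('err:', true));
}
```

Each member gets a unique value so repeated events within the same second all count toward the window.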
// Usage: Protect database from WebSocket floods
$circuitBreaker = new CircuitBreaker();
$conn->on('message', function ($msg) use ($circuitBreaker, $conn) {
try {
$circuitBreaker->call('shipment_updates', function () use ($msg) {
// Your database operation ($msg arrives as a JSON string)
$data = json_decode($msg);
Shipment::where('tracking', $data->tracking)->update((array) $data->data);
});
} catch (CircuitOpenException $e) {
$conn->send(json_encode([
'error' => 'Service temporarily unavailable',
'retry_after' => 30
]));
}
});
Layer 4: Adaptive Rate Limiting
Automatically adjust limits based on system load:
class AdaptiveRateLimiter
{
public function getLimit($userId)
{
// Base limits
$baseLimit = 60;
$premium = User::find($userId)->isPremium();
$limit = $premium ? $baseLimit * 2 : $baseLimit;
// Adjust based on system load
$systemLoad = $this->getSystemLoad();
if ($systemLoad > 0.8) {
$limit *= 0.5; // Halve limits under high load
} elseif ($systemLoad > 0.6) {
$limit *= 0.75; // Reduce by 25%
}
// Reputation-based adjustment
$reputation = $this->getUserReputation($userId);
if ($reputation < 0.3) {
$limit *= 0.5; // Bad actors get less
} elseif ($reputation > 0.8) {
$limit *= 1.5; // Good users get more
}
return max(10, (int)$limit); // Minimum 10 messages
}
private function getSystemLoad()
{
// Check Redis memory usage (maxmemory is 0 when unlimited - guard the division)
$info = Redis::info('memory');
$maxMemory = $info['maxmemory'] ?? 0;
$memoryUsage = $maxMemory > 0 ? $info['used_memory'] / $maxMemory : 0;
// Check queue size
$queueSize = Redis::llen('broadcasts');
$queueLoad = min(1, $queueSize / 10000);
// Check active connections (GET returns null when the key is missing)
$activeConnections = (int) Redis::get('ws:active_connections');
$connectionLoad = min(1, $activeConnections / 5000);
// Weighted average
return ($memoryUsage * 0.3 + $queueLoad * 0.4 + $connectionLoad * 0.3);
}
private function getUserReputation($userId)
{
$key = "user:reputation:{$userId}";
// Track good and bad behavior (HGET returns false for a missing field, so use ?:)
$goodActions = Redis::hget($key, 'good') ?: 0;
$badActions = Redis::hget($key, 'bad') ?: 0;
$total = $goodActions + $badActions;
if ($total < 100) {
return 0.5; // Neutral for new users
}
return $goodActions / $total;
}
}
Production Monitoring Dashboard
Track rate limiting effectiveness:
// app/Http/Controllers/RateLimitMetricsController.php
public function metrics()
{
return [
'current_connections' => Redis::get('ws:active_connections'),
'rate_limit_hits' => Redis::get('metrics:rate_limit_hits:' . date('Y-m-d-H')),
'banned_users' => Redis::scard('ws:banned_users'),
'circuit_breaker_state' => [
'shipment_updates' => Redis::get('circuit:shipment_updates:state'),
'database_writes' => Redis::get('circuit:database_writes:state'),
],
'top_offenders' => $this->getTopOffenders(),
'system_load' => (new AdaptiveRateLimiter())->getSystemLoad(),
];
}
private function getTopOffenders()
{
// Indexes are zero-based and inclusive, so 0-9 is the top 10
return Redis::zrevrange('ws:rate_limit:offenders', 0, 9, true);
}
Emergency Kill Switch
When things go really wrong:
# Emergency Redis commands to restore service
redis-cli FLUSHDB # Nuclear option - wipes every key and disconnects everyone
# DEL does not expand globs - feed it matching keys via --scan instead
redis-cli --scan --pattern 'ws:connections:*' | xargs redis-cli DEL # Reset connection counts
redis-cli --scan --pattern 'ws:bucket:*' | xargs redis-cli DEL # Reset all rate limits
redis-cli --scan --pattern 'circuit:*' | xargs redis-cli DEL # Reset circuit breakers
Lessons from the Trenches
- Log Everything: We caught a DDoS attempt because we logged rate limit hits
- Different Limits for Different Operations: Bulk updates need stricter limits than status checks
- Circuit Breakers Save Databases: They prevented cascade failures 3 times last month
- Token Buckets > Fixed Windows: Smoother experience, no thundering herds
- Always Have a Kill Switch: Sometimes you need to drop all connections NOW
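On the "Token Buckets > Fixed Windows" point: with a fixed window, every throttled client's counter resets at the same instant, so they all retry at once; a token bucket refills continuously and has no boundary to stampede. Here is a dependency-free sketch of the same continuous-refill math the Lua script uses (the `SimpleBucket` class and its test values are illustrative only):

```php
<?php
// Minimal in-memory token bucket - same continuous-refill idea as the
// Redis Lua script above, with no external dependencies.
class SimpleBucket
{
    private $capacity;
    private $refillRate; // tokens per second
    private $tokens;
    private $lastRefill;

    public function __construct($capacity, $refillRate)
    {
        $this->capacity = $capacity;
        $this->refillRate = $refillRate;
        $this->tokens = $capacity;
        $this->lastRefill = 0.0;
    }

    public function consume($now, $tokens = 1)
    {
        // Refill continuously based on elapsed time - nothing special
        // happens at any window boundary, so there is no reset stampede
        $this->tokens = min(
            $this->capacity,
            $this->tokens + ($now - $this->lastRefill) * $this->refillRate
        );
        $this->lastRefill = $now;
        if ($this->tokens >= $tokens) {
            $this->tokens -= $tokens;
            return true;
        }
        return false;
    }
}

$bucket = new SimpleBucket(5, 1.0); // burst of 5, 1 token/sec sustained

// The full burst is absorbed immediately...
for ($i = 0; $i < 5; $i++) {
    assert($bucket->consume(0.0) === true);
}
// ...the 6th message is rejected...
assert($bucket->consume(0.0) === false);
// ...half a second later a full token still has not accrued...
assert($bucket->consume(0.5) === false);
// ...but after a full second one token is back.
assert($bucket->consume(1.0) === true);
```

Compare that with a fixed 60-second window: a burst of 60 at second 59 followed by another burst at second 61 is perfectly legal, which is exactly the spike the token bucket smooths out.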
The system above handles 10k concurrent connections with bad actors trying to abuse it daily. It's not perfect, but it's kept us online for 2 years straight.
Learn Laravel WebSockets from Scratch
Ready to build real-time applications with WebSockets? Start with these comprehensive Laravel tutorials:
- Build a Blog with Laravel - Master the fundamentals: routing, Eloquent, and authentication
- Build a Portfolio with Laravel - Add real-time features like live notifications and status updates
- Build E-Commerce with Laravel - Implement advanced patterns including real-time inventory and order tracking
Each tutorial includes step-by-step AI prompts to guide you from basics to production-ready applications with WebSocket support.
Got war stories about WebSocket rate limiting? Drop them in the comments - we're all in this together.
Fred
Author
Full-stack developer with 10+ years building production applications. I write about cloud deployment, DevOps, and modern web development from real-world experience.

