API Rate Limiting

v1.0.0

Rate limiting algorithms, implementation strategies, HTTP conventions, tiered limits, distributed patterns, and client-side handling. Use when protecting APIs from abuse, implementing usage tiers, or configuring gateway-level throttling.

Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the content: SKILL.md contains algorithms, gateway examples (NGINX, Kong), Redis patterns, HTTP header guidance, client retry patterns and monitoring notes — all appropriate for a rate-limiting skill.
Instruction Scope
Runtime instructions and code snippets remain within the domain of rate limiting. The examples reference helper functions (get_count, increment_count) and Redis, which are reasonable placeholders; nothing in the instructions asks the agent to read unrelated files, exfiltrate secrets, or contact unexpected endpoints. The README includes example install copy commands (local paths) and an npx URL example (documentation only) but the skill itself is instruction-only.
Install Mechanism
No install spec or code files are present (instruction-only), so nothing is written to disk by the skill. The README documents manual copy commands and an 'npx add' example pointing at a GitHub tree; those are documentation instructions rather than an automated install spec. As always, verify any external install command before running it.
Credentials
The skill declares no required environment variables, credentials, or config paths. Example configs reference Redis hosts (e.g., redis.internal) which is appropriate for distributed rate limiting and consistent with the stated purpose.
Persistence & Privilege
Flags show default behavior (not always:true). There is no install, no persistent privileges requested, and the skill does not attempt to modify other skills or system settings.
Assessment
This skill is an instruction-only reference about rate limiting and appears internally consistent. Before using: review any example install commands (the README's npx/copy examples) and avoid running unfamiliar scripts or downloads; if you adapt the snippets to production, ensure atomic operations for counters (use the shown Lua script or equivalent), secure your Redis/gateway endpoints, and validate header formats and limits against your privacy/security requirements.


1.7k downloads · 0 stars · 1 version · Updated 1mo ago · MIT-0

Rate Limiting Patterns

Algorithms

| Algorithm | Accuracy | Burst Handling | Best For |
|---|---|---|---|
| Token Bucket | High | Allows controlled bursts | API rate limiting, traffic shaping |
| Leaky Bucket | High | Smooths bursts entirely | Steady-rate processing, queues |
| Fixed Window | Low | Allows edge bursts (2x) | Simple use cases, prototyping |
| Sliding Window Log | Very High | Precise control | Strict compliance, billing-critical |
| Sliding Window Counter | High | Good approximation | Production APIs — best tradeoff |

Fixed window problem: A user sends the full limit at 11:59 and again at 12:01, doubling the effective rate. Sliding window fixes this.
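The edge burst is easy to demonstrate. Below is a hypothetical minimal fixed-window counter (names and limits are illustrative, not from the skill) showing a 100/min limit admitting 200 requests in a fraction of a second across a window boundary:

```python
from collections import defaultdict

# Hypothetical minimal fixed-window counter, only to illustrate the flaw.
class FixedWindowLimiter:
    def __init__(self, limit: int, window_sec: int):
        self.limit = limit
        self.window_sec = window_sec
        self.counts = defaultdict(int)  # (key, window index) -> request count

    def allow(self, key: str, now: float) -> bool:
        window = int(now // self.window_sec)
        if self.counts[(key, window)] >= self.limit:
            return False
        self.counts[(key, window)] += 1
        return True

limiter = FixedWindowLimiter(limit=100, window_sec=60)

# 100 requests at t=59.9s land in window 0 and all pass...
burst_1 = sum(limiter.allow("user", 59.9) for _ in range(100))
# ...and 100 more at t=60.1s land in window 1 with a fresh counter:
burst_2 = sum(limiter.allow("user", 60.1) for _ in range(100))
print(burst_1 + burst_2)  # 200 requests in ~0.2s despite the 100/min limit
```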

Token Bucket

Bucket holds tokens up to capacity. Tokens refill at a fixed rate. Each request consumes one.

import time

class TokenBucket:
    def __init__(self, capacity: int, refill_rate: float):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_rate = refill_rate  # tokens per second
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

Sliding Window Counter

Hybrid of fixed window and sliding window log — weights the previous window's count by overlap percentage:

import time

# get_count / increment_count are placeholder helpers for a per-window
# counter store (e.g. Redis hashes keyed by window number).
def sliding_window_allow(key: str, limit: int, window_sec: int) -> bool:
    now = time.time()
    current_window = int(now // window_sec)
    position_in_window = (now % window_sec) / window_sec

    prev_count = get_count(key, current_window - 1)
    curr_count = get_count(key, current_window)

    estimated = prev_count * (1 - position_in_window) + curr_count
    if estimated >= limit:
        return False
    increment_count(key, current_window)
    return True

Implementation Options

| Approach | Scope | Best For |
|---|---|---|
| In-memory | Single server | Zero latency, no dependencies |
| Redis (INCR + EXPIRE) | Distributed | Multi-instance deployments |
| API Gateway | Edge | No code, built-in dashboards |
| Middleware | Per-service | Fine-grained per-user/endpoint control |

Use gateway-level limiting as outer defense + application-level for fine-grained control.
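The layering can be sketched as follows. This is a hypothetical inner layer: assume the gateway (e.g. an NGINX `limit_req` zone) has already enforced a coarse per-IP limit, and the application adds a per-user check. `_counts`, `check_rate_limit`, and `handle_request` are illustrative names, and the in-memory dict stands in for a shared store such as Redis:

```python
from typing import Dict, Tuple

_counts: Dict[str, int] = {}  # in-memory stand-in for a shared counter store

def check_rate_limit(user_id: str, limit: int) -> bool:
    # Placeholder fixed-window style counter; swap in any algorithm above.
    _counts[user_id] = _counts.get(user_id, 0) + 1
    return _counts[user_id] <= limit

def handle_request(user_id: str, limit: int = 1000) -> Tuple[int, str]:
    # Gateway already rejected gross per-IP abuse; this is the fine layer.
    if not check_rate_limit(user_id, limit):
        return 429, "rate_limit_exceeded"
    return 200, "ok"
```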


HTTP Headers

Always return rate limit info, even on successful requests:

RateLimit-Limit: 1000
RateLimit-Remaining: 742
RateLimit-Reset: 1625097600
Retry-After: 30

| Header | When to Include |
|---|---|
| RateLimit-Limit | Every response |
| RateLimit-Remaining | Every response |
| RateLimit-Reset | Every response |
| Retry-After | 429 responses only |

429 Response Body

{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Maximum 1000 requests per hour.",
    "retry_after": 30,
    "limit": 1000,
    "reset_at": "2025-07-01T12:00:00Z"
  }
}

Never return 500 or 503 for rate limiting — 429 is the correct status code.
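Assembling the headers and body together might look like the sketch below. `build_429` is a hypothetical helper; `limit` and `reset_epoch` would come from your limiter's state:

```python
import json
import time

# Hypothetical helper building a complete 429 response (status, headers, body).
def build_429(limit: int, reset_epoch: int):
    retry_after = max(0, reset_epoch - int(time.time()))
    headers = {
        "RateLimit-Limit": str(limit),
        "RateLimit-Remaining": "0",      # by definition exhausted on a 429
        "RateLimit-Reset": str(reset_epoch),
        "Retry-After": str(retry_after),
    }
    body = json.dumps({
        "error": {
            "code": "rate_limit_exceeded",
            "message": f"Rate limit exceeded. Maximum {limit} requests per hour.",
            "retry_after": retry_after,
            "limit": limit,
        }
    })
    return 429, headers, body
```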


Rate Limit Tiers

Apply limits at multiple granularities:

| Scope | Key | Example Limit | Purpose |
|---|---|---|---|
| Per-IP | Client IP | 100 req/min | Abuse prevention |
| Per-User | User ID | 1000 req/hr | Fair usage |
| Per-API-Key | API key | 5000 req/hr | Service-to-service |
| Per-Endpoint | Route + key | 60 req/min on /search | Protect expensive ops |

Tiered pricing:

| Tier | Rate Limit | Burst | Cost |
|---|---|---|---|
| Free | 100 req/hr | 10 | $0 |
| Pro | 5,000 req/hr | 100 | $49/mo |
| Enterprise | 100,000 req/hr | 2,000 | Custom |

Evaluate from most specific to least specific: per-endpoint > per-user > per-IP.
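That ordering can be sketched as a simple resolver. `evaluate_limits` and the scope labels below are hypothetical; each `allow` callable stands in for one of the limiter implementations above:

```python
# Check the most specific scope first so a tight per-endpoint limit trips
# before the broader per-user or per-IP ones, and deny on the first hit.
def evaluate_limits(checks) -> bool:
    """checks: ordered list of (scope_label, allow_fn), most specific first."""
    for scope, allow in checks:
        if not allow():
            return False  # this scope's limit is exhausted
    return True

allowed = evaluate_limits([
    ("endpoint:/search", lambda: True),   # 60 req/min on /search: ok
    ("user:42",          lambda: True),   # 1000 req/hr: ok
    ("ip:203.0.113.9",   lambda: False),  # 100 req/min: exhausted
])
# allowed is False: the per-IP limit blocks the request
```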


Distributed Rate Limiting

Redis-based pattern for consistent limiting across instances:

import time

def redis_rate_limit(redis, key: str, limit: int, window: int) -> bool:
    pipe = redis.pipeline()
    now = time.time()
    window_key = f"rl:{key}:{int(now // window)}"
    pipe.incr(window_key)
    pipe.expire(window_key, window * 2)
    results = pipe.execute()
    return results[0] <= limit

Atomic Lua script (prevents race conditions):

local key = KEYS[1]
local limit = tonumber(ARGV[1])
local window = tonumber(ARGV[2])
local current = redis.call('INCR', key)
if current == 1 then
    redis.call('EXPIRE', key, window)
end
return current <= limit and 1 or 0

Never do separate GET then SET — the gap allows overcount.
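The Lua script's semantics can be checked against a pure-Python in-memory model. This is a hypothetical stand-in for reasoning and testing only, not a replacement for the atomic Redis version (a plain dict has no cross-process atomicity):

```python
# Mirrors the script: INCR the key, set expiry only on the first increment,
# allow while count <= limit. `now` is passed in to keep the model testable.
_store = {}  # key -> (count, expires_at)

def lua_like_allow(key: str, limit: int, window: int, now: float) -> bool:
    count, expires_at = _store.get(key, (0, 0.0))
    if now >= expires_at:              # key "expired": start a fresh window
        count, expires_at = 0, now + window
    count += 1
    _store[key] = (count, expires_at)
    return count <= limit
```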


API Gateway Configuration

NGINX:

http {
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
    server {
        location /api/ {
            limit_req zone=api burst=20 nodelay;
            limit_req_status 429;
        }
    }
}

Kong:

plugins:
  - name: rate-limiting
    config:
      minute: 60
      hour: 1000
      policy: redis
      redis_host: redis.internal

Client-Side Handling

Clients must handle 429 gracefully:

async function fetchWithRetry(url: string, maxRetries = 3): Promise<Response> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const res = await fetch(url);
    if (res.status !== 429) return res;

    const retryAfter = res.headers.get('Retry-After');
    const delay = retryAfter
      ? parseInt(retryAfter, 10) * 1000
      : Math.min(1000 * 2 ** attempt, 30000);
    await new Promise(r => setTimeout(r, delay));
  }
  throw new Error('Rate limit exceeded after retries');
}
  • Always respect Retry-After when present
  • Use exponential backoff with jitter when absent
  • Implement request queuing for batch operations
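The delay schedule used above, plus the jitter the second bullet calls for, can be sketched in isolation. `backoff_delay_ms` is a hypothetical helper using "full jitter" (delay drawn uniformly from [0, cap]) so synchronized clients don't retry in lockstep:

```python
import random
from typing import Optional

# Honor Retry-After when the server sends it; otherwise exponential
# backoff capped at 30s, with full jitter applied to the cap.
def backoff_delay_ms(attempt: int, retry_after_s: Optional[int] = None,
                     base_ms: int = 1000, cap_ms: int = 30_000) -> float:
    if retry_after_s is not None:
        return retry_after_s * 1000.0        # server's word is final
    cap = min(base_ms * 2 ** attempt, cap_ms)
    return random.uniform(0, cap)            # spreads retries over [0, cap]
```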

Monitoring

Track these metrics:

  • Rate limit hit rate — % of requests returning 429 (alert if >5% sustained)
  • Near-limit warnings — requests where remaining < 10% of limit
  • Top offenders — keys/IPs hitting limits most frequently
  • Limit headroom — how close normal traffic is to the ceiling
  • False positives — legitimate users being rate limited
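The first two metrics can be computed directly from response records. The shape below is an assumption: each record is a hypothetical `(status, limit, remaining)` tuple pulled from access logs or the rate-limit headers:

```python
# Compute 429 hit rate and near-limit count from a batch of records.
def rate_limit_metrics(records):
    total = len(records)
    throttled = sum(1 for status, _, _ in records if status == 429)
    near_limit = sum(1 for _, limit, remaining in records
                     if limit and remaining < 0.1 * limit)
    return {
        "hit_rate": throttled / total if total else 0.0,  # alert if >0.05 sustained
        "near_limit": near_limit,
    }

m = rate_limit_metrics([(200, 1000, 900), (200, 1000, 50), (429, 1000, 0)])
# m["hit_rate"] is 1/3; m["near_limit"] is 2
```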

Anti-Patterns

| Anti-Pattern | Fix |
|---|---|
| Application-only limiting | Always combine with infrastructure-level limits |
| No retry guidance | Always include Retry-After header on 429 |
| Inconsistent limits | Same endpoint, same limits across services |
| No burst allowance | Allow controlled bursts for legitimate traffic |
| Silent dropping | Always return 429 so clients can distinguish from errors |
| Global single counter | Per-endpoint counters to protect expensive operations |
| Hard-coded limits | Use configuration, not code constants |

NEVER Do

  1. NEVER rate limit health check endpoints — monitoring systems will false-alarm
  2. NEVER use client-supplied identifiers as sole rate limit key — trivially spoofed
  3. NEVER return 200 OK when rate limiting — clients must know they were throttled
  4. NEVER set limits without measuring actual traffic first — you'll block legitimate users or set limits too high to matter
  5. NEVER share counters across unrelated tenants — noisy neighbor problem
  6. NEVER skip rate limiting on internal APIs — misbehaving internal services can take down shared infrastructure
  7. NEVER implement rate limiting without logging — you need visibility to tune limits and detect abuse
