caching

Caching strategies, invalidation, eviction policies, HTTP caching, distributed caching, and anti-patterns. Use when designing cache layers, choosing eviction policies, debugging stale data, or optimizing read-heavy workloads.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The skill is pure documentation about caching strategies, HTTP caching, eviction policies, distributed caching, and anti-patterns. It requests no binaries, environment variables, or config paths, which is consistent with a knowledge/guide-style skill.
Instruction Scope
SKILL.md contains explanatory guidance, recipes, and examples for caching behavior. It does not instruct the agent to read unrelated system files, access credentials, or send data to external endpoints.
Install Mechanism
There is no formal install spec (the skill is instruction-only), which is low risk. The README contains an example 'npx add https://github.com/…/tree/…' command that is non-standard: a GitHub tree URL is an odd, possibly invalid argument to npx rather than a tracked package install. This is a documentation oddity rather than executable install metadata in the skill itself; avoid running ad-hoc install commands from unknown URLs.
Credentials
The skill requests no environment variables, credentials, or config paths. That is proportionate for a read-only documentation skill.
Persistence & Privilege
The `always` flag is false, and the skill is instruction-only, with no code to run or persist on install. It does not request elevated or persistent platform privileges.
Assessment
This skill is a documentation/reference skill about caching and appears internally consistent. Before installing, verify the skill source (there's no homepage and the owner ID is opaque). Do not run unfamiliar install commands you find in README (especially ad-hoc npx commands pointing at arbitrary GitHub paths). If you want the content, prefer reviewing SKILL.md/README locally or copying the text rather than executing remote install scripts. If you plan to add code from an external repo, inspect the repository contents first. Overall risk is low for this instruction-only skill, but always avoid providing secrets or running unreviewed installs.


Current version: v1.0.0


SKILL.md

Caching Patterns

A well-placed cache is the cheapest way to buy speed. A misplaced cache is the most expensive way to buy bugs.

Cache Strategies

| Strategy | How It Works | When to Use |
| --- | --- | --- |
| Cache-Aside (Lazy) | App checks cache → miss → reads DB → writes to cache | Default choice — general purpose |
| Read-Through | Cache fetches from DB on miss automatically | ORM-integrated caching, CDN origin fetch |
| Write-Through | Writes go to cache AND DB synchronously | Read-heavy with strong consistency |
| Write-Behind | Writes go to cache, async flush to DB | High write throughput, eventual consistency OK |
| Refresh-Ahead | Cache proactively refreshes before expiry | Predictable access patterns, low-latency critical |
Cache-Aside Flow:

  App ──► Cache ──► HIT? ──► Return data
              │
              ▼ MISS
          Read DB ──► Store in Cache ──► Return data
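The miss path above can be sketched in a few lines. `TTLCache` and `load_from_db` here are hypothetical stand-ins for a real cache client (e.g. Redis) and your data layer:

```python
import time

class TTLCache:
    """Toy in-process cache with per-entry TTL (stand-in for Redis)."""
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy expiry on read
            return None
        return value

    def set(self, key, value, ttl):
        self._store[key] = (value, time.monotonic() + ttl)

def get_user(cache, user_id, load_from_db, ttl=300):
    """Cache-aside: check cache, fall back to DB, populate on miss."""
    key = f"user:{user_id}"
    value = cache.get(key)
    if value is not None:          # HIT
        return value
    value = load_from_db(user_id)  # MISS: read the source of truth
    cache.set(key, value, ttl)     # populate for the next reader
    return value
```

The first call pays the DB round trip; subsequent calls are served from the cache until the TTL expires.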

Cache Invalidation

| Method | Consistency | When to Use |
| --- | --- | --- |
| TTL-based | Eventual (up to TTL) | Simple data, acceptable staleness |
| Event-based | Strong (near real-time) | Inventory, profile updates |
| Version-based | Strong | Static assets, API responses, config |
| Tag-based | Strong | CMS content, category-based purging |
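A minimal sketch of event-based invalidation, assuming dict-like `db` and `cache` objects as stand-ins for real stores:

```python
def update_profile(db, cache, user_id, fields):
    """Event-based invalidation: write to the source of truth,
    then delete the cached copy so the next read repopulates it.
    Deleting (rather than rewriting) the key avoids racing
    concurrent readers with a half-updated value."""
    db[user_id] = {**db.get(user_id, {}), **fields}
    cache.pop(f"user:{user_id}", None)  # invalidate; next read is a miss
```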

TTL Guidelines

| Data Type | TTL | Rationale |
| --- | --- | --- |
| Static assets (CSS/JS/images) | 1 year + cache-busting hash | Immutable by filename |
| API config / feature flags | 30–60 seconds | Fast propagation needed |
| User profile data | 5–15 minutes | Tolerable staleness |
| Product catalog | 1–5 minutes | Balance freshness vs load |
| Session data | Match session timeout | Security requirement |

HTTP Caching

Cache-Control Directives

| Directive | Meaning |
| --- | --- |
| max-age=N | Cache for N seconds |
| s-maxage=N | CDN/shared cache max age (overrides max-age) |
| no-cache | Must revalidate before using cached copy |
| no-store | Never cache anywhere |
| must-revalidate | Once stale, must revalidate |
| private | Only browser can cache, not CDN |
| public | Any cache can store |
| immutable | Content will never change (within max-age) |
| stale-while-revalidate=N | Serve stale for N seconds while fetching fresh |

Common Recipes

# Immutable static assets (hashed filenames)
Cache-Control: public, max-age=31536000, immutable

# API response, CDN-cached, background refresh
Cache-Control: public, s-maxage=60, stale-while-revalidate=300

# Personalized data, browser-only
Cache-Control: private, max-age=0, must-revalidate
ETag: "abc123"

# Never cache (auth tokens, sensitive data)
Cache-Control: no-store

Conditional Requests

| Mechanism | Request Header | Response Header | How It Works |
| --- | --- | --- | --- |
| ETag | If-None-Match: "abc" | ETag: "abc" | Hash-based — 304 if match |
| Last-Modified | If-Modified-Since: <date> | Last-Modified: <date> | Date-based — 304 if unchanged |

Prefer ETag over Last-Modified — ETags detect content changes regardless of timestamp granularity.
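A minimal server-side sketch of ETag revalidation; `respond` is a hypothetical handler, not a framework API:

```python
import hashlib

def respond(body, if_none_match=None):
    """Conditional GET with a strong ETag: hash the representation,
    and return 304 with an empty body when the client's validator
    still matches."""
    etag = '"%s"' % hashlib.sha256(body).hexdigest()[:16]
    if if_none_match == etag:
        return 304, {"ETag": etag}, b""   # client copy is still fresh
    return 200, {"ETag": etag}, body      # full response + validator
```

The first request gets a 200 plus the ETag; replaying that ETag in If-None-Match yields a 304 and saves the body transfer.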


Application Caching

| Solution | Speed | Shared Across Processes | When to Use |
| --- | --- | --- | --- |
| In-memory LRU | Fastest | No | Single-process, bounded memory, hot data |
| Redis | Sub-ms (network) | Yes | Production default — TTL, pub/sub, persistence |
| Memcached | Sub-ms (network) | Yes | Simple key-value at extreme scale |
| SQLite | Fast (disk) | No | Embedded apps, edge caching |
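A bounded in-memory LRU can be sketched with the standard library's `OrderedDict` (for memoizing a single function, `functools.lru_cache` already does this):

```python
from collections import OrderedDict

class LRUCache:
    """Bounded in-process LRU: O(1) get/set, evicts the least
    recently used entry once capacity is exceeded."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key not in self._data:
            return default
        self._data.move_to_end(key)   # mark as most recently used
        return self._data[key]

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the LRU entry
```

The bounded capacity is what makes this production-safe; an unbounded dict used as a cache grows until the process is OOM-killed.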

Redis vs Memcached

| Feature | Redis | Memcached |
| --- | --- | --- |
| Data structures | Strings, hashes, lists, sets, sorted sets | Strings only |
| Persistence | AOF, RDB snapshots | None |
| Pub/Sub | Yes | No |
| Max value size | 512 MB | 1 MB |
| Verdict | Default choice | Pure cache at extreme scale |

Distributed Caching

| Concern | Solution |
| --- | --- |
| Partitioning | Consistent hashing — minimal reshuffling on node changes |
| Replication | Primary-replica — writes to primary, reads from replicas |
| Failover | Redis Sentinel or Cluster auto-failover |

Rule of thumb: 3 primaries + 3 replicas minimum for production Redis Cluster.
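The consistent-hashing row can be illustrated with a toy hash ring. `HashRing` and its vnode count are illustrative, not a production client:

```python
import bisect
import hashlib

class HashRing:
    """Consistent hashing with virtual nodes: each server owns many
    points on a ring, so removing a node only remaps the keys in its
    arcs rather than reshuffling everything (as hash(key) % N would)."""
    def __init__(self, nodes, vnodes=100):
        self._ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes for i in range(vnodes)
        )
        self._points = [h for h, _ in self._ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key):
        # first ring point clockwise from the key's hash (wraps to 0)
        i = bisect.bisect(self._points, self._hash(key)) % len(self._ring)
        return self._ring[i][1]
```

Removing a node from the ring only remaps the keys that were assigned to it; every other key keeps its node, which is exactly the property that makes rebalancing cheap.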


Cache Eviction Policies

| Policy | How It Works | When to Use |
| --- | --- | --- |
| LRU | Evicts least recently accessed | Default — general purpose |
| LFU | Evicts least frequently accessed | Skewed popularity distributions |
| FIFO | Evicts oldest entry | Simple, time-ordered data |
| TTL | Evicts after fixed duration | Data with known freshness window |

Redis default is noeviction. Set maxmemory-policy to allkeys-lru or volatile-lru for production.
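For example (the directive names are real Redis config options; the 2gb limit is illustrative):

```
# redis.conf — bound memory and evict least-recently-used keys
maxmemory 2gb
maxmemory-policy allkeys-lru

# or at runtime, without a restart:
# redis-cli CONFIG SET maxmemory-policy allkeys-lru
```

Use `allkeys-lru` when everything in the instance is a cache; use `volatile-lru` when some keys are durable and only TTL-bearing keys should be evicted.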


Caching Layers

Browser Cache → CDN → Load Balancer → App Cache → DB Cache → Database
| Layer | What to Cache | Invalidation |
| --- | --- | --- |
| Browser | Static assets, API responses | Versioned URLs, Cache-Control |
| CDN | Static files, public API responses | Purge API, surrogate keys |
| Application | Computed results, DB queries, external API | Event-driven, TTL |
| Database | Query plans, buffer pool, materialized views | ANALYZE, manual refresh |

Cache Stampede Prevention

When a hot key expires, hundreds of requests simultaneously hit the database.

| Technique | How It Works |
| --- | --- |
| Mutex / Lock | First request locks, fetches, populates; others wait |
| Probabilistic early expiration | Random chance of refreshing before TTL |
| Request coalescing | Deduplicate in-flight requests for same key |
| Stale-while-revalidate | Serve stale, refresh asynchronously |
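The mutex/lock and request-coalescing rows combine into one in-process sketch; `CoalescingCache` is a hypothetical name, and a multi-server deployment would need a distributed lock instead:

```python
import threading

class CoalescingCache:
    """Stampede protection: the first miss for a key takes a per-key
    lock and loads; concurrent callers block on the same lock, then
    read the freshly populated value instead of hitting the DB."""
    def __init__(self):
        self._data = {}
        self._locks = {}
        self._guard = threading.Lock()

    def _lock_for(self, key):
        with self._guard:
            return self._locks.setdefault(key, threading.Lock())

    def get_or_load(self, key, loader):
        value = self._data.get(key)
        if value is not None:
            return value
        with self._lock_for(key):          # only one loader per key
            value = self._data.get(key)    # re-check: another thread may
            if value is None:              # have populated while we waited
                value = loader()
                self._data[key] = value
            return value
```

However many threads race on a cold hot key, the loader runs once; that is the whole point of coalescing.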

Cache Warming

| Strategy | When to Use |
| --- | --- |
| On-deploy warm-up | Predictable key set, latency-sensitive |
| Background job | Reports, dashboards, catalog data |
| Shadow traffic | Cache migration, new infrastructure |
| Priority-based | Limited warm-up time budget |

Cold start impact: A full cache flush can increase DB load 10–100x. Always warm gradually or use stale-while-revalidate.
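A priority-based, time-budgeted warm-up might look like this sketch (`warm_cache` and its arguments are illustrative; `cache` is any dict-like store):

```python
import time

def warm_cache(cache, keys_by_priority, loader, budget_seconds):
    """Priority-based warm-up: load the hottest keys first and stop
    when the time budget runs out, so a deploy never blocks on a
    full cache rebuild. Returns the number of keys loaded."""
    deadline = time.monotonic() + budget_seconds
    warmed = 0
    for key in keys_by_priority:       # hottest keys first
        if time.monotonic() >= deadline:
            break                      # budget exhausted; stop gracefully
        if key not in cache:           # skip keys that are already warm
            cache[key] = loader(key)
            warmed += 1
    return warmed
```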


Monitoring

| Metric | Healthy Range | Action if Unhealthy |
| --- | --- | --- |
| Hit rate | > 90% | Low → cache too small, wrong TTL, bad key design |
| Eviction rate | Near 0 steady state | High → increase memory or tune policy |
| Latency (p99) | < 1ms (Redis) | High → network issue, large values, hot key |
| Memory usage | < 80% of max | Approaching max → scale up or tune eviction |
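Hit rate falls out of two counters Redis already exposes under `INFO stats` (`keyspace_hits` and `keyspace_misses`); `check_hit_rate` is an illustrative helper, and the 0.90 threshold is the healthy-range figure from the table:

```python
def check_hit_rate(info, threshold=0.90):
    """Compute the hit rate from Redis INFO stats counters and flag
    whether it clears the health threshold. Returns (rate, healthy)."""
    hits = info["keyspace_hits"]
    misses = info["keyspace_misses"]
    total = hits + misses
    rate = hits / total if total else 0.0  # no traffic yet -> 0.0
    return rate, rate >= threshold
```

Note that these counters are cumulative since server start; for alerting, sample them periodically and compute the rate over the delta between samples.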

NEVER Do

  1. NEVER cache without a TTL or invalidation plan — data rots; every entry needs an expiry path
  2. NEVER treat cache as durable storage — caches evict, crash, and restart; always fall back to source of truth
  3. NEVER cache sensitive data (tokens, PII) without encryption — cache breaches expose everything in plaintext
  4. NEVER ignore cache stampede on hot keys — one expired popular key can take down your database
  5. NEVER use unbounded in-memory caches in production — memory grows until OOM-killed
  6. NEVER cache mutable data with immutable Cache-Control — browsers will never re-fetch
  7. NEVER skip monitoring hit/miss rates — you won't know if your cache is helping or hurting

Files: 2 total
