Go Concurrency Web

v2.3.1

Go concurrency patterns for high-throughput web applications including worker pools, rate limiting, race detection, and safe shared state management. Use whe...

by Kevin Anderson (@anderskev)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for anderskev/go-concurrency-web.

Prompt Preview: Install & Setup
Install the skill "Go Concurrency Web" (anderskev/go-concurrency-web) from ClawHub.
Skill page: https://clawhub.ai/anderskev/go-concurrency-web
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install go-concurrency-web

ClawHub CLI


npx clawhub@latest install go-concurrency-web
Security Scan
Capability signals
Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Pending
OpenClaw
Benign
high confidence
Purpose & Capability
The name and description (worker pools, rate limiting, race detection) match the SKILL.md content and reference files. It does not request unrelated credentials, binaries, or config paths.
Instruction Scope
Runtime instructions are limited to reading guidance and running Go tooling (e.g., go test -race, go build -race), inspecting codepaths for bounded concurrency and shutdown. There are no instructions to read unrelated system files, export secrets, or contact unexpected external endpoints.
Install Mechanism
No install spec or code files are present — instruction-only. Nothing is downloaded or written to disk by the skill itself.
Credentials
The skill requests no environment variables or credentials. It references GORACE environment variables only as optional configuration for the race detector (informational), which is proportionate.
Persistence & Privilege
`always` is false; the skill is user-invocable and may be invoked autonomously (platform default). It does not request persistent system presence or modify other skills or system-wide settings.
Assessment
This is a documentation-style, instruction-only skill that recommends running Go tooling (e.g., go test -race, go build -race) and code review steps. It does not ask for secrets or install code. Before allowing autonomous runs, ensure the agent is allowed to access the repository/workspace where you want tests/builds executed (running -race can increase CPU/memory during CI), and verify any CI/test output the agent cites when it claims "no races."

Like a lobster shell, security has layers — review code before you run it.

latest: vk97as04kfpxez57jy09s3jgtj185a02p
165 downloads
0 stars
2 versions
Updated 6d ago
v2.3.1
MIT-0

Go Concurrency for Web Applications

Quick Reference

| Topic | Reference |
| --- | --- |
| Worker Pools & errgroup | references/worker-pools.md |
| Rate Limiting | references/rate-limiting.md |
| Race Detection & Fixes | references/race-detection.md |

Core Rules

  1. Goroutines are cheap but not free — each goroutine consumes ~2-8 KB of stack. Unbounded spawning under load leads to OOM.
  2. Always have a shutdown path — every goroutine you start must have a way to exit. Use context.Context, channel closing, or sync.WaitGroup.
  3. Prefer channels for communication — use channels to coordinate work between goroutines and signal completion.
  4. Use mutexes for state protection — when goroutines share mutable state, protect it with sync.Mutex, sync.RWMutex, or sync/atomic.
  5. Never spawn raw goroutines in HTTP handlers — use worker pools, errgroup, or other bounded concurrency primitives.

Gates (check before merge or review)

Use these sequenced checks for objective pass/fail; do not replace them with “I verified mentally.”

  1. Race detector
    • Run go test -race ./... on packages that changed concurrent code, or go build -race for binaries under test.
    • Pass: exit code 0. If you report “no races,” attach or cite CI output / saved terminal transcript—do not assert cleanliness without that artifact.
  2. Bounded background work from HTTP
    • Inspect handlers and middleware that start work beyond the request goroutine.
    • Pass: every such path uses a bounded primitive (worker pool, buffered channel with documented capacity, errgroup with an explicit concurrency cap)—not unbounded go per incoming request.
  3. Graceful teardown
    • For processes that start long-lived goroutines, trace from shutdown signal (or test defer) to Wait() / channel close / context cancel for each goroutine family.
    • Pass: you can point to the call chain or a test that proves shutdown completes without hang (no orphan goroutines).

Worker Pool Pattern

Use worker pools for background tasks dispatched from HTTP handlers. This bounds concurrency and provides graceful shutdown.

// Worker pool for background tasks (e.g., sending emails)
type WorkerPool struct {
    jobs   chan Job
    wg     sync.WaitGroup
    logger *slog.Logger
}

type Job struct {
    ID      string
    Execute func(ctx context.Context) error
}

func NewWorkerPool(numWorkers int, queueSize int, logger *slog.Logger) *WorkerPool {
    wp := &WorkerPool{
        jobs:   make(chan Job, queueSize),
        logger: logger,
    }

    for i := 0; i < numWorkers; i++ {
        wp.wg.Add(1)
        go wp.worker(i)
    }

    return wp
}

func (wp *WorkerPool) worker(id int) {
    defer wp.wg.Done()
    for job := range wp.jobs {
        wp.logger.Info("processing job", "worker", id, "job_id", job.ID)
        if err := job.Execute(context.Background()); err != nil {
            wp.logger.Error("job failed", "worker", id, "job_id", job.ID, "err", err)
        }
    }
}

// Submit enqueues a job, blocking when the queue is full (backpressure).
// Do not call Submit after Shutdown: sending on a closed channel panics.
func (wp *WorkerPool) Submit(job Job) {
    wp.jobs <- job
}

// Shutdown closes the queue and waits for in-flight jobs to finish.
func (wp *WorkerPool) Shutdown() {
    close(wp.jobs)
    wp.wg.Wait()
}

Usage in HTTP Handler

func (s *Server) handleCreateUser(w http.ResponseWriter, r *http.Request) {
    user, err := s.userService.Create(r.Context(), decodeUser(r))
    if err != nil {
        handleError(w, r, err)
        return
    }

    // Dispatch background task — never spawn raw goroutines in handlers
    s.workers.Submit(Job{
        ID: "welcome-email-" + user.ID,
        Execute: func(ctx context.Context) error {
            return s.emailService.SendWelcome(ctx, user)
        },
    })

    writeJSON(w, http.StatusCreated, user)
}

See references/worker-pools.md for sizing guidance, backpressure, error handling, retry patterns, and errgroup as a simpler alternative.

Rate Limiting

Use golang.org/x/time/rate for token bucket rate limiting. Apply as middleware for global limits or per-IP/per-user limits.

Key points:

  • Global rate limiting protects overall service capacity
  • Per-IP rate limiting prevents individual clients from monopolizing resources
  • Always return 429 Too Many Requests with a Retry-After header

See references/rate-limiting.md for middleware implementation, per-IP limiting, stale limiter cleanup, and API key-based limiting.

Race Detection

Run the race detector in development and CI:

go test -race ./...
go build -race -o myserver ./cmd/server

The race detector catches concurrent reads and writes to shared memory. It does not catch logical races (e.g., TOCTOU bugs) or deadlocks.

See references/race-detection.md for common web handler races, fixing strategies, and CI integration.

Handler Safety

Every incoming HTTP request runs in its own goroutine. Any shared mutable state on the server struct is a potential data race.

// BAD — shared state without protection
type Server struct {
    requestCount int // data race!
}

func (s *Server) handleRequest(w http.ResponseWriter, r *http.Request) {
    s.requestCount++ // concurrent writes = race condition
}

// GOOD — use atomic or mutex
type Server struct {
    requestCount atomic.Int64
}

func (s *Server) handleRequest(w http.ResponseWriter, r *http.Request) {
    s.requestCount.Add(1)
}

// GOOD — use mutex for complex state
type Server struct {
    mu    sync.RWMutex
    cache map[string]*CachedItem
}

func (s *Server) handleGetCached(w http.ResponseWriter, r *http.Request) {
    s.mu.RLock()
    item, ok := s.cache[r.PathValue("key")]
    s.mu.RUnlock()
    // ...
}

Rules for Handler Safety

  • Request-scoped data is safe — r.Context(), the request body, and URL params are isolated per request.
  • Server struct fields are shared — any field on *Server accessed by handlers needs synchronization.
  • Database connections are safe — *sql.DB manages its own connection pool with internal locking.
  • Maps are not safe — use sync.Map or protect with a mutex.
  • Slices are not safe — concurrent append or read/write requires a mutex.

Anti-Patterns

Unbounded goroutine spawning

// BAD — no limit on concurrent goroutines
func (s *Server) handleWebhook(w http.ResponseWriter, r *http.Request) {
    go func() {
        // What if 10,000 requests arrive at once?
        s.processWebhook(r.Context(), decodeWebhook(r))
    }()
    w.WriteHeader(http.StatusAccepted)
}

// GOOD — use a worker pool
func (s *Server) handleWebhook(w http.ResponseWriter, r *http.Request) {
    webhook := decodeWebhook(r)
    s.workers.Submit(Job{
        ID:      "webhook-" + webhook.ID,
        Execute: func(ctx context.Context) error {
            return s.processWebhook(ctx, webhook)
        },
    })
    w.WriteHeader(http.StatusAccepted)
}

Forgetting to propagate context

// BAD — loses cancellation signal
func (s *Server) handleSearch(w http.ResponseWriter, r *http.Request) {
    results, err := s.search(context.Background(), r.URL.Query().Get("q"))
    // ...
}

// GOOD — use request context
func (s *Server) handleSearch(w http.ResponseWriter, r *http.Request) {
    results, err := s.search(r.Context(), r.URL.Query().Get("q"))
    // ...
}

Goroutine leak from missing channel receiver

// BAD — goroutine blocks forever if nobody reads the channel
func fetchWithTimeout(ctx context.Context, url string) (*Response, error) {
    ch := make(chan *Response)
    go func() {
        resp, _ := http.Get(url) // ignores ctx; can block long after cancellation
        ch <- resp               // stuck here if nobody reads
    }()
    select {
    case resp := <-ch:
        return resp, nil
    case <-ctx.Done():
        return nil, ctx.Err() // goroutine leaked!
    }
}

// GOOD — use buffered channel so goroutine can exit
func fetchWithTimeout(ctx context.Context, url string) (*Response, error) {
    ch := make(chan *Response, 1) // buffered — goroutine can always send
    go func() {
        resp, _ := http.Get(url)
        ch <- resp
    }()
    select {
    case resp := <-ch:
        return resp, nil
    case <-ctx.Done():
        return nil, ctx.Err()
    }
}

Using time.Sleep for coordination

// BAD — sleeping to wait for goroutines
go doWork()
time.Sleep(5 * time.Second) // hoping it finishes

// GOOD — use sync primitives
var wg sync.WaitGroup
wg.Add(1)
go func() {
    defer wg.Done()
    doWork()
}()
wg.Wait()
