Model Rate Limiter

v1.0.0

Limits large-model request frequency to no more than a specified number of requests per minute (default 5). Triggered when the user mentions phrases such as "limit request frequency", "rate limit", "model throttling", or "no more than X per minute".


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mazhimin-123/model-rate-limiter.

Prompt preview (Install & Setup):
Install the skill "Model Rate Limiter" (mazhimin-123/model-rate-limiter) from ClawHub.
Skill page: https://clawhub.ai/mazhimin-123/model-rate-limiter
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install model-rate-limiter

ClawHub CLI


npx clawhub@latest install model-rate-limiter
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (rate limiting model requests) match the instructions: the SKILL.md only requires maintaining a local JSON state file and enforcing a per-minute counter. No unrelated binaries, credentials, or network access are requested.
Instruction Scope
Runtime instructions are narrow and self-contained: read/modify {workspace}/rate-limit-state.json before model calls. It does not instruct reading other files, environment variables, or sending data to external endpoints. The only side effect is local file I/O in the workspace.
Install Mechanism
No install spec and no code files — nothing will be downloaded or written at install time. This is low-risk and appropriate for an instruction-only skill.
Credentials
The skill requests no environment variables, credentials, or config paths beyond a single workspace-state file. That level of access is proportional to the stated purpose.
Persistence & Privilege
always:false (default) and no attempts to modify other skills or system-wide config. The skill persists state only in its own workspace JSON file; autonomous invocation is allowed (platform default) but not elevated here.
Assessment
This skill is straightforward and appears safe: it keeps rate-limit state in {workspace}/rate-limit-state.json and stays disabled until you explicitly enable it. Before installing, confirm what {workspace} resolves to in your agent (so the file is not written to an unexpected location), ensure the agent has write access only to that workspace, and consider how concurrent agent runs will handle file updates (race conditions). If you need stronger guarantees, ask for explicit file-path validation, atomic update behavior, or central storage designed for concurrent access. Otherwise it is reasonable to install.


Latest: vk978s3y735j0sgwepjjxndfqrx84vsr0
85 downloads · 0 stars · 1 version · Updated 2w ago
v1.0.0 · MIT-0

Model Rate Limiter

Limits AI model request frequency to avoid triggering API rate limits.

Configuration

State file: {workspace}/rate-limit-state.json

{
  "enabled": false,
  "maxPerMinute": 5,
  "windowMs": 60000,
  "timestamps": []
}

Parameters

enabled: toggle (true = on, false = off). Default: false
maxPerMinute: maximum number of requests per minute. Default: 5
windowMs: time window in milliseconds. Default: 60000

Usage

Check current status

Read the rate-limit-state.json file.

Enable rate limiting

Set enabled to true.

Disable rate limiting

Set enabled to false.

Change the request limit

Change the maxPerMinute value, e.g. set it to 3:

{ "maxPerMinute": 3 }

Reset the counter manually

Set timestamps to an empty array [].
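All of the edits above are plain JSON tweaks to the state file. As an illustration only, a small helper could apply them programmatically; the function name, keyword arguments, and default path here are hypothetical, not part of the skill:

```python
import json

def set_rate_limit(path="rate-limit-state.json",
                   enabled=None, max_per_minute=None, reset=False):
    """Hypothetical helper: edit the skill's state file in place."""
    with open(path) as f:
        state = json.load(f)
    if enabled is not None:
        state["enabled"] = enabled          # toggle rate limiting on/off
    if max_per_minute is not None:
        state["maxPerMinute"] = max_per_minute  # change the per-minute cap
    if reset:
        state["timestamps"] = []            # manual counter reset
    with open(path, "w") as f:
        json.dump(state, f, indent=2)
    return state
```

For example, `set_rate_limit(enabled=True, max_per_minute=3)` performs the "enable" and "change limit" steps in one write.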

How It Works

Before each model request:

  1. Read rate-limit-state.json
  2. If enabled: false, allow the request immediately
  3. If enabled: true, prune timestamps older than windowMs
  4. If no quota remains, reject the request and ask the user to wait
  5. Otherwise, record the current timestamp and allow the request
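The check described above can be sketched in Python. This is a minimal illustration under assumptions not stated by the skill: the state file sits in the current directory, timestamps are stored as epoch milliseconds, and the function name allow_request is invented for the example:

```python
import json
import os
import time

STATE_PATH = "rate-limit-state.json"  # resolve {workspace} as appropriate

def allow_request(path=STATE_PATH):
    """Return True if a model request may proceed, recording its timestamp."""
    if not os.path.exists(path):
        return True  # no state file: limiter effectively off
    with open(path) as f:
        state = json.load(f)
    if not state.get("enabled", False):
        return True  # step 2: disabled, pass through
    now_ms = int(time.time() * 1000)
    window = state.get("windowMs", 60000)
    # step 3: prune timestamps that fell outside the window
    recent = [t for t in state.get("timestamps", []) if now_ms - t < window]
    if len(recent) >= state.get("maxPerMinute", 5):
        allowed = False   # step 4: quota exhausted, reject
    else:
        recent.append(now_ms)  # step 5: record this request and allow
        allowed = True
    state["timestamps"] = recent
    with open(path, "w") as f:
        json.dump(state, f)
    return allowed
```

Note that the read-modify-write is not atomic, which is exactly the concurrent-update caveat raised in the assessment above.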

Example Dialogue

User: Enable rate limiting
Action: set enabled to true, reply "Rate limiting enabled, at most 5 requests per minute"

User: Change it to 3 per minute
Action: set maxPerMinute: 3, reply "Adjusted to 3 requests per minute"
