stream-formatter
v1.0.1
LLM streaming output formatter with automatic buffering, format correction, sentence-break optimization, and Markdown rendering to improve chat UX.
Security Scan
OpenClaw
Benign
high confidence

Purpose & Capability
The name and description (a streaming formatter for LLM output) align with the provided SKILL.md, README, and index.ts implementation. All declared capabilities (buffering, Markdown fixes, sentence-break logic, deduplication) are implemented in the code. No unrelated credentials, binaries, or config paths are requested.
Instruction Scope
SKILL.md instructs the agent to call the skill with actions init/process/reset and to supply streaming chunks from llm.streamResponse. The instructions do not ask the agent to read files, environment variables, or system state outside the scope of streaming text processing. There is no guidance to transmit data to external endpoints beyond returning the formatted output.
Install Mechanism
There is no explicit install spec. The code imports zod from a pinned deno.land URL (https://deno.land/x/zod@v3.22.4). This is a common, traceable dependency host and the version is pinned; it does cause a runtime fetch of third-party code when executed. This is expected for a Deno/TypeScript skill but is worth noting since it pulls code at runtime.
Credentials
The skill declares no required environment variables, credentials, or config paths and the code does not read process.env or other secrets. The requested privileges are minimal and proportional to its stated purpose.
Persistence & Privilege
The `always` flag is false, and the skill does not request persistent system changes, modify other skills, or write to global agent settings. Its state is limited to in-memory buffers (buffer, lastOutput, config).
Assessment
This skill appears coherent and low-risk: it only formats streaming text, uses an in-memory buffer, and requires no secrets. Two practical checks before installing: (1) confirm your runtime environment is comfortable fetching a pinned module from deno.land (the skill imports zod@v3.22.4); if your policy forbids remote imports, request a vendored/local dependency instead; (2) test the skill with non-sensitive text to validate behavior (deduplication and buffer handling) before using it with production data. If you need higher assurance, you can audit or vendor the zod dependency or run the code in a sandboxed environment.
✨ Streaming Output Formatter
Core Highlights
- 🚀 Real-time streaming optimization: fixes output as it streams, with no need to wait for the full LLM response; latency under 10 ms
- 📝 Automatic format repair: fixes Markdown formatting errors, including incomplete code blocks, links, and lists
- 💬 Smart sentence breaking: emits complete sentences only, never half a word or half a sentence, for a much better reading experience
- 🚫 Deduplication: automatically strips content the LLM emits repeatedly, avoiding cluttered output
🎯 Use Cases
- Any conversational agent or chatbot
- Real-time content generation
- Streaming Markdown rendering
- Any scenario where user-interaction quality matters
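The sentence-break idea above can be sketched in a few lines of TypeScript: chunks accumulate in a buffer, and only text up to the last complete sentence terminator is released. This is a hypothetical simplification (the class and terminator set are illustrative, not the skill's actual implementation, which also tracks Markdown state):

```typescript
// Sentence terminators in both English and Chinese punctuation.
const TERMINATORS = /[.!?。!?]/;

class SentenceBuffer {
  private buffer = "";

  // Append a chunk; return text up to the last complete sentence,
  // keeping any trailing fragment buffered.
  push(chunk: string): string {
    this.buffer += chunk;
    let cut = -1;
    for (let i = this.buffer.length - 1; i >= 0; i--) {
      if (TERMINATORS.test(this.buffer[i])) { cut = i; break; }
    }
    if (cut === -1) return "";
    const out = this.buffer.slice(0, cut + 1);
    this.buffer = this.buffer.slice(cut + 1);
    return out;
  }

  // Force out whatever remains (what the skill's `flush: true` does).
  flush(): string {
    const rest = this.buffer;
    this.buffer = "";
    return rest;
  }
}
```

Incomplete fragments such as "Hello wor" stay buffered until a later chunk completes the sentence, which is why the consumer never sees half a word.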
📝 Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| action | string | Yes | Operation type: init/process/reset |
| options | object | No | Initialization options |
| chunk | string | No | Required for process: the streaming chunk returned by the LLM |
| flush | boolean | No | Optional for process: force-flush all buffered content |
💡 Ready-to-Use Examples
Basic usage
// Initialize
await skills.streamFormatter({ action: "init" });
// Process the stream
for await (const chunk of llm.streamResponse) {
  const result = await skills.streamFormatter({
    action: "process",
    chunk: chunk.text
  });
  if (result.output) {
    sendToUser(result.output); // Emit complete sentences only
  }
}
// Finally, force-flush the buffer
const final = await skills.streamFormatter({
  action: "process",
  chunk: "",
  flush: true
});
if (final.output) {
  sendToUser(final.output);
}
Custom configuration
await skills.streamFormatter({
  action: "init",
  options: {
    buffer_size: 20,
    format_markdown: true,
    fix_incomplete_sentences: true
  }
});
🔧 Implementation Notes
- Lightweight buffer design; memory footprint under 1 KB
- Recognizes both Chinese and English punctuation; sentence-break accuracy above 95%
- Built-in repair rules for common Markdown formatting errors
- Zero external dependencies; no impact on streaming performance
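One common repair rule of the kind listed above can be sketched as follows: if the text to be rendered contains an odd number of code-fence lines, the final code block was never closed, so a closing fence is appended before rendering. The function name is hypothetical; this is an illustration of the technique, not the skill's actual rule set:

```typescript
// Close a dangling Markdown code fence: count lines that start with
// three backticks; an odd count means the last block is unterminated.
function closeDanglingFence(text: string): string {
  const fences = (text.match(/^`{3}/gm) ?? []).length;
  return fences % 2 === 1 ? text + "\n```" : text;
}
```

A streaming formatter would apply a rule like this only when flushing partial output mid-block, so the renderer never receives an unterminated fence.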