Prompt Safe
v1.0.4 · Token-safe prompt assembly with memory orchestration. Use for any agent that needs to construct LLM prompts with memory retrieval. Guarantees no API failure due to token overflow. Implements two-phase context construction, a memory safety valve, and hard limits on memory injection.
⭐ 4 · 2.2k · 10 current · 11 all-time
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
Scanner: OpenClaw
Verdict: Suspicious (medium confidence)

Purpose & Capability
Name, description, SKILL.md and the included Python implementation all describe the same functionality (two-phase prompt assembly, memory retrieval, token safety). The skill does not request unrelated binaries, environment variables, or config paths — the declared requirements are proportionate to the stated purpose.
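For reviewers unfamiliar with the pattern, the described functionality (two-phase assembly with a safety valve) can be sketched roughly as follows. All names here (build_prompt, estimate_tokens, the 0.75 margin default) are illustrative assumptions, not the skill's actual API:

```python
# Hypothetical sketch of two-phase prompt assembly with a memory
# safety valve. Phase 1 reserves budget for the fixed parts (system
# prompt, user input); phase 2 injects retrieved memory only while it
# fits, skipping it entirely otherwise.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def build_prompt(system: str, user: str, memories: list[str],
                 max_tokens: int, margin: float = 0.75) -> str:
    budget = int(max_tokens * margin)
    # Phase 1: fixed, non-negotiable parts.
    remaining = budget - estimate_tokens(system) - estimate_tokens(user)
    # Phase 2: inject memory only while it fits within the budget.
    picked = []
    for m in memories:
        cost = estimate_tokens(m)
        if cost > remaining:
            break  # safety valve: stop injecting memory
        picked.append(m)
        remaining -= cost
    memory_block = ("\n".join(picked) if picked
                    else "[System Notice] Memory skipped due to token budget.")
    return f"{system}\n\n{memory_block}\n\n{user}"
```

Note how the valve drops only memory: the system prompt and user input always survive, which matches the behavior the scan describes.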
Instruction Scope
Instructions are narrowly focused on assembling prompts and handling memory. They instruct you to copy the provided script into your agent and call its build() API, which is expected. Two points to review:
(1) A pre-scan flag indicates 'system-prompt-override' patterns in SKILL.md. While the doc mostly says 'Never downgrade system prompt', the scanner flagged content that could be used in prompt-injection strategies and should be manually inspected.
(2) The memory policy explicitly recommends storing user identity, timezone, and similar PII. That is legitimate for memory systems, but it raises privacy considerations and should be constrained by your data-retention rules.
Install Mechanism
There is no install spec and no downloads; the skill is instruction-only plus a Python file. That is low-risk from an install perspective because nothing external is pulled in at install time. The code would be copied into the agent's codebase when used, so standard code-audit precautions apply.
Credentials
The skill requests no environment variables or credentials. Its memory guidelines permit storing personal data (name, timezone, preferences), which is functionally reasonable for a memory system but requires you to ensure appropriate access controls and retention policies; nothing in the skill asks for unrelated secrets or cloud credentials.
Persistence & Privilege
The skill's always flag is false, and it does not demand persistent platform privileges. It suggests copying code into your agent (normal). It does not attempt to modify other skills or system-wide settings in the provided materials.
Scan Findings in Context
[system-prompt-override] unexpected: The regex scanner detected patterns associated with system-prompt manipulation in SKILL.md. The doc also contains statements like 'Never downgrade system prompt' and inserts '[System Notice]' when memory is skipped. This could be a false positive (policy text referencing the system prompt), but because prompt-injection techniques often reference system prompts, this should be manually reviewed to ensure there are no directives that would allow the skill to alter or override platform/system prompts at runtime.
What to consider before installing or using this skill
1) Audit the code before copying it into any agent. The provided script appears truncated in the packaged file (ends with 'return ful…'), which will cause runtime errors and could be a sign of accidental corruption or tampering. Ensure the build() method returns the assembled prompt (e.g., the full_text or assembled string) and run unit tests with representative inputs.
2) Manually inspect SKILL.md for any phrases that try to change system-level prompts or inject instructions beyond assembling prompts. The scanner flagged a 'system-prompt-override' pattern — this may be a false positive, but verify that no text attempts to override or stealthily alter the agent’s system prompt or control flow.
3) Review memory storage policy for privacy implications. The skill explicitly recommends storing PII-like items (name, timezone, preferences). If you will persist memory, ensure your memory backend enforces encryption, access control, and retention/erasure policies appropriate for PII.
4) Resolve inconsistencies in token-safety settings. The SKILL.md and references disagree on recommended safety margins (0.75 vs 0.85), and the token-estimation heuristics are approximate. Decide on a single safety margin for your deployment and, if your application runs near model limits, prefer an exact BPE estimator (tiktoken or equivalent).
5) Test in a sandbox with mocked get_recent_dialog_fn and memory_search_fn to confirm behavior: ensure no unexpected network calls, no logging of sensitive content to external endpoints, and that the safety valve behaves as documented (skips memory but preserves system prompt and user input).
6) If you lack the ability to audit Python code yourself, don't deploy this into agents that handle sensitive data until a trusted reviewer has validated the implementation and fixed the truncated/broken return. After fixes, re-run static analysis and unit tests.
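For point 1 above, a smoke test along these lines would catch the truncated-return bug before deployment. The build_fn signature and parameter names are assumptions; adapt them to the actual file (a broken build() that falls off the end of the function would return None and fail the first assertion):

```python
# Hypothetical smoke test for the skill's build() API. A build() whose
# final return statement is truncated implicitly returns None, which
# the first assertion catches immediately.

def check_build(build_fn, estimate_fn, max_tokens=1000):
    prompt = build_fn(
        system="You are a helpful assistant.",
        user="What is my timezone?",
        memories=["User timezone: UTC+2"],
        max_tokens=max_tokens,
    )
    assert isinstance(prompt, str) and prompt, \
        "build() must return the assembled prompt string"
    assert estimate_fn(prompt) <= max_tokens, \
        "assembled prompt exceeds the token budget"
    return prompt
```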
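For point 4, picking a single margin and preferring an exact tokenizer when available might look like this. The cl100k_base encoding is one common choice for tiktoken; the chars/4 fallback mirrors the heuristic the skill itself uses:

```python
# Resolve the 0.75-vs-0.85 inconsistency by fixing one deployment-wide
# margin, and use an exact BPE count (tiktoken) when it is installed,
# falling back to the rough chars/4 heuristic otherwise.

SAFETY_MARGIN = 0.75  # choose one value; do not mix 0.75 and 0.85

def count_tokens(text: str) -> int:
    try:
        import tiktoken  # optional dependency: exact BPE count
        return len(tiktoken.get_encoding("cl100k_base").encode(text))
    except ImportError:
        return max(1, len(text) // 4)  # approximate fallback

def effective_budget(model_limit: int) -> int:
    return int(model_limit * SAFETY_MARGIN)
```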
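For point 5, a minimal sandbox harness could mock the two callbacks the skill names and assert the safety-valve contract. The assemble() function below is a stand-in for the skill's real build() API, which you would swap in; only get_recent_dialog_fn and memory_search_fn come from the skill's documentation:

```python
# Sandbox harness with mocked callbacks. The contract under test:
# the system prompt and user input always survive; memory and dialog
# are the only things the safety valve may drop.

def get_recent_dialog_fn():
    return ["user: hi", "assistant: hello"]

def memory_search_fn(query: str):
    return [f"memory hit for {query!r}"]

def assemble(system: str, user: str, budget_chars: int) -> str:
    # Stand-in for the skill's build(); replace with the real API.
    parts = [system] + get_recent_dialog_fn() + memory_search_fn(user) + [user]
    text = "\n".join(parts)
    if len(text) > budget_chars:
        # Safety valve: drop memory/dialog, keep system prompt and input.
        text = "\n".join([system, "[System Notice] Memory skipped.", user])
    return text
```

Run it with a generous and a tight budget and confirm no network calls or sensitive logging occur while it executes (e.g., under a blocked-network sandbox).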
If you want, I can: (a) point out the exact lines in the Python file that look broken and propose a patch to fix the truncated return, (b) search the SKILL.md text for phrases that could be misused to attempt system-prompt changes, or (c) produce a minimal test harness to validate behavior safely.

Like a lobster shell, security has layers — review code before you run it.
Tags: latest · memory · prompt-engineering · token-safety
