# Expanso pii-redact

v1.0.0 · Redact personally identifiable information from text by replacing sensitive data with placeholders using Expanso Edge pipelines.

## Security Scan

OpenClaw · Benign (medium confidence)

### Purpose & Capability
Name, README, SKILL.md, and pipeline YAMLs consistently implement a PII redaction pipeline that uses an LLM backend (OpenAI or local Ollama). The pipelines, inputs, outputs, and declared backends are coherent with the described purpose.
### Instruction Scope
Runtime instructions are narrowly scoped to running the expanso-edge pipelines (CLI or MCP HTTP endpoint) and optionally deploying to Expanso Cloud. The pipelines send the input text to an LLM (openai_chat_completion) and expect a JSON response with redacted_text/redaction_count/redacted_types. Important privacy notes:

1. When run as provided, the text is sent to OpenAI (remote) unless you run a local Ollama backend.
2. Running the MCP pipeline exposes an HTTP endpoint that will accept arbitrary text and forward it to the model.
3. Metadata (input_hash, input_length, trace_id, timestamp) is computed and included in logs/outputs.

These are expected behaviors for this skill, but they are relevant privacy considerations and should be reviewed before use or deployment to a remote/cloud endpoint.
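The metadata fields above can be illustrated with a minimal sketch. This is not the skill's actual implementation: the hash algorithm (sha256), the trace-ID format (uuid4), and the timestamp format (ISO 8601 UTC) are all assumptions — only the field names come from the scan.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def build_metadata(text: str) -> dict:
    """Sketch of request metadata like the pipeline logs (field names from the scan;
    algorithms and formats are assumptions, not the skill's real code)."""
    return {
        "input_hash": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "input_length": len(text),
        "trace_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

meta = build_metadata("Call me at 555-0100")
print(json.dumps(meta, indent=2))
```

Note that even a hash of the input can matter for compliance: identical inputs produce identical input_hash values, which can reveal that the same sensitive text was processed twice.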
### Install Mechanism
This is an instruction-only skill with no install spec and no downloads or extracted artifacts. That yields a low install risk: nothing is written or executed on install beyond the user running expanso-edge on their machine.
### Credentials
The skill needs an OpenAI API key (OPENAID_API_KEY is not involved; the variable is OPENAI_API_KEY) to use the remote backend, and skill.yaml correctly declares this credential as optional. However, registry-level metadata included with the submission stated "Required env vars: none", which conflicts with the bundled README/pipelines that reference ${OPENAI_API_KEY}. The pipelines will attempt to resolve ${OPENAI_API_KEY} at runtime; if you intend to keep data local, use the local Ollama backend instead of OpenAI and avoid deploying to the cloud. No unrelated credentials or surprising environment variables are requested.
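The practical consequence of the optional credential can be sketched as follows. This is illustrative only — the real pipelines choose a backend in their YAML config, not via code like this; the helper and its logic are hypothetical.

```python
import os

def pick_backend() -> str:
    """Hypothetical helper mirroring the README's two options:
    prefer the remote OpenAI backend only when a key is present."""
    if os.environ.get("OPENAI_API_KEY"):
        return "openai"  # input text leaves your machine
    return "ollama"      # local model; data stays on-host

print(pick_backend())
```

The design point: because the credential is declared optional, the absence of OPENAI_API_KEY should degrade to the local backend rather than fail — verify that the bundled pipelines actually behave this way before relying on it.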
### Persistence & Privilege
The skill does not request persistent or system-wide privileges. Flags show always: false (no forced inclusion), and autonomous invocation is allowed by default. In the provided files, the skill neither modifies other skills' configs nor asks to persist credentials.
### Assessment
This skill appears to be what it claims: a PII-redaction pipeline that sends text to an LLM and returns a redacted result. Before installing or running it, consider:

1. Privacy: by default the pipeline uses the remote OpenAI backend and will send your input text to OpenAI; if you must keep data local, run a local Ollama model or avoid the remote backend.
2. Deployment: starting the MCP server exposes an HTTP endpoint (0.0.0.0:${PORT}) that will accept arbitrary text; only bind it to trusted networks or add access controls.
3. Metadata/logging: the pipeline computes and logs input_hash, trace_id, and redaction counts; make sure those metadata fields meet your compliance needs.
4. Small inconsistency: the registry metadata said no required env vars, but the pipeline/README reference OPENAI_API_KEY; decide whether you'll provide that key or rely on local models.
5. Test with non-sensitive data first to confirm behavior and model outputs.

If you need a stricter privacy posture, prefer the local Ollama backend, or run the pipeline in an isolated environment and do not deploy to Expanso Cloud.
## pii-redact

"Redact PII from text, replacing sensitive data with placeholders"
### Requirements

- Expanso Edge installed (`expanso-edge` binary in PATH)
- Install via: `clawhub install expanso-edge`
### Usage

#### CLI Pipeline

```shell
# Run standalone
echo '<input>' | expanso-edge run pipeline-cli.yaml
```
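Per the security scan, the pipeline returns JSON with redacted_text, redaction_count, and redacted_types. A small sketch of consuming that output — the field names come from the scan, but the sample payload values and placeholder style ([NAME], [PHONE]) are invented for illustration:

```python
import json

# Illustrative payload only; field names are from the security scan,
# the values and placeholder format are made up for demonstration.
raw = """{
  "redacted_text": "Contact [NAME] at [PHONE]",
  "redaction_count": 2,
  "redacted_types": ["NAME", "PHONE"]
}"""

result = json.loads(raw)

# A cheap sanity check before trusting the output downstream:
# the count should match the number of reported types.
if result["redaction_count"] != len(result["redacted_types"]):
    raise ValueError("inconsistent redaction metadata")

print(result["redacted_text"])
```

Run the real pipeline against known non-sensitive inputs first to confirm the actual response shape matches what you parse.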
#### MCP Pipeline

```shell
# Start as MCP server
expanso-edge run pipeline-mcp.yaml
```
#### Deploy to Expanso Cloud

```shell
expanso-cli job deploy https://skills.expanso.io/pii-redact/pipeline-cli.yaml
```
### Files

| File | Purpose |
|---|---|
| `skill.yaml` | Skill metadata (inputs, outputs, credentials) |
| `pipeline-cli.yaml` | Standalone CLI pipeline |
| `pipeline-mcp.yaml` | MCP server pipeline |