prompt-engineer
Pass. Audited by ClawScan on May 6, 2026.
Overview
This is an instruction-only prompt-engineering skill with purpose-aligned examples. The main caveat: copied example code may send prompts or documents to OpenAI and store RAG context.
This skill appears safe to install as an instruction-only prompt-engineering helper. If you copy its example code, review where prompts and documents are sent or stored, especially when using OpenAI APIs or vector databases with private data.
Findings (2)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
If you reuse this example with private prompts, test cases, or documents, that content may be sent to an external AI provider.
The example sends prompt content to the OpenAI API. This is expected for prompt-engineering examples, but copied code may transmit user-provided text to an external provider.
response = openai.ChatCompletion.create(
model=self.model_name,
messages=[{"role": "user", "content": prompt}],
)

Before adapting the example, confirm provider data-handling terms, avoid sending sensitive data unless approved, and add clear user consent for external API calls.
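One way to act on that recommendation is to gate the outbound call behind an explicit consent flag and redact obvious identifiers before anything leaves the process. This is a minimal sketch, not part of the audited skill: the `redact` and `send_prompt` helpers, the email-only redaction rule, and the `gpt-4o-mini` model name are all illustrative assumptions.

```python
import re

# Hypothetical helpers; the audited skill calls openai.ChatCompletion.create directly.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact(text: str) -> str:
    """Mask email addresses before the prompt leaves the process."""
    return EMAIL_RE.sub("[REDACTED_EMAIL]", text)

def send_prompt(prompt: str, *, user_consented: bool, client=None):
    """Refuse to transmit unless the user explicitly opted in."""
    if not user_consented:
        raise PermissionError("External API call requires explicit user consent")
    safe_prompt = redact(prompt)
    if client is None:
        # No network in this sketch: return what would have been sent.
        return safe_prompt
    # Modern OpenAI client style, shown here as an assumption; adapt to your SDK version.
    return client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": safe_prompt}],
    )
```

Email redaction alone is not full PII handling; the point is the shape: consent check first, scrubbing second, transmission last.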
If copied without care, private or untrusted documents could be indexed and later influence answers.
The RAG example stores document chunks in a vector store for later retrieval. This is core to RAG, but users should understand what documents are indexed and reused as context.
self.vector_store = Chroma.from_texts(
texts=chunks,
embedding=self.embeddings,
metadatas=metadatas
)

Scope RAG ingestion to approved documents, separate trusted and untrusted sources, retain citations, and define retention/deletion behavior for stored embeddings.
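The separation of trusted and untrusted sources can be sketched by tagging provenance in the `metadatas` that the `Chroma.from_texts` call above already accepts, then filtering retrieval results by that tag. The `Chunk` dataclass and helper names below are hypothetical, chosen only to illustrate the pattern.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source: str   # citation to retain alongside the stored embedding
    trusted: bool  # provenance flag set at ingestion time

def build_metadatas(chunks):
    """Produce a metadatas list in the shape Chroma.from_texts expects."""
    return [{"source": c.source, "trusted": c.trusted} for c in chunks]

def filter_trusted(results):
    """Drop untrusted chunks before they reach the model's context window."""
    return [r for r in results if r.trusted]

# Illustrative data: one approved internal document, one scraped page.
approved = Chunk("internal policy text", "policy.md", True)
scraped = Chunk("random web page text", "example.com", False)
context = filter_trusted([approved, scraped])
```

With the flag stored in metadata, retention and deletion policies can also be applied per source rather than to the whole collection.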
