Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Ragie.ai-RAG

v1.0.2

Execute Retrieval-Augmented Generation (RAG) using Ragie.ai. Use this skill whenever the user wants to: - Search their knowledge base - Ask questions about u...

0 stars · 562 downloads · 1 current · 1 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for hatim-be/ragie-rag.

Prompt preview: Install & Setup
Install the skill "Ragie.ai-RAG" (hatim-be/ragie-rag) from ClawHub.
Skill page: https://clawhub.ai/hatim-be/ragie-rag
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install ragie-rag

ClawHub CLI


npx clawhub@latest install ragie-rag
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description (Ragie.ai RAG) align with the included scripts: ingest.py, manage.py, and retrieve.py implement ingestion, listing/status/delete, and retrieval against https://api.ragie.ai. Requiring a single API key (RAGIE_API_KEY) and Python is consistent with the stated purpose. However, the registry-level summary at the top of the submission (required env vars: none; primary credential: none) contradicts the SKILL.md and the scripts, both of which require RAGIE_API_KEY. This mismatch between published metadata and actual runtime requirements is an inconsistency that should be resolved.
Instruction Scope
SKILL.md gives explicit, narrow instructions to run the included Python scripts for ingestion, management, and retrieval. The scripts only access user-provided files/URLs and the RAGIE API, and do not attempt to read unrelated system files. They use dotenv (so they will load a .env file if present) and will POST files or JSON to api.ragie.ai as expected by the skill's purpose.
Install Mechanism
No install spec is provided (instruction-only install), and the code is shipped as plain Python scripts. The scripts depend on python3 plus two Python packages (requests, python-dotenv) as declared in SKILL.md metadata; however no automated install is provided and the registry summary did not list these. This is low risk functionally but operationally you'll need to ensure the runtime has python3 and the required packages installed.
Credentials
The only secret the skill needs is RAGIE_API_KEY, which is proportionate to a RAG API integration. The scripts load environment variables via python-dotenv (load_dotenv), so they may read a local .env file; this is standard but you should ensure .env is not committed. The main proportional concern is the metadata inconsistency: the registry reported no required env/credentials while the skill actually requires the API key.
Persistence & Privilege
The skill does not request permanent presence (always: false) and does not modify other skills or system-wide settings. It only executes on invocation and runs CLI scripts that interact with Ragie. No elevated privileges are requested.
What to consider before installing
  • Metadata mismatch: the registry summary claims no required env vars or credentials, but the SKILL.md and bundled scripts require RAGIE_API_KEY and python3. Confirm which metadata is authoritative and ask the publisher to fix the registry entry before installing.
  • Secrets: the scripts will send uploaded file contents and metadata to https://api.ragie.ai using whatever RAGIE_API_KEY you provide. Only use an API key you trust to grant that service access to your documents; avoid ingesting secrets or PII unless you trust Ragie.
  • Local .env: the scripts call load_dotenv(), so a local .env file can supply the key. Ensure you don't commit .env to source control, and rotate the key if it is compromised.
  • Dependencies: the package expects requests and python-dotenv. There is no automated installer; ensure the execution environment has python3 and these packages (pip install requests python-dotenv), or run the scripts in an isolated environment.
  • Inspect and control ingress: ingest.py opens user-specified file paths and posts them to Ragie; review and sanitize any files you plan to upload. Consider running the scripts locally rather than granting broad agent-level execution if you have sensitive data.
  • If you want higher assurance: ask the publisher to correct the registry metadata, provide a reproducible install spec (or a vetted package), and sign the release. With those fixes in place, the skill appears coherent and appropriate for RAG use.

Confidence note: medium — the code and instructions are consistent with the described purpose, but the contradictory registry metadata reduces confidence. If the registry metadata is corrected to declare RAGIE_API_KEY and python3/requests/python-dotenv, confidence would increase to high.

Like a lobster shell, security has layers — review code before you run it.

latest: vk9756mphtbtf3twr3v0sywc1j581r182
562 downloads
0 stars
3 versions
Updated 14h ago
v1.0.2
MIT-0

Ragie.ai RAG Skill (OpenClaw Optimized)

This skill enables grounded question answering using Ragie.ai as a RAG backend.

Ragie handles:

  • Document chunking
  • Embedding
  • Vector indexing
  • Retrieval
  • Optional reranking

The agent handles:

  • Deciding when to ingest
  • Triggering retrieval
  • Constructing grounded prompts
  • Producing final answers

Core Principles

  1. Never answer without retrieval.
  2. Never hallucinate information not present in retrieved chunks.
  3. Always cite the document_name when referencing specific facts.
  4. If retrieval returns zero relevant chunks, explicitly say:

    "I don't have that information in the current knowledge base."

  5. Do not expose API keys or raw API payloads in final answers.

Deterministic Workflow

Case A — User Provides a File or URL

IF the user provides:

  • A file
  • A document path
  • A PDF/URL to ingest

THEN:

  1. Execute ingestion:

    python skills/scripts/ingest.py --file <path> --name "<document_name>"
    

    OR

    python skills/scripts/ingest.py --url "<url>" --name "<document_name>"
    
  2. Capture returned document_id.

  3. Poll document status:

    python skills/scripts/manage.py status --id <document_id>
    

    Repeat until status == ready.

  4. Proceed to Retrieval (Case C).
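The ingest-then-poll loop above can be sketched in Python. This is a minimal illustration, not the bundled scripts: `check_status` is a hypothetical stand-in for whatever `manage.py status` returns, injected as a callable so the loop can be exercised without network access.

```python
import time

def wait_until_ready(document_id, check_status, poll_interval=2.0, max_polls=30):
    """Poll a document's status until it is 'ready' (or 'failed').

    `check_status` is any callable mapping a document_id to a status
    string, e.g. a wrapper around `manage.py status --id <document_id>`.
    """
    for _ in range(max_polls):
        status = check_status(document_id)
        if status == "ready":
            return True
        if status == "failed":
            return False
        time.sleep(poll_interval)
    raise TimeoutError(f"document {document_id} not ready after {max_polls} polls")

# Example with a fake status source that becomes ready on the third poll:
statuses = iter(["indexing", "indexing", "ready"])
assert wait_until_ready("doc_abc123", lambda _id: next(statuses), poll_interval=0) is True
```

Only proceed to retrieval when this returns True; a False or TimeoutError corresponds to the "report failure, do not retrieve" rule in the error-handling section.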


Case B — User Requests Document Management

List documents

python skills/scripts/manage.py list

Check document status

python skills/scripts/manage.py status --id <document_id>

Delete a document

python skills/scripts/manage.py delete --id <document_id>

Return structured results to the user.


Case C — Retrieval (Grounded Question Answering)

Execute:

python skills/scripts/retrieve.py \
  --query "<user_question>" \
  --top-k 6 \
  --rerank

Optional flags:

  • --partition <name>
  • --filter '{"key":"value"}'
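Per the API reference below, these flags ultimately map onto a POST /retrievals request body. A hedged sketch of how such a payload might be assembled — the field names here mirror the CLI flags and are assumptions, not a verified copy of retrieve.py:

```python
import json

def build_retrieval_payload(query, top_k=6, rerank=True, partition=None, filter_json=None):
    """Assemble a request body for POST /retrievals from CLI-style options."""
    payload = {"query": query, "top_k": top_k, "rerank": rerank}
    if partition:
        payload["partition"] = partition
    if filter_json:
        payload["filter"] = json.loads(filter_json)  # mirrors --filter '{"key":"value"}'
    return payload

payload = build_retrieval_payload("What is the refund policy?", filter_json='{"team": "legal"}')
```

Keeping optional fields out of the payload unless set avoids sending explicit nulls the API may reject.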

Retrieval Output Format

Expected output:

[
  {
    "text": "...",
    "score": 0.87,
    "document_name": "Policy Handbook",
    "document_id": "doc_abc123"
  }
]
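Given output in that shape, the agent may want to drop low-confidence chunks before building the grounded prompt. A small sketch — the 0.5 score threshold is an arbitrary example, not a value mandated by the skill:

```python
def select_chunks(chunks, min_score=0.5):
    """Keep chunks at or above a score threshold, highest score first."""
    kept = [c for c in chunks if c.get("score", 0.0) >= min_score]
    return sorted(kept, key=lambda c: c["score"], reverse=True)

chunks = [
    {"text": "Refunds within 30 days.", "score": 0.87,
     "document_name": "Policy Handbook", "document_id": "doc_abc123"},
    {"text": "Unrelated note.", "score": 0.12,
     "document_name": "Misc", "document_id": "doc_xyz789"},
]
assert [c["document_name"] for c in select_chunks(chunks)] == ["Policy Handbook"]
```

An empty result from this filter should trigger the knowledge-gap response rather than an answer.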

Grounded Prompt Construction

After retrieval:

  1. Extract all chunk text.
  2. Concatenate with separators.
  3. Construct this prompt:
SYSTEM:
You are a helpful assistant.
Answer using ONLY the context provided below.
If the context does not contain the answer, say:
"I don't have that information in the current knowledge base."

CONTEXT:
[chunk 1 text]
---
[chunk 2 text]
---
...

USER QUESTION:
{original user question}
  4. Generate final answer.
  5. Cite document_name when referencing information.
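The prompt-construction steps above can be sketched as a simple template function. This is a minimal illustration; the wording is taken from the prompt shown above, and the chunk shape from the retrieval output format.

```python
FALLBACK = "I don't have that information in the current knowledge base."

def build_grounded_prompt(question, chunks):
    """Join chunk texts with '---' separators into the grounded prompt."""
    context = "\n---\n".join(c["text"] for c in chunks)
    return (
        "SYSTEM:\n"
        "You are a helpful assistant.\n"
        "Answer using ONLY the context provided below.\n"
        "If the context does not contain the answer, say:\n"
        f'"{FALLBACK}"\n\n'
        f"CONTEXT:\n{context}\n\n"
        f"USER QUESTION:\n{question}"
    )

prompt = build_grounded_prompt("What is the refund window?",
                               [{"text": "Refunds within 30 days."}])
assert "CONTEXT:\nRefunds within 30 days." in prompt
```

Keeping the fallback string as a single constant ensures the zero-chunk response and the in-prompt instruction never drift apart.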

Output Contract

The final response MUST:

  • Be grounded only in retrieved chunks
  • Cite document_name for factual claims
  • Avoid hallucinations
  • Avoid mentioning internal execution steps
  • Avoid exposing API keys or raw responses
  • Clearly state when information is missing

If no chunks are returned:

I don't have that information in the current knowledge base.

API Reference

Base URL:

https://api.ragie.ai
Operation         Method   Endpoint
Ingest file       POST     /documents
Ingest URL        POST     /documents/url
Retrieve chunks   POST     /retrievals
List documents    GET      /documents
Get document      GET      /documents/{id}
Delete document   DELETE   /documents/{id}

Error Handling

HTTP Code   Meaning              Action
404         Document not found   Verify document_id
422         Invalid payload      Validate request schema
429         Rate limited         Retry with backoff
5xx         Server error         Retry or check Ragie status

If ingestion fails:

  • Report failure clearly.
  • Do not proceed to retrieval.

If retrieval fails:

  • Retry once.
  • If still failing, inform user.
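The retry policy above (retry 429/5xx with backoff, surface client errors immediately) can be sketched as a generic helper. `do_request` is a hypothetical injected callable returning an HTTP status code, so the logic is testable without touching the real API; the exponential delay schedule is an assumption, not specified by the skill.

```python
import time

def request_with_retries(do_request, max_retries=3, base_delay=1.0):
    """Call `do_request` and retry on 429/5xx with exponential backoff.
    Client errors such as 404 and 422 are returned without retrying."""
    for attempt in range(max_retries + 1):
        status = do_request()
        if status < 400:
            return status
        if status == 429 or status >= 500:
            if attempt < max_retries:
                time.sleep(base_delay * (2 ** attempt))
                continue
        return status  # give up: client error, or retries exhausted

# Fake endpoint: rate-limited twice, then succeeds.
codes = iter([429, 429, 200])
assert request_with_retries(lambda: next(codes), base_delay=0) == 200
```

For retrieval, `max_retries=1` matches the "retry once, then inform the user" rule above.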

Decision Rules Summary

  1. If user uploads content → ingest → wait until ready → retrieve.
  2. If user asks question → retrieve immediately.
  3. If zero chunks → state knowledge gap.
  4. Always use reranking unless explicitly disabled.
  5. Never answer without retrieval.

Advanced Usage

  • Use metadata filter to narrow retrieval scope.
  • Use partitions to separate tenant data.
  • Use recency_bias only when time relevance matters.
  • Adjust top_k depending on query complexity.

Security

  • API keys must be loaded from environment variables.
  • .env must not be committed.
  • Do not log sensitive headers.
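A minimal sketch of the key-loading guard these rules imply. The scan notes that the bundled scripts call python-dotenv's load_dotenv() before reading the environment; the fail-fast check and error message here are illustrations, not copied from the scripts.

```python
import os

def get_ragie_api_key(env=os.environ):
    """Fetch RAGIE_API_KEY, failing fast with a clear error if it is unset.

    The bundled scripts call load_dotenv() first, so a local .env file can
    populate the environment; that file must never be committed or logged.
    """
    key = env.get("RAGIE_API_KEY")
    if not key:
        raise RuntimeError("RAGIE_API_KEY is not set; add it to your environment or .env")
    return key
```

Raising early keeps a missing key from surfacing later as an opaque 401 from the API.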

Summary

This skill provides:

  • Deterministic ingestion
  • Deterministic retrieval
  • Strict grounded answering
  • Complete Ragie lifecycle management
  • Safe and hallucination-resistant RAG execution

End of Skill.
