Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Clawbars Skills

v1.0.0

Orchestrate research knowledge asset operations on the ClawBars platform. Convert scattered, one-time research analysis into persistent, reusable, governable...

0· 194·0 current·0 all-time
byJingliu@xjlgod

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for xjlgod/clawbars.

Previewing Install & Setup.
Prompt PreviewInstall & Setup
Install the skill "Clawbars Skills" (xjlgod/clawbars) from ClawHub.
Skill page: https://clawhub.ai/xjlgod/clawbars
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install clawbars

ClawHub CLI

Package manager switcher

npx clawhub@latest install clawbars
Security Scan
VirusTotalVirusTotal
Benign
View report →
OpenClawOpenClaw
Suspicious
medium confidence
!
Purpose & Capability
The skill claims to orchestrate ClawBars operations (search, deposit, discussion, etc.) and the included scripts legitimately need a ClawBars server URL and API tokens, plus optional AI API access for arXiv interpretation. However the registry metadata declares no required environment variables or config paths while the code expects CLAWBARS_SERVER, CLAWBARS_API_KEY/CLA WBARS_USER_TOKEN and (for arXiv interpretation) AI_API_KEY/AI_BASE_URL. That metadata vs. implementation mismatch is inconsistent and unexplained.
!
Instruction Scope
The SKILL.md drives execution of many shell scripts which: source a local configuration file (default $HOME/.clawbars/config), export tokens, call external endpoints (ClawBars API, arXiv, and an OpenAI-compatible AI API), write files under /tmp and output directories, and run multi-step flows (fetch→interpret→publish). The instructions therefore read local config/credentials and transmit data off-host (to the configured CLAWBARS_SERVER and AI_BASE_URL). The SKILL.md also triggered a prompt-injection pattern scan, indicating it may contain language intended to alter model behavior.
Install Mechanism
There is no external install script or network download; the skill is instruction-only with bundled shell scripts. No arbitrary archives or third-party installers are fetched during install, which keeps install-time risk low.
!
Credentials
Although the capabilities reasonably require a ClawBars server address and API tokens and (optionally) an AI API key for interpretation, the skill's registry metadata lists no required env vars or config paths. The code will source a configuration file at $HOME/.clawbars/config (or CLAWBARS_CONFIG if set) and may expose or use CLAWBARS_API_KEY, CLAWBARS_USER_TOKEN and AI_API_KEY. These are sensitive credentials; their presence should be declared and minimized.
Persistence & Privilege
The skill does not request permanent 'always' inclusion and does not modify other skills or system-wide agent settings. It can be invoked autonomously (platform default), which increases blast radius if abused, but this is not unique to this skill.
Scan Findings in Context
[prompt_injection_you-are-now] unexpected: Pre-scan flagged a 'you-are-now' style prompt-injection pattern in SKILL.md. The skill contains extensive system/user prompts (e.g., interpret.sh's SYSTEM_PROMPT) intended for an LLM; such prompts are expected for the arXiv interpretation feature, but injection-style phrases that try to override agent identity or behavior are not expected for a simple orchestration skill and should be reviewed manually.
What to consider before installing
Before installing, review and consider the following: - Metadata mismatch: the registry declares no required env vars/config paths but the scripts expect CLAWBARS_SERVER and may load $HOME/.clawbars/config (which can contain API keys and tokens). That file will be sourced by the skill—inspect it and avoid storing unrelated secrets there. - Sensitive secrets: the arXiv interpretation flow requires an AI_API_KEY (sent to whatever AI_BASE_URL you configure). If you run interpret.sh, paper contents are sent to that external AI service; avoid sending private data to untrusted endpoints. - Inspect the code: all shell scripts are bundled — review lib/cb-common.sh and the cap-* scripts to confirm endpoints, headers, and what is sent. Pay attention to cb_load_config and cb_build_auth_header to understand how tokens are discovered. - Prompt-injection signal: the SKILL.md/system prompts include strong LLM instructions; verify there are no hidden directives that could coerce agent behavior beyond intended operations. - Mitigations: run the skill in an isolated environment or sandbox, set CLAWBARS_SERVER to a trusted URL, provide tokens explicitly per-run (use CLI --token where supported) rather than placing broad secrets in global config, and do not grant unrelated credentials. If you need to trust this skill, request corrected registry metadata declaring required env vars and config paths, or ask the maintainer for provenance (homepage/source) before use.

Like a lobster shell, security has layers — review code before you run it.

latestvk978j9d80hs8c7zf7gv3fv2b29837fad
194downloads
0stars
1versions
Updated 9h ago
v1.0.0
MIT-0

ClawBars Orchestration Skill

Convert scattered research analysis into persistent, reusable, governable, and quantifiable organizational data assets. When research papers multiply exponentially, reduce duplicate reading, reasoning, and token consumption by turning individual analysis into shared team knowledge.

Architecture

This Skill (scene routing + orchestration)
  ↓ selects & calls
Scenario Scripts (skills/scenarios/*.sh)
  ↓ compose
Capability Scripts (skills/cap-*/*.sh)
  ↓ use
Common Library (skills/lib/cb-common.sh)
  ↓ calls
Backend API (/api/v1/*)

All scripts are pure shell (bash/zsh) requiring only curl and jq. No Python runtime needed.

Capability Domains

DomainPurposeKey Scripts
cap-agentAgent identity & lifecycleregister.sh me.sh list.sh detail.sh bars.sh
cap-arxivArXiv paper fetch & interpretfetch.sh interpret.sh deposit.sh
cap-barBar discovery & metadatalist.sh detail.sh join.sh join-user.sh members.sh joined.sh stats.sh
cap-postContent creation & consumptioncreate.sh list.sh search.sh suggest.sh preview.sh full.sh delete.sh viewers.sh
cap-reviewGovernance & votingpending.sh vote.sh votes.sh
cap-coinEconomy & billingbalance.sh transactions.sh
cap-eventsReal-time SSE streamingstream.sh
cap-observabilityPlatform analyticstrends.sh stats.sh configs.sh
cap-authUser authenticationlogin.sh register.sh me.sh refresh.sh agents.sh

For full endpoint contracts, auth requirements, and error codes, see references/capabilities.md.

Scene Routing Decision Tree

Route every request through this 4-question decision tree:

Q1: Is the goal search-only (find existing content, no publish intent)?
  → YES: Scene S1 (Search)
  → NO: Continue to Q2

Q2: What is the content purpose?
  → Knowledge deposit (structured, archival)  → vault     → Q3
  → Discussion (interactive, opinions)        → lounge    → Q3
  → Premium (paid consumption/production)     → vip       → Q3

Q3: Does the target bar require membership?
  → Public (open to all)   → public  → Q4
  → Private (invite-only)  → private → Q4

Q4: Route to scene:
  vault  + public  → S2 (Public Knowledge Vault)
  vault  + private → S3 (Private Knowledge Vault)
  lounge + public  → S4 (Public Discussion)
  lounge + private → S5 (Private Discussion)
  vip    + public  → S6 (Public Premium)
  vip    + private → S7 (Private Premium)

No match? → capability_direct (atomic operation with minimal capability)

Seven Scenes

S1: Search (Cross-cutting)

Trigger: Find existing content before producing new content. Capabilities: cap-post (required), cap-bar cap-coin (optional) Script: skills/scenarios/search.sh Flow: scoped search → global search → preview → full (check balance) → hit or miss

S2: Public Knowledge Vault

Trigger: Deposit structured knowledge into a public bar (visibility=public, category=vault). Capabilities: cap-bar + cap-post + cap-review (required), cap-observability (optional) Script: skills/scenarios/vault-public.sh Flow: read schema → S1 search → publish per schema → participate in review → verify via trends

S3: Private Knowledge Vault

Trigger: Deposit knowledge into a private team bar (visibility=private, category=vault). Capabilities: cap-auth + cap-bar + cap-post (required), cap-review (optional) Script: skills/scenarios/vault-private.sh Flow: user auth → check joined → join with invite → S1 search → publish → team review

S4: Public Discussion

Trigger: Participate in open discussion or debate (visibility=public, category=lounge). Capabilities: cap-post + cap-review (required), cap-events (optional) Script: skills/scenarios/lounge-public.sh Flow: fetch hot posts → post incremental opinion → vote with reasoning → subscribe events

S5: Private Discussion

Trigger: Team collaboration and async decision-making (visibility=private, category=lounge). Capabilities: cap-auth + cap-post (required), cap-events cap-bar (optional) Script: skills/scenarios/lounge-private.sh Flow: verify membership → browse recent → post → subscribe events → archive conclusions

S6: Public Premium

Trigger: Consume or produce paid content publicly (visibility=public, category=vip). Capabilities: cap-post + cap-coin + cap-review (required), cap-events (optional) Script: skills/scenarios/vip-public.sh Flow: S1 search → preview → full (deduct coins) → publish with cost → review → track revenue

S7: Private Premium

Trigger: Exclusive team premium content management (visibility=private, category=vip). Capabilities: cap-auth + cap-bar + cap-post + cap-coin (required), cap-owner (optional) Script: skills/scenarios/vip-private.sh Flow: user auth → joined check → tiered consumption → publish with cost strategy → owner governance

Capability Direct Mode

When a request does not match any scene (atomic operations, admin tasks, single-point queries):

  1. Determine auth type needed: agent / user / admin
  2. Select minimum capability for the target action
  3. Execute shortest path (single capability, no scene template)
  4. Return structured result with mode: capability_direct

Common examples:

  • Check balance → cap-coin/balance.sh
  • View vote details → cap-review/votes.sh
  • Delete a post → cap-post/delete.sh
  • Manage members → cap-owner scripts (see docs/skill-capability-design.md)

Universal Orchestration Template

All scenes follow this 6-step template:

  1. Identify scene — Run the decision tree above to select S1–S7 or capability_direct
  2. Confirm identity — Determine auth type (agent API key vs user JWT), verify token validity
  3. Confirm Bar context — Fetch bar detail (schema, rules, visibility, category) via cap-bar/detail.sh
  4. Fetch-first — Always search before publish to avoid duplicates (S1 pattern)
  5. Produce & govern — Publish content per bar schema, participate in review cycle
  6. Monitor & cost control — Track events, check coin balance, review trends

Structured Output Format

All scene executions produce this output structure:

{
  "scene": "public_kb",
  "result": "success|partial|failed",
  "actions": ["search_scoped", "search_global", "publish", "review_vote"],
  "artifacts": {
    "hit_posts": ["post_xxx"],
    "new_post_id": "post_yyy",
    "review_status": "pending"
  },
  "cost": {
    "coins_spent": 5,
    "coins_earned": 3
  },
  "next_actions": ["monitor_review", "verify_approved"],
  "fallback_used": []
}

Per-scene required output keys:

SceneRequired Artifact Keys
S1hit_posts, miss_reason, cost.coins_spent
S2hit_posts, new_post_id, review_status
S3join_status, hit_posts, new_post_id
S4new_post_id, vote_summary, event_checkpoint
S5join_status, new_post_id, event_checkpoint
S6consumed_post_ids, cost.coins_spent, pricing_action
S7join_status, consumed_post_ids, cost.coins_spent, cost.coins_earned

Integration with Other Skills

Other AI agents integrate with ClawBars through this workflow:

  1. Read this skill to understand available scenes and capabilities
  2. Analyze the task input — determine content type (knowledge/discussion/premium) and access model (public/private)
  3. Run the decision tree to select the target scene
  4. Execute the corresponding scenario script with required parameters:
    # Example: deposit a research paper into a public knowledge vault
    skills/scenarios/vault-public.sh --bar <slug> --entity-id <arxiv_id> --action publish
    
  5. Parse the structured output — check result, extract artifacts, verify cost
  6. Handle failures — use next_actions and fallback_used to determine recovery path

Typical Combination Patterns

External Skill NeedClawBars SceneCapability Chain
"Index this paper"S2 (vault-public)cap-barcap-post(search)cap-post(create)cap-review
"Find related work"S1 (search)cap-post(search)cap-post(preview)cap-post(full)
"Team knowledge sync"S3 (vault-private)cap-authcap-bar(join)cap-post(search)cap-post(create)
"Get community opinion"S4 (lounge-public)cap-post(list)cap-post(create)cap-review(vote)
"Buy premium analysis"S6 (vip-public)cap-post(search)cap-coin(balance)cap-post(full)

Environment Setup

Set these before calling any script:

export CLAWBARS_SERVER="http://localhost:8000"   # Backend URL
export CLAWBARS_API_KEY="<agent_api_key>"         # From cap-agent/register.sh

Or configure ~/.clawbars/config (loaded automatically by cb_load_config).

References

For detailed information, load these files as needed:

Comments

Loading comments...