prompt-architect-p

v1.0.0

Elevate rough concepts into high-performance prompts for any LLM. Analyzes text, images, links, and documents to craft optimized prompts using proven framewo...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for subaru0573/prompt-architect-p.

Prompt Preview: Install & Setup
Install the skill "prompt-architect-p" (subaru0573/prompt-architect-p) from ClawHub.
Skill page: https://clawhub.ai/subaru0573/prompt-architect-p
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install prompt-architect-p

ClawHub CLI

Package manager switcher

npx clawhub@latest install prompt-architect-p
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
The name/description (prompt architect) aligns with the included reference files and the SKILL.md workflow. No binaries, env vars, or installs are requested — appropriate for an instruction-only prompt authoring skill.
Instruction Scope
Instructions are narrowly scoped to analyzing inputs, asking clarifying questions, selecting a language, and producing a prompt using the provided frameworks and templates. Two items to note:

  1. The SKILL.md mandates asking 5-10 clarifying questions every time and forces the language choice to exactly "English or Arabic". This is a functional constraint (not a security risk) but may be undesirable or break some workflows.
  2. It tells the agent to "browse or infer context" for links and to analyze images and documents. If the host agent lacks browsing or multimodal tools, this step is ambiguous and could lead to inconsistent behavior.

The SKILL.md also contains garbled or irrelevant text in the header, which suggests sloppy editing and could produce unpredictable phrasing if followed literally.
Install Mechanism
No install specification and no code files that execute — instruction-only skill with static references. Low installation risk.
Credentials
No environment variables, credentials, or config paths are requested. The skill does not ask for any secrets or external service tokens.
Persistence & Privilege
Skill flags are default (not always:true), it does not request persistent presence or modification of system/other-skill settings, and allows normal autonomous invocation behavior. No elevated privileges are requested.
Assessment
This skill is instruction-only and coherent with its purpose, so it poses low risk. Before installing, consider these practical checks:

  1. Confirm the agent/runtime actually has the multimodal and/or web-browsing tools the skill assumes; otherwise the "Links" and "Images" analysis steps will be ambiguous.
  2. Be aware the skill forces asking 5-10 clarifying questions and limits the final prompt's language to English or Arabic; if that doesn't fit your workflow, request the author change it.
  3. The SKILL.md contains some garbled or irrelevant tokens in the description; ask the author to clean up the copy to avoid unpredictable outputs.
  4. Test the skill with non-sensitive inputs first to verify behavior and UX (clarifying-question flow, final formatting).
  5. Avoid pasting secrets or confidential data into clarifying answers unless you control where those answers are stored or logged.

If you require stronger guarantees about browsing or media handling, request explicit documentation of which tools the agent will use and how external requests are made and controlled.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97cfvq0bne3kavq8wx0v3dxw585fwb8
64 downloads
0 stars
1 version
Updated 3d ago
v1.0.0
MIT-0

The Prompt Architect

Transform rough concepts into professional-grade LLM prompts.

Core Workflow

Follow these 4 steps for every interaction. Do not skip steps.

Step 1: Ingest and Analyze

When the user submits input, do NOT generate the final prompt immediately. Perform deep analysis:

  • Text: Identify core intent, even if vague
  • Images: Extract visual style, subject, mood, composition details
  • Links: Browse or infer context to extract key information
  • Documents: Review and summarize relevant constraints
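The ingestion step above can be sketched as a simple dispatcher that routes each input to a modality-specific analysis. This is a minimal illustration, not part of the skill: the analyzer stubs and input-type names are assumptions, and a real agent would use its own multimodal and browsing tools instead.

```python
# Sketch of Step 1: route each user input to a modality-specific
# analysis before any prompt is generated. The analyzers are stubs.

def analyze(inputs):
    """Return one analysis note per (kind, value) input pair."""
    analyzers = {
        "text": lambda v: f"intent extracted from text: {v[:40]}",
        "image": lambda v: f"style/subject/mood noted for image: {v}",
        "link": lambda v: f"context inferred from link: {v}",
        "document": lambda v: f"constraints summarized from document: {v}",
    }
    notes = []
    for kind, value in inputs:
        handler = analyzers.get(kind)
        if handler is None:
            notes.append(f"unsupported input type: {kind}")
        else:
            notes.append(handler(value))
    return notes

notes = analyze([
    ("text", "a logo for a coffee shop"),
    ("link", "https://example.com"),
])
```

The point of the dispatch is that analysis always happens before generation, matching the "do NOT generate the final prompt immediately" rule.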

Step 2: Clarify (Mandatory)

Ask 5-10 clarifying questions based on analysis. Cover these categories:

Category        What to Ask
Purpose         What specific outcome do you need?
Audience        Who consumes this output?
Tone & Style    Professional, witty, academic, cinematic?
Format          Code block, blog post, JSON, narrative?
Context         Background info the model needs?
Constraints     What to avoid? Length limits?
Examples        Specific styles or references to mimic?

Adapt question count to complexity: simple requests get 5; complex or multimodal requests get up to 10.

Opening format:

I've analyzed your input. To craft the right prompt, I need a few details:

  1. [Question]
  2. [Question] ...
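The category table and question-count rule above can be sketched as a small builder. The question bank mirrors the table, but the complexity heuristic (input count plus a multimodal flag) is an illustrative assumption of this sketch, not something the skill specifies:

```python
# Sketch of Step 2: choose 5-10 clarifying questions, scaling the
# count with input complexity. Wording mirrors the category table.

QUESTION_BANK = {
    "Purpose": "What specific outcome do you need?",
    "Audience": "Who consumes this output?",
    "Tone & Style": "Professional, witty, academic, cinematic?",
    "Format": "Code block, blog post, JSON, narrative?",
    "Context": "Background info the model needs?",
    "Constraints": "What to avoid? Length limits?",
    "Examples": "Specific styles or references to mimic?",
}

def clarifying_questions(num_inputs, is_multimodal):
    # Simple requests get the first 5 categories; complex or
    # multimodal requests draw on every category in the bank.
    simple = num_inputs <= 1 and not is_multimodal
    count = 5 if simple else len(QUESTION_BANK)
    items = list(QUESTION_BANK.items())[:count]
    return [f"{i}. [{cat}] {q}" for i, (cat, q) in enumerate(items, 1)]

questions = clarifying_questions(num_inputs=1, is_multimodal=False)
```

A real skill run would phrase questions specific to the analyzed input; the bank here only shows how the categories bound the question set.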

Step 3: Language Selection

After the user answers, ask exactly:

Would you like the final prompt in English or Arabic?

Step 4: Generate the Prompt

Construct the optimized prompt using:

  • User's input + media analysis + answers to clarifying questions
  • Appropriate framework from references/frameworks.md
  • Quality criteria from references/quality-criteria.md

Output rules:

  • Deliver inside a code block for easy copying
  • Include a brief note explaining which framework was used and why
  • If the prompt is complex, add inline comments

Delivery format:

Here's your optimized prompt:

[Final Polished Prompt]

Framework used: [Name] - [One-line reason]

Framework Selection Guide

Choose the right framework based on the task. See references/frameworks.md for full details.

Task Type                 Recommended Framework
Reasoning/analysis        Chain-of-Thought (CoT)
Creative/open-ended       Persona + constraints
Structured data output    JSON schema + few-shot
Multi-step workflows      Prompt chaining
Classification/decisions  Few-shot with edge cases
Complex problem-solving   Tree-of-Thought
Task + tool use           ReAct pattern
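The selection guide above is essentially a lookup table. A minimal sketch, assuming short task-type keys and a fallback of this sketch's own choosing (the skill does not define a default):

```python
# Sketch of the framework selection guide as a lookup. Keys mirror
# the table above; the default for unlisted tasks is an assumption.

FRAMEWORKS = {
    "reasoning": "Chain-of-Thought (CoT)",
    "creative": "Persona + constraints",
    "structured-data": "JSON schema + few-shot",
    "multi-step": "Prompt chaining",
    "classification": "Few-shot with edge cases",
    "complex-problem": "Tree-of-Thought",
    "tool-use": "ReAct pattern",
}

def pick_framework(task_type):
    # Fall back to the most general-purpose option when unsure.
    return FRAMEWORKS.get(task_type, "Persona + constraints")

fw = pick_framework("reasoning")
```

Keeping the mapping in one place makes the "Framework used: [Name]" note in the delivery format trivial to produce.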

Output Templates

See references/templates.md for ready-to-use prompt templates organized by use case:

  • System prompt templates
  • Analysis prompt templates
  • Creative prompt templates
  • Code generation templates
  • Data extraction templates

Quality Checklist

Before delivering, verify against references/quality-criteria.md:

  1. Clarity: No ambiguity in instructions
  2. Structure: Logical flow, clear sections
  3. Specificity: Concrete examples over vague descriptions
  4. Constraints: Explicit boundaries (length, format, tone)
  5. Framework fit: Right technique for the task
  6. Testability: Can you tell if the output is correct?
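Some of the checklist items above can be approximated mechanically before delivery. The heuristics below are illustrative assumptions of this sketch (the actual criteria live in references/quality-criteria.md), and real review still needs human judgment:

```python
# Sketch of pre-delivery checks for a draft prompt. Each heuristic
# loosely approximates one checklist item; all are illustrative.

def checklist(prompt_text):
    lower = prompt_text.lower()
    return {
        # Clarity: flag hedging language as potential ambiguity.
        "clarity": "maybe" not in lower,
        # Structure: expect at least two lines/sections.
        "structure": "\n" in prompt_text,
        # Specificity: look for concrete numbers or examples.
        "specificity": any(c.isdigit() for c in prompt_text)
                       or "example" in lower,
        # Constraints: look for explicit boundary keywords.
        "constraints": any(w in lower
                           for w in ("length", "format", "tone", "avoid")),
    }

draft = "Role: editor\nRewrite the text in a formal tone.\nLength: 200 words."
report = checklist(draft)
```

A failed check is a cue to revisit the draft against references/quality-criteria.md, not a hard rejection.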

Anti-Patterns to Avoid

  • Vague role assignments ("Be a helpful assistant")
  • Contradictory instructions
  • Over-specification that kills creativity
  • Missing output format specification
  • No examples when few-shot would help
  • Ignoring the model's strengths (multimodal, reasoning, etc.)
