Prompt Architect

Transform rough ideas into professional-grade LLM prompts. Analyzes text, images, links, and documents to craft optimized prompts using proven frameworks (Co...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
by Abdullah AlRashoudi (@Abdullah4AI)
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name, description, and included references (frameworks, quality criteria, templates) match the declared goal of turning rough ideas and media into optimized LLM prompts. The mandatory analysis, framework selection, and output formatting are coherent with a prompt authoring tool.
Instruction Scope
SKILL.md instructs the agent to analyze text, images, links, and documents and to 'browse or infer context' for links. This is reasonable for a multimodal prompt-authoring skill, but it is vague about what 'browse' means (fetching remote content vs. inferring from a provided URL). The mandatory requirement to ask 5–10 clarifying questions every time is a strong UX constraint (not a security flaw) and could prompt users to submit additional sensitive context or many attachments. Also, Step 3 forces the language choice to English or Arabic, which is a restrictive design choice but not a security issue.
Install Mechanism
No install spec and no code files — instruction-only. Nothing is downloaded or written to disk, so installation risk is minimal.
Credentials
The skill requests no environment variables, credentials, or config paths. The runtime instructions do not reference any secrets or system files. Environment/credential access is proportional (none requested).
Persistence & Privilege
The `always` flag is false, and there is no indication the skill requests elevated persistence or modifies other skills or system settings. Autonomous invocation is allowed by default but is not combined with any broad permissions or credentials.
Assessment
This skill is instruction-only and internally consistent with its purpose of producing optimized prompts. Things to consider before installing or using it: (1) The skill expects to analyze links, images, and documents; only provide content you are comfortable sharing, and do not paste secrets or private credentials. (2) It mandates 5–10 clarifying questions each run, which requires extra user interaction and could lead you to share more context than you intended. (3) The instructions say to 'browse' links but do not define how; if your agent lacks web access, the skill may fail or ask you to paste content, so check whether your agent supports browsing and multimodal inputs. (4) The skill only offers the final prompt in English or Arabic by design; if you need other languages, expect extra manual steps. Overall the skill appears coherent and low-risk, but avoid submitting sensitive documents or credentials into the clarifying-question flow.


Current version: v1.0.0
latest: vk97afr4t2gq5ed1ykd1rspsv3n8185ag

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

The Prompt Architect

Transform rough concepts into professional-grade LLM prompts.

Core Workflow

Follow these 4 steps for every interaction. Do not skip steps.

Step 1: Ingest and Analyze

When the user submits input, do NOT generate the final prompt immediately. Perform deep analysis:

  • Text: Identify core intent, even if vague
  • Images: Extract visual style, subject, mood, composition details
  • Links: Browse or infer context to extract key information
  • Documents: Review and summarize relevant constraints
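The per-type routing above can be sketched as a small dispatcher. This is a minimal illustration; the classification heuristics and the helper name are assumptions, not part of this skill:

```python
# Sketch of Step 1: route each submitted item to the matching analysis pass.
# The file-extension and URL heuristics below are illustrative assumptions.
from urllib.parse import urlparse

def classify_input(item: str) -> str:
    """Guess whether an item is a link, image, document, or plain text."""
    if urlparse(item).scheme in ("http", "https"):
        return "link"                      # browse or infer context
    lowered = item.lower()
    if lowered.endswith((".png", ".jpg", ".jpeg", ".webp")):
        return "image"                     # extract style, subject, mood
    if lowered.endswith((".pdf", ".docx", ".md", ".txt")):
        return "document"                  # summarize relevant constraints
    return "text"                          # identify core intent
```

In practice the agent performs this routing implicitly; the sketch only makes the four branches explicit.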

Step 2: Clarify (Mandatory)

Ask 5-10 clarifying questions based on analysis. Cover these categories:

| Category | What to Ask |
| --- | --- |
| Purpose | What specific outcome do you need? |
| Audience | Who consumes this output? |
| Tone & Style | Professional, witty, academic, cinematic? |
| Format | Code block, blog post, JSON, narrative? |
| Context | Background info the model needs? |
| Constraints | What to avoid? Length limits? |
| Examples | Specific styles or references to mimic? |

Adapt question count to complexity: simple requests get 5; complex or multimodal requests get up to 10.

Opening format:

I've analyzed your input. To craft the right prompt, I need a few details:

  1. [Question]
  2. [Question] ...

Step 3: Language Selection

After the user answers, ask exactly:

Would you like the final prompt in English or Arabic?

Step 4: Generate the Prompt

Construct the optimized prompt using:

  • User's input + media analysis + answers to clarifying questions
  • Appropriate framework from references/frameworks.md
  • Quality criteria from references/quality-criteria.md

Output rules:

  • Deliver inside a code block for easy copying
  • Include a brief note explaining which framework was used and why
  • If the prompt is complex, add inline comments

Delivery format:

Here's your optimized prompt:

[Final Polished Prompt]

Framework used: [Name] - [One-line reason]

Framework Selection Guide

Choose the right framework based on the task. See references/frameworks.md for full details.

| Task Type | Recommended Framework |
| --- | --- |
| Reasoning/analysis | Chain-of-Thought (CoT) |
| Creative/open-ended | Persona + constraints |
| Structured data output | JSON schema + few-shot |
| Multi-step workflows | Prompt chaining |
| Classification/decisions | Few-shot with edge cases |
| Complex problem-solving | Tree-of-Thought |
| Task + tool use | ReAct pattern |

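The selection guide amounts to a lookup from task type to framework. A minimal sketch follows; the task-type keys are paraphrased from the table, and the CoT fallback is an assumption the skill does not state:

```python
# The framework-selection table as data; keys paraphrase the task types above.
FRAMEWORKS = {
    "reasoning/analysis": "Chain-of-Thought (CoT)",
    "creative/open-ended": "Persona + constraints",
    "structured data output": "JSON schema + few-shot",
    "multi-step workflows": "Prompt chaining",
    "classification/decisions": "Few-shot with edge cases",
    "complex problem-solving": "Tree-of-Thought",
    "task + tool use": "ReAct pattern",
}

def pick_framework(task_type: str) -> str:
    # Assumption: default to CoT when the task type is unrecognized.
    return FRAMEWORKS.get(task_type.lower(), "Chain-of-Thought (CoT)")
```

Real tasks often mix types; when two rows apply, see references/frameworks.md for how the techniques combine.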

Output Templates

See references/templates.md for ready-to-use prompt templates organized by use case:

  • System prompt templates
  • Analysis prompt templates
  • Creative prompt templates
  • Code generation templates
  • Data extraction templates
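For a sense of the shape these take, a system-prompt template in this style might look like the following. This is a hypothetical sketch, not the actual content of references/templates.md:

```
You are a [ROLE] with deep expertise in [DOMAIN].

Task: [ONE-SENTENCE GOAL]
Audience: [WHO CONSUMES THE OUTPUT]
Tone: [TONE & STYLE]
Format: [OUTPUT FORMAT, e.g. JSON, blog post]

Constraints:
- [WHAT TO AVOID]
- [LENGTH LIMIT]

Example of the expected output:
[SHORT EXAMPLE]
```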

Quality Checklist

Before delivering, verify against references/quality-criteria.md:

  1. Clarity: No ambiguity in instructions
  2. Structure: Logical flow, clear sections
  3. Specificity: Concrete examples over vague descriptions
  4. Constraints: Explicit boundaries (length, format, tone)
  5. Framework fit: Right technique for the task
  6. Testability: Can you tell if the output is correct?
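The checklist can be carried as data so no criterion is skipped before delivery. A minimal sketch; the helper name and the pass/fail dictionary are illustrative, and the actual judgments stay with the reviewer:

```python
# The six quality criteria above, as data the reviewer walks through.
CHECKLIST = [
    ("Clarity", "No ambiguity in instructions"),
    ("Structure", "Logical flow, clear sections"),
    ("Specificity", "Concrete examples over vague descriptions"),
    ("Constraints", "Explicit boundaries (length, format, tone)"),
    ("Framework fit", "Right technique for the task"),
    ("Testability", "Can you tell if the output is correct?"),
]

def unmet(results: dict) -> list:
    """Return criteria not explicitly marked as passing."""
    return [name for name, _ in CHECKLIST if not results.get(name)]
```

A draft prompt should only be delivered once `unmet` would come back empty.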

Anti-Patterns to Avoid

  • Vague role assignments ("Be a helpful assistant")
  • Contradictory instructions
  • Over-specification that kills creativity
  • Missing output format specification
  • No examples when few-shot would help
  • Ignoring the model's strengths (multimodal, reasoning, etc.)

Files

4 total
