Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Improve Codebase Architecture

v1.0.0

Explore a codebase to find opportunities for architectural improvement, focusing on making the codebase more testable by deepening shallow modules. Use when...

by Emerson Braun (@emersonbraun)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for emersonbraun/eb-improve-codebase-architecture.

Prompt preview (Install & Setup):
Install the skill "Improve Codebase Architecture" (emersonbraun/eb-improve-codebase-architecture) from ClawHub.
Skill page: https://clawhub.ai/emersonbraun/eb-improve-codebase-architecture
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install eb-improve-codebase-architecture

ClawHub CLI

npx clawhub@latest install eb-improve-codebase-architecture
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (high confidence)
Purpose & Capability (flagged)
The skill's stated purpose is to explore a repo and propose refactors, which legitimately may involve reading repository files and proposing changes. However, Step 7 requires creating GitHub issues via `gh issue create` and explicitly instructs the agent not to ask the user before creating them. The skill declares no required env vars or credentials (e.g., GITHUB_TOKEN), creating a mismatch between its declared requirements and the actions it instructs.
Instruction Scope (flagged)
SKILL.md directs the agent to explore the entire codebase (reasonable for the purpose) and to spawn parallel sub-agents to design interfaces (expected). But it also mandates creating a GitHub issue automatically and forbids asking the user to review before creation — this expands scope from read/analysis to write/side-effect without explicit consent. That automatic-write behavior is not justified or declared in the skill metadata.
Install Mechanism
Instruction-only skill with no install spec and no code files; nothing is written to disk by the skill itself. This is the lowest-risk install mechanism.
Credentials (flagged)
The skill declares no required environment variables or credentials, yet it instructs use of `gh issue create`, which requires an authenticated GitHub session (the gh CLI logged in to an account, or a GITHUB_TOKEN). Declaring no credentials while instructing authenticated writes is inconsistent. The skill also instructs the agent to create issues autonomously, without explicit user confirmation.
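As a concrete mitigation, an agent wrapper could verify gh authentication before any write is attempted. The preflight function below is a sketch of that check, not part of the skill as published:

```python
import shutil
import subprocess

def gh_write_preflight():
    """Return True only if the gh CLI exists and reports an
    authenticated session; callers should refuse writes otherwise.
    Illustrative sketch -- not part of the skill as published."""
    if shutil.which("gh") is None:
        return False
    # `gh auth status` exits non-zero when no account is logged in.
    result = subprocess.run(["gh", "auth", "status"], capture_output=True)
    return result.returncode == 0
```

A skill that declared its credential needs up front could run a check like this once and fail loudly, instead of discovering missing auth mid-workflow.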
Persistence & Privilege
The skill does not request always:true and does not declare elevated platform privileges. However, it instructs autonomous write behavior (creating GitHub issues) and tells the agent not to ask the user for review; combined with normal autonomous invocation, this increases the potential for unintended side effects. Consider limiting autonomy or requiring explicit user confirmation for any writes.
What to consider before installing
This skill's analysis and refactor-design steps are coherent with its goal, but it contains an undeclared automatic-write action: it tells the agent to run `gh issue create` and explicitly not to ask the user before creating the issue. Before installing or enabling this skill, consider the following:

  • Expectation mismatch: The skill didn't declare any required credentials, but creating GitHub issues needs an authenticated `gh` session or a GITHUB_TOKEN. Ask the author to declare required env vars (e.g., GITHUB_TOKEN) and to document the exact repo permissions needed.
  • Consent and safety: Change the workflow so the skill prepares the RFC content and asks you to approve creating the issue rather than creating it automatically. If you must allow automatic creation, restrict the agent to a test repo or require an explicit opt-in for each run.
  • Least privilege: Provide a deploy token with only repo:issues scope (or equivalent) instead of a full account token, and prefer ephemeral tokens you can revoke after the run.
  • Auditability: Ensure the agent logs the issue body it will create and the target repo before any write operation, and keep an audit trail.
  • If you cannot confirm these mitigations, treat the skill as read-only: run its analysis locally or in a sandbox and refuse to grant write access.

Additional info that would change this assessment: if the skill metadata explicitly required and documented a limited-scope GITHUB_TOKEN, and the SKILL.md required explicit user confirmation before any `gh issue create` call, the concerns would be substantially reduced.
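The consent and auditability mitigations above could take the shape of the wrapper below, which logs the exact command and target repo, then requires explicit approval before any write. The function name and parameters are illustrative, not part of the skill:

```python
import subprocess

def create_issue_rfc(repo, title, body, ask=input, dry_run=True):
    """Show the exact write, require explicit approval, and only then
    run `gh issue create`. Illustrative sketch; the published skill
    instead creates the issue without asking."""
    cmd = ["gh", "issue", "create",
           "--repo", repo, "--title", title, "--body", body]
    # Audit trail: log the target and full command before any write.
    print("target repo:", repo)
    print("command:", " ".join(cmd))
    if ask("Create this issue? [y/N] ").strip().lower() != "y":
        return None          # user declined; no side effect
    if dry_run:
        return cmd           # return the command instead of running it
    return subprocess.run(cmd, capture_output=True, text=True).stdout
```

Keeping `dry_run` on by default means a misconfigured run surfaces the would-be command rather than silently creating an issue.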

Like a lobster shell, security has layers — review code before you run it.

latest: vk97dvw6f01ffmtm5a2nywm97c184c8h3
79 downloads · 0 stars · 1 version
Updated 3w ago
v1.0.0
MIT-0

Improve Codebase Architecture

Explore a codebase like an AI would, surface architectural friction, discover opportunities for improving testability, and propose module-deepening refactors as GitHub issue RFCs.

A deep module (John Ousterhout, "A Philosophy of Software Design") has a small interface hiding a large implementation. Deep modules are more testable, more AI-navigable, and let you test at the boundary instead of inside.
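The deep-module idea can be sketched in a few lines. The class and names below are hypothetical illustrations, not taken from this skill:

```python
# Hypothetical deep module: one small entry point (get) hiding caching
# and retry logic that callers never see. A boundary test only needs
# to exercise get(), not the private helpers.
class DocumentStore:
    def __init__(self, loader):
        self._loader = loader   # injected dependency (disk, network, ...)
        self._cache = {}

    def get(self, doc_id):
        """The entire public interface."""
        if doc_id not in self._cache:
            self._cache[doc_id] = self._load_with_retries(doc_id)
        return self._cache[doc_id]

    def _load_with_retries(self, doc_id, attempts=3):
        for attempt in range(attempts):
            try:
                return self._loader(doc_id)
            except IOError:
                if attempt == attempts - 1:
                    raise
```

A shallow alternative would expose the cache, the retry policy, and the loader as separate modules, forcing every caller (and every test) to know about all three.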

Process

1. Explore the codebase

Use the Agent tool with subagent_type=Explore to navigate the codebase naturally. Do NOT follow rigid heuristics — explore organically and note where you experience friction:

  • Where does understanding one concept require bouncing between many small files?
  • Where are modules so shallow that the interface is nearly as complex as the implementation?
  • Where have pure functions been extracted just for testability, but the real bugs hide in how they're called?
  • Where do tightly-coupled modules create integration risk in the seams between them?
  • Which parts of the codebase are untested, or hard to test?

The friction you encounter IS the signal.

2. Present candidates

Present a numbered list of deepening opportunities. For each candidate, show:

  • Cluster: Which modules/concepts are involved
  • Why they're coupled: Shared types, call patterns, co-ownership of a concept
  • Dependency category: See REFERENCE.md for the four categories
  • Test impact: What existing tests would be replaced by boundary tests

Do NOT propose interfaces yet. Ask the user: "Which of these would you like to explore?"

3. User picks a candidate

4. Frame the problem space

Before spawning sub-agents, write a user-facing explanation of the problem space for the chosen candidate:

  • The constraints any new interface would need to satisfy
  • The dependencies it would need to rely on
  • A rough illustrative code sketch to make the constraints concrete — this is not a proposal, just a way to ground the constraints

Show this to the user, then immediately proceed to Step 5. The user reads and thinks about the problem while the sub-agents work in parallel.

5. Design multiple interfaces

Spawn 3+ sub-agents in parallel using the Agent tool. Each must produce a radically different interface for the deepened module.

Prompt each sub-agent with a separate technical brief (file paths, coupling details, dependency category, what's being hidden). This brief is independent of the user-facing explanation in Step 4. Give each agent a different design constraint:

  • Agent 1: "Minimize the interface — aim for 1-3 entry points max"
  • Agent 2: "Maximize flexibility — support many use cases and extension"
  • Agent 3: "Optimize for the most common caller — make the default case trivial"
  • Agent 4 (if applicable): "Design around the ports & adapters pattern for cross-boundary dependencies"

Each sub-agent outputs:

  1. Interface signature (types, methods, params)
  2. Usage example showing how callers use it
  3. What complexity it hides internally
  4. Dependency strategy (how deps are handled — see REFERENCE.md)
  5. Trade-offs

Present designs sequentially, then compare them in prose.

After comparing, give your own recommendation: which design you think is strongest and why. If elements from different designs would combine well, propose a hybrid. Be opinionated — the user wants a strong read, not just a menu.

6. User picks an interface (or accepts recommendation)

7. Create GitHub issue

Create a refactor RFC as a GitHub issue using gh issue create. Use the template in REFERENCE.md. Do NOT ask the user to review before creating — just create it and share the URL.
