Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Gstack Pro

v1.0.0

Transform your AI assistant into a structured virtual software engineering team with 10 specialist roles — inspired by Garry Tan's GStack (YC CEO, 16K GitHub...

2 stars · 265 downloads · 0 current · 0 all-time
by mingyuan (zmy1006-sudo)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for zmy1006-sudo/gstack-pro.

Prompt Preview: Install & Setup
Install the skill "Gstack Pro" (zmy1006-sudo/gstack-pro) from ClawHub.
Skill page: https://clawhub.ai/zmy1006-sudo/gstack-pro
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install gstack-pro

ClawHub CLI


npx clawhub@latest install gstack-pro
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description (a multi-role AI engineering team) aligns with the instructions: architecture, code review, QA, shipping and retros. The requested actions (git, gh, npm, browser-driven QA, spawning subagents) are appropriate for that purpose. However, the skill does not declare that it requires access to repository credentials, GitHub CLI auth, or browser login credentials — which are logically needed for the described shipping/PR and browser QA flows.
Instruction Scope
SKILL.md instructs the agent to run repository and release commands (git fetch/rebase/push, npm version, gh pr create), run git logs, and drive a browser tool (open, click, screenshot, read console). Those operations are in-scope for a release/QA skill, but they give the agent the ability to modify code, create PRs, and interact with arbitrary web pages — so the instructions grant significant operational power and should be paired with explicit gating/approval steps.
Install Mechanism
Instruction-only skill (no install spec, no code). This is lowest install risk: nothing is downloaded or written by the skill package itself.
Credentials
The skill declares no required environment variables or credentials but expects to use gh/git push and browser automation that will require authentication and possibly secrets (GitHub tokens, SSH keys, app credentials, site logins). The absence of declared creds/primaryEnv is a mismatch; consumers need to be aware the agent will implicitly require repository and browser credentials to perform /ship and /qa actions.
Persistence & Privilege
`always` is false and autonomous invocation is allowed (platform default). The skill's instructions include destructive operations (git push --force-with-lease, git revert, npm version) and automated shipping — these are coherent with the stated function but increase risk if the agent or its subagents are granted broad repo/CI credentials or if runs are allowed without manual approval.
What to consider before installing
This skill appears to implement a realistic, powerful release/QA process, but it implicitly requires credentials and runtime privileges that it does not declare. Before installing or enabling it:

1. Expect it to need Git/GitHub authentication (SSH key or GH token), browser login credentials, and access to the workspace/repo; do not provide production-wide tokens.
2. Prefer least-privilege tokens (scoped GitHub tokens, deploy-only keys) and test in a sandbox repository or staging environment first.
3. Require manual approval or a confirm step before running /ship or any operation that pushes, rebases, or reverts commits.
4. Audit subagent permissions and the platform's browser tooling: automated browsers can access arbitrary URLs and capture data.
5. If you want to use the QA features, ensure credentials are stored securely and consider using temporary or read-only credentials for testing.

These steps reduce the chance the agent will inadvertently push or exfiltrate sensitive data.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97ey1qykce3rqxxyha8aqjfsh83xfnc
265 downloads · 2 stars · 1 version
Updated 4w ago · v1.0.0 · MIT-0

GStack Pro — 10-Role AI Engineering Team

Built on the philosophy of Garry Tan's GStack (YC CEO) · 16K GitHub Stars · MIT License
Adapted for OpenClaw subagent + session architecture


What It Does

GStack Pro gives your AI 10 specialist roles — each with a clear mandate, a structured output format, and a measurable quality bar.

Instead of one generic AI doing everything badly, you get a team:

| # | Role | Icon | Subagent | Best For |
|---|------|------|----------|----------|
| 1 | CEO / Product Thinker | 🏛️ | requirer | Rethink the problem before building |
| 2 | Architect / Tech Lead | 🏗️ | architect | Lock in data flow, failure modes, tests |
| 3 | Designer Review | 🎨 | designer | 80-item design audit, AI slop detection |
| 4 | Paranoid Code Review | 🔍 | tester | N+1, race conditions, trust boundaries |
| 5 | Browser QA | 🌐 | browser tool | AI with eyes — login, click, screenshot, verify |
| 6 | Automated QA + Fix | 🧪 | tester + coder | Find → fix → re-verify with Health Score |
| 7 | QA Reporter | 📊 | tester | Report-only, clean handoff to team |
| 8 | One-Command Ship | 🚀 | operator | sync → test → push → PR |
| 9 | Engineering Retro | 🔄 | progress | Commit analysis, praise, growth areas |
| 10 | Release Docs | 📝 | writer | Sync docs to match what shipped |

The Development Cycle

User Request
     ↓
① CEO Review (/plan-ceo)
   → Is this worth building? What's the 10-star product?
     ↓
② Architecture Lock (/plan-eng)
   → Data flow, state machine, failure modes, test matrix
     ↓
③ Design Review (/plan-design)
   → 80-item audit, design quality grades, AI slop detection
     ↓
④ Paranoid Code Review (/review)
   → N+1, race conditions, trust boundary violations
     ↓
⑤ Automated Browser QA (/qa)
   → AI drives browser, finds bugs, fixes them, re-verifies
   → Health Score 0-100 determines ship-readiness
     ↓
⑥ One-Command Ship (/ship)
   → sync main → run tests → push → open PR
     ↓
⑦ Engineering Retro (/retro)
   → Commit analysis, team performance, improvement plan
     ↓
⑧ Release Docs (/document)
   → Update README/ARCHITECTURE to match what shipped

How to Activate a Role

Method 1: Direct Command (e.g., in conversation)

/plan-ceo: Design a new feature for AICFO: auto-generate employee payslip PDFs

/review: Review the deepfmt Sprint 3 code changes

/qa: Run the standard QA tests against https://xxx.space.minimaxi.com

Method 2: Subagent (for background/parallel work)

sessions_spawn({
  agentId: "tester",  // QA + Review
  task: "Read skills/gstack-pro/roles/review.md then review the code at /workspace/projects/aicfo/aicfo-mvp/src/api/"
})

Health Score System

After every /qa session, output a structured score:

{
  "healthScore": 85,
  "status": "🟡 Good",
  "breakdown": {
    "functional": { "passed": 8, "total": 10, "score": 24 },
    "edgeCases": { "covered": 4, "total": 5, "score": 20 },
    "consoleErrors": { "passed": true, "score": 25 },
    "designRegressions": { "passed": true, "score": 16 }
  },
  "shipRecommendation": "🟡 Fix 2 minor issues before ship"
}
| Score | Status | Action |
|-------|--------|--------|
| 90-100 | 🟢 Excellent | Ready to ship immediately |
| 70-89 | 🟡 Good | 2-3 minor issues, fix before ship |
| 50-69 | 🟠 Needs Work | Significant bugs, fix before next sprint |
| <50 | 🔴 Do Not Ship | Core functionality broken, redo required |
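The breakdown in the JSON example sums to the overall score (24 + 20 + 25 + 16 = 85), and the table above maps that total to a status band. A minimal sketch of that aggregation — `healthScore` is a hypothetical helper name, and the per-category weights are inferred from the sample, not from references/qa.md:

```javascript
// Hypothetical Health Score aggregator mirroring the JSON example above:
// sum the per-category scores, then map the total to a status band.
function healthScore(breakdown) {
  const total = Object.values(breakdown).reduce((sum, part) => sum + part.score, 0);
  let status;
  if (total >= 90) status = "🟢 Excellent";
  else if (total >= 70) status = "🟡 Good";
  else if (total >= 50) status = "🟠 Needs Work";
  else status = "🔴 Do Not Ship";
  return { healthScore: total, status };
}

healthScore({
  functional: { passed: 8, total: 10, score: 24 },
  edgeCases: { covered: 4, total: 5, score: 20 },
  consoleErrors: { passed: true, score: 25 },
  designRegressions: { passed: true, score: 16 },
});
// → { healthScore: 85, status: "🟡 Good" }
```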

Quality Bars

Code Must Pass

  • ✅ N+1 queries eliminated
  • ✅ All external calls have timeouts
  • ✅ Retries with exponential backoff
  • ✅ Database transactions properly bounded
  • ✅ Input validation on all untrusted data
  • ✅ No trust boundary violations
  • ✅ Structured logging (JSON, with trace IDs)
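The timeout and backoff bars above combine naturally into one wrapper. A minimal sketch, assuming nothing about the skill's actual implementation — `retryWithBackoff` and its parameters are illustrative names:

```javascript
// Hypothetical helper: bound each attempt with a timeout, and retry
// failed attempts with exponential backoff, as the quality bars require.
async function retryWithBackoff(fn, { attempts = 3, baseMs = 100, timeoutMs = 1000 } = {}) {
  for (let i = 0; i < attempts; i++) {
    try {
      // Race the call against a timeout so no external call can hang forever.
      return await Promise.race([
        fn(),
        new Promise((_, reject) =>
          setTimeout(() => reject(new Error("timeout")), timeoutMs)),
      ]);
    } catch (err) {
      if (i === attempts - 1) throw err;                        // out of retries
      await new Promise(r => setTimeout(r, baseMs * 2 ** i));   // 100ms, 200ms, 400ms…
    }
  }
}
```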

Design Must Pass

  • ✅ Consistent visual hierarchy
  • ✅ No AI slop patterns (copy-paste generic cards, overuse of gradients)
  • ✅ Responsive at 375px / 768px / 1440px
  • ✅ Accessible (color contrast, focus states)
  • ✅ Meaningful empty states

Anti-Patterns Detected

| Pattern | Why It Fails | Detection |
|---------|--------------|-----------|
| "Looks good!" | AI self-evaluation bias | Evaluator never reads generator code |
| Circular dependency | Unmaintainable architecture | Dependency graph analysis |
| AI slop | Generic, low-quality design | 80-item designer audit |
| Magic numbers | Hard to maintain | no-magic-numbers lint rule |
| Forgotten edge cases | Silent production failures | Mandatory test matrix |
| No rollback plan | Can't safely deploy | /ship requires rollback plan |
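The magic-numbers detection maps onto ESLint's built-in `no-magic-numbers` rule. One possible configuration — the ignore list here is an illustrative choice, not necessarily what the skill ships:

```json
{
  "rules": {
    "no-magic-numbers": ["warn", { "ignore": [0, 1, -1], "ignoreArrayIndexes": true }]
  }
}
```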

OpenClaw Subagent Mapping

| Role | Subagent ID | Type |
|------|-------------|------|
| CEO Product | requirer | demand analysis |
| Architect | architect | tech design |
| Designer | designer | UI/UX review |
| Code Review | tester | quality assurance |
| Browser QA | browser tool | automated testing |
| QA + Fix | tester + coder | test + implement |
| QA Reporter | tester | reporting |
| Ship | operator | release |
| Retro | progress | analysis |
| Docs | writer | documentation |

Key Insight: Generator vs Evaluator

GStack Pro separates creation from judgment.

Generator Agent  ──→  builds code  ──→  Evaluator Agent
  (creates)         (artifact)           (judges from SPEC + URL only)
                                              ↑
                                       Never reads generator's code

This eliminates cognitive commitment bias — the AI can't judge what it already committed to building.
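The separation can be illustrated with a tiny simulation — a hypothetical sketch, not the skill's real agent API: the evaluator's signature admits only the spec and what was observed at the running URL, so the generator's code can never influence the verdict.

```javascript
// Illustrative generator/evaluator split. The evaluator receives only the
// SPEC and observations from the live app (e.g. what browser QA saw);
// by construction it cannot read the generator's code.
function evaluate(spec, observations) {
  const results = spec.requirements.map(req => ({
    requirement: req,
    passed: observations.includes(req),  // stand-in for a real browser check
  }));
  const passed = results.filter(r => r.passed).length;
  return { passed, total: results.length, shipReady: passed === results.length };
}

const spec = { requirements: ["login works", "payslip PDF downloads"] };
const observed = ["login works"];        // what browser QA actually verified
evaluate(spec, observed);
// → { passed: 1, total: 2, shipReady: false }
```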

Inspired by: Anthropic Engineering, "Harness Design for Long-Running Application Development" (2026)


Files

| File | Purpose |
|------|---------|
| SKILL.md | This file — overview and usage |
| references/plan-ceo.md | CEO product thinking SOP |
| references/plan-eng.md | Architecture review SOP |
| references/review.md | Paranoid code review SOP |
| references/qa.md | Automated QA SOP + Health Score |
| references/ship.md | One-command ship SOP |
| references/retro.md | Engineering retro SOP |

Inspired by Garry Tan's GStack (https://gstacks.org) · MIT License For OpenClaw · Compatible with Claude Code GStack workflows
