Install
```bash
openclaw skills install pullstar-1on1
```

Generate a ready-to-use 1-on-1 brief for any engineer on your team, from their GitHub activity, in seconds. It spots patterns like high output but low review participation, large PR sizes that suggest batching, and cross-repo collaboration signals.
PullStar fetches GitHub activity for one engineer (PRs authored, reviews given), runs a deterministic local scoring engine across five dimensions, and prepares an LLM input payload. An external agent (your configured AI provider) generates the final structured brief.
Data Flow Summary:
| Step | Location | Data Sent |
|---|---|---|
| Ingest | Local | GitHub API only |
| Score | Local | No external calls |
| Prepare | Local | No external calls |
| Agent inference | External | LLM input payload sent to your AI provider |
| Finalize | Local | No external calls |
⚠️ Important: The final brief generation step sends data to your configured AI provider. All other steps run locally on your machine.
```bash
pip install PyGithub python-dotenv
```

This skill requires a GitHub token to read repository activity. You have two options:
Option A: Fine-grained PAT (Recommended)
Option B: Classic PAT (Broader access)
Requires the `repo` scope (full read access to private repos). Set `GITHUB_ORG` to limit scope to one organization.

| Practice | Why |
|---|---|
| Use a dedicated token | Don't reuse personal high-privilege tokens |
| Set `GITHUB_ORG` | Narrows search to one org instead of all accessible repos |
| Store in `.env` or `~/.pullstar/credentials` | Never commit tokens to git |
| Revoke when done | Limit exposure window |
| Use fine-grained PAT when possible | Least-privilege access |
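One way to follow the "store in `~/.pullstar/credentials`" practice is to create that file with owner-only permissions. This is a sketch, not part of the skill itself; the `KEY=VALUE` file format is an assumption:

```python
from pathlib import Path

# Create the central credentials directory and file with
# owner-only permissions (0700 dir, 0600 file).
cred_dir = Path.home() / ".pullstar"
cred_dir.mkdir(mode=0o700, exist_ok=True)

cred_file = cred_dir / "credentials"
cred_file.write_text("GITHUB_TOKEN=github_pat_xxxxxxxx\n")  # placeholder token
cred_file.chmod(0o600)  # readable/writable by the owner only
```

Remember to revoke the token (and delete this file) when you are done.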
Default Mode (no `--pr_insights`):

PR Insights Mode (`--pr_insights`):
Recommendation: Review `.pullstar/llm_input_{login}.json` before running agent inference if you have privacy concerns.
`.env` contains secrets only. Never commit it.
| Variable | Required | Description |
|---|---|---|
| `GITHUB_TOKEN` | Yes | GitHub PAT (fine-grained or classic) |
| `GITHUB_ORG` | No | Scope ingestion to one org. Omit to search all accessible repos. |
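A project-local `.env` using these variables might look like this (all values are placeholders):

```shell
# .env - gitignored; contains secrets only
GITHUB_TOKEN=github_pat_xxxxxxxx
# Optional: narrow ingestion to one organization
GITHUB_ORG=acme-inc
```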
Secrets are resolved using layered lookup:
1. Environment variables (including values loaded from `.env`)
2. `~/.pullstar/credentials` (central credentials file)
3. `.env` (project-local, final fallback)

```bash
# 1. Ingest GitHub activity
python scripts/ingest.py --login jsmith

# 2. Score the profile (local, deterministic)
python scripts/score.py --login jsmith

# 3. Prepare the LLM input artifact (local, no AI call)
python scripts/agent_prepare_1on1.py --login jsmith

# 4. External agent reads .pullstar/llm_input_jsmith.json
#    and writes .pullstar/llm_output_jsmith.json with schema:
#    { "version": "1.0", "engineer_login": "jsmith", "brief": "## Quick Summary\n..." }

# 5. Finalize — merge agent output into final artifact
python scripts/agent_finalize_1on1.py --login jsmith
```
| File | Written by | Contains | Sent to AI? |
|---|---|---|---|
| `ingest_{login}.json` | `ingest.py` | Raw GitHub activity, PR details | ❌ No |
| `score_{login}.json` | `score.py` | Dimension scores, signals, flags | ❌ No |
| `llm_input_{login}.json` | `agent_prepare_1on1.py` | LLM prompt payload | ✅ Yes |
| `llm_output_{login}.json` | External agent | Generated brief | ❌ No |
| `output_{login}.json` | `agent_finalize_1on1.py` | Final brief + profile | ❌ No |
All artifacts are written to `.pullstar/` (gitignored).
```bash
python scripts/ingest.py --login jsmith --pr_insights
```
What it does:
Bounds (safety limits):
⚠️ Security Warning:
- Review `.pullstar/llm_input_{login}.json` before agent inference

When to use: Only when you need deeper collaboration insights and have reviewed the privacy implications.
File: `.pullstar/llm_input_{login}.json`

Contains:

- `system`: System prompt with instructions
- `user`: User message with engineer data
- `metadata`: Version, timestamps, scores

File: `.pullstar/llm_output_{login}.json`
Required schema:

```json
{
  "version": "1.0",
  "engineer_login": "jsmith",
  "brief": "## Quick Summary\n..."
}
```
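If you are wiring up your own external agent, a minimal check of its output against the schema above might look like this (a sketch; the skill's own finalize step may validate differently):

```python
import json

def validate_llm_output(payload: dict) -> list[str]:
    """Return a list of schema problems; an empty list means valid."""
    problems = []
    if payload.get("version") != "1.0":
        problems.append("version must be '1.0'")
    login = payload.get("engineer_login")
    if not isinstance(login, str) or not login:
        problems.append("engineer_login must be a non-empty string")
    brief = payload.get("brief")
    if not isinstance(brief, str) or not brief.strip():
        problems.append("brief must be a non-empty markdown string")
    return problems

# Example document matching the required schema
doc = json.loads(
    '{"version": "1.0", "engineer_login": "jsmith", "brief": "## Quick Summary\\n..."}'
)
```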
Requirements:
- `brief` must be a non-empty markdown string

"GitHub rejected the PR search query (422)"
"GitHub rate limit hit"
Slow ingestion on high-activity users
Use `--max-results 20` to cap search results.

License: MIT. See the source repository for full license text.