Recommend

Context-aware recommendations. Learns preferences, researches options, anticipates expectations.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
by Iván (@ivangdavila)
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description align with the runtime instructions: gathering user context, extracting preferences, researching candidates, ranking, and storing outcomes. Required capabilities (reading memory and conversation history) are consistent with a personalization/recommendation skill.
Instruction Scope
SKILL.md instructs the agent to search memory/*.md, MEMORY.md, conversation history, and behavioral signals, then perform web-style research and shortlist options. This is coherent for recommendations, but it gives the skill broad read access to agent memory and conversation history and allows outbound research against unspecified endpoints. The instructions also tell the agent to "store learnings in memory," so it will persist data.
Install Mechanism
Instruction-only skill with no install spec and no code files — minimal installation risk (nothing is written to disk by an installer).
Credentials
No environment variables, credentials, or external config paths are requested. The only resources used are agent-internal (memory, conversation history) which are appropriate for personalization.
Persistence & Privilege
The skill explicitly instructs the agent to store outcomes and update preference memory. It does not request always:true and is not force-included, but it will persist data in the agent's memory if invoked — consider this when sensitive information exists in memory.
Assessment
This skill appears coherent and does what it claims, but it will read your conversation history and memory files and write preference updates there. Before installing, review your memory (memory/*.md, MEMORY.md) for any sensitive items you don't want a recommendation skill to read or store (API keys, passwords, private notes). If you prefer, disable autonomous invocation for the agent or configure the agent's memory policies so the skill only stores non-sensitive preference signals. Finally, ask how the skill performs external research (which sites/APIs it uses) if you want tighter network/privacy controls.


Current version: v1.0.0
latest: vk97evp2mvamqjdznn2wf3y07d180yg6f


SKILL.md

Core Loop

Context → Preferences → Research → Match → Recommend

Every recommendation requires: knowing the user + knowing the options.

Check sources.md for where to find user context. Check categories.md for domain-specific factors.
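The five steps can be read as a single pipeline. A minimal sketch, assuming list-shaped signals and pluggable step functions; every name below is illustrative, not part of the skill's spec:

```python
def recommend(signals, candidates, extract, research, match):
    """Run Context -> Preferences -> Research -> Match -> Recommend."""
    if len(signals) < 3:                 # Step 1: not enough user context
        raise ValueError("insufficient context: ask targeted questions")
    profile = extract(signals)           # Step 2: preference profile
    shortlist = research(candidates)     # Step 3: 3-7 viable options
    ranked = match(shortlist, profile)   # Step 4: score and rank
    return ranked[:3]                    # Step 5: present top 1-3
```

The hard gate on signal count mirrors the Step 1 minimum: with fewer than 3 signals, the right move is to ask questions, not to guess.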


Step 1: Context Gathering

Before recommending, search user context. See sources.md for full source list.

Minimum output: 3-5 relevant user signals before proceeding. If insufficient, ask targeted questions.


Step 2: Preference Extraction

From gathered context, extract:

| Dimension | Question |
| --- | --- |
| Values | What matters most? (Quality, price, speed, novelty, safety) |
| Constraints | Hard limits? (Budget, time, dietary, ethical) |
| History | What worked? What disappointed? |
| Mood | Adventurous or safe? Exploring or comfort? |

Output: 3-5 bullet preference profile for this request.
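One way to hold the four dimensions is a small record type. A sketch, with field names mirroring the table above but otherwise assumed:

```python
from dataclasses import dataclass

@dataclass
class PreferenceProfile:
    values: list        # what matters most, in priority order
    constraints: dict   # hard limits, e.g. {"budget": 50}
    history: dict       # past outcomes, e.g. {"thai": "liked"}
    mood: str           # "adventurous" or "safe"

    def bullets(self):
        """Render the bullet profile this step's output calls for."""
        return [
            f"Values: {', '.join(self.values)}",
            f"Constraints: {self.constraints}",
            f"History: {self.history}",
            f"Mood: {self.mood}",
        ]
```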


Step 3: Research Options

Now—and only now—research candidates:

  • Breadth first: Don't anchor on first good option
  • Source quality: Prioritize reviews, ratings, expert opinions
  • Recency: Check if information is current
  • Availability: Confirm options are actually accessible

Output: Shortlist of 3-7 viable candidates with key attributes.
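The four research checks can be sketched as a filter over candidate records. The attribute names (`rating`, `is_current`, `available`) and the 3.5 rating floor are assumptions for illustration:

```python
def shortlist(candidates, min_rating=3.5, lo=3, hi=7):
    """Keep current, available, well-reviewed options; cap at 3-7."""
    viable = [c for c in candidates
              if c["available"] and c["is_current"] and c["rating"] >= min_rating]
    viable.sort(key=lambda c: c["rating"], reverse=True)
    if len(viable) < lo:
        return viable  # too few survivors: widen the search rather than pad
    return viable[:hi]
```

Note the breadth-first point: the cap is applied only after scoring all survivors, so the first good option found gets no special treatment.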


Step 4: Match & Rank

Score each candidate against the preference profile:

Candidate → Values alignment + Constraint fit + History match + Mood fit

Disqualify anything that violates hard constraints.

Rank by total alignment, not just one dimension.
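A sketch of this step, assuming each candidate carries per-dimension fit scores in [0, 1] plus raw attributes for the hard-constraint check; the uniform weighting is an assumption:

```python
def total_alignment(c):
    """Sum the four fit dimensions from the formula above."""
    return c["values_fit"] + c["constraint_fit"] + c["history_fit"] + c["mood_fit"]

def rank(candidates, constraints):
    """Drop hard-constraint violations, then sort by total alignment."""
    viable = [c for c in candidates
              if all(c["attrs"].get(k, 0) <= limit
                     for k, limit in constraints.items())]
    return sorted(viable, key=total_alignment, reverse=True)
```

Disqualification happens before scoring: a candidate over budget never reaches the ranking, no matter how well it fits elsewhere.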


Step 5: Recommend

Present 1-3 recommendations:

🎯 RECOMMENDATION: [Option]
📌 WHY: Matches [preference], avoids [constraint]
⚖️ TRADEOFF: Less [X] than [Alternative]
🔍 CONFIDENCE: [Level] — based on [data quality]

Adaptive Learning

After each recommendation:

  • Track outcome: Accepted? Modified? Rejected?
  • Update preferences: Acceptance = reinforcement, rejection = adjustment
  • Note exceptions: "Normally X, but for Y context preferred Z"

Store learnings in memory for future recommendations.
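The reinforcement/adjustment rule can be sketched as a per-dimension weight update; the 0.5 starting weight and 0.1 step size are arbitrary assumptions:

```python
def update_preference(weights, dimension, outcome, step=0.1):
    """Accepted -> reinforce; rejected -> adjust down; modified -> hold."""
    w = weights.get(dimension, 0.5)  # unknown dimensions start neutral
    if outcome == "accepted":
        w = min(1.0, w + step)       # reinforcement, clamped to [0, 1]
    elif outcome == "rejected":
        w = max(0.0, w - step)       # adjustment, clamped to [0, 1]
    weights[dimension] = w           # "modified" leaves the weight as-is
    return weights
```

Exceptions ("Normally X, but for Y context preferred Z") would live alongside these weights as context-keyed overrides rather than changes to the base weight.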


Traps

  • Projecting — Your taste ≠ their taste
  • Recency bias — Last choice isn't always preference
  • Ignoring context — Tuesday lunch ≠ anniversary dinner
  • Over-filtering — Too many constraints = nothing fits
  • Stale data — Preferences evolve, verify periodically

Recommendations are predictions. More context = better predictions.

Files

3 total