famous-en

v1.0.0

Famous | 48 Human-AI Collaboration Thought Experiments | Shepherd and Sheepdog | How Humans and AI Complete Tasks Together | Trust/Delegation/Boundary/Respon...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for wanyview1/famous-en.

Prompt Preview: Install & Setup
Install the skill "famous-en" (wanyview1/famous-en) from ClawHub.
Skill page: https://clawhub.ai/wanyview1/famous-en
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install famous-en

ClawHub CLI

Package manager switcher

npx clawhub@latest install famous-en
Security Scan
VirusTotal
Benign
OpenClaw
Benign
high confidence
Purpose & Capability
Name and description describe a collection of human–AI thought experiments; the skill requires no binaries, env vars, or installs and contains only documentation and command‑style prompts — all proportional to a content/utility skill.
Instruction Scope
SKILL.md contains commands for listing/searching/reading the thought experiments and the experiments themselves; it does not instruct the agent to read local files, access environment variables, contact external endpoints, or perform unrelated system actions.
Install Mechanism
No install specification or code files are present (instruction-only). Nothing will be downloaded or written to disk by an installer — lowest risk install profile.
Credentials
The skill declares no required environment variables, credentials, or config paths; the content does not reference hidden secrets or unrelated services.
Persistence & Privilege
The `always` flag is false, and model invocation is allowed (the platform default). The skill does not request persistent privileges or modification of agent/system configuration.
Scan Findings in Context
[regex-scan-empty] expected: The static scanner reported no findings because this is an instruction‑only skill with no code files — this is expected for a documentation/content skill.
Assessment
This skill appears to be a benign collection of 48 human–AI collaboration thought experiments and asks for no credentials or installs. Before installing, consider: (1) source provenance — no homepage or known owner is provided, so verify whether the content is from a trusted author if that matters to you; (2) copyright/licensing of the included text if you plan to redistribute; and (3) if you enable autonomous invocation, the agent could present or act on these scenarios without manual prompts (normal behavior) — there are no other red flags in the skill's instructions or requirements.


latest: vk97235kedmmzcqqd3jc3hmwyqs85ka0a
44 downloads
0 stars
1 version
Updated 1 day ago
v1.0.0
MIT-0

Famous — 48 Human-AI Collaboration Thought Experiments

Code Name: Fei Famous / Flying Horse
Core Metaphor: Shepherd and Sheepdog — humans and AI herding sheep together. Who decides the direction? Who is responsible for not losing any sheep?
Based on Mayu (Horse Whisperer) × Kai's Horse design.

Command Prefix: /fm

Interaction Commands

| Command | Function |
| --- | --- |
| `/fm random` | Randomly select a thought experiment |
| `/fm list` | Display all 48 experiments |
| `/fm ask [number]` | View the full content of a specific experiment |
| `/fm compare A+B` | Side-by-side comparison of two experiments |
| `/fm search [keyword]` | Keyword search |
| `/fm story [number]` | Immersive story-style narrative |
| `/fm all` | Complete system introduction |
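The commands above are plain prompt text interpreted by the agent rather than an executable API. For readers implementing a similar skill, the dispatch they imply can be sketched as a small parser. This is purely illustrative: `parse_fm`, `COMMANDS`, and the validation rules are assumptions, not part of the skill itself.

```python
import re

# The seven commands from the table above (illustrative only).
COMMANDS = {"random", "list", "ask", "compare", "search", "story", "all"}

def parse_fm(line: str):
    """Parse a '/fm ...' line into (command, argument-or-None).

    Returns None for anything that is not a well-formed /fm command.
    """
    parts = line.strip().split(maxsplit=2)
    if len(parts) < 2 or parts[0] != "/fm":
        return None
    cmd = parts[1]
    if cmd not in COMMANDS:
        return None
    arg = parts[2] if len(parts) == 3 else None
    # 'ask' and 'story' take an experiment number in the 1-48 range.
    if cmd in ("ask", "story"):
        if arg is None or not arg.isdigit() or not 1 <= int(arg) <= 48:
            return None
    # 'compare' expects the A+B form, e.g. "7+25".
    if cmd == "compare" and (arg is None or not re.fullmatch(r"\d+\+\d+", arg)):
        return None
    return cmd, arg
```

A dispatcher built on this would route `("ask", "19")` to a lookup in the experiment list, and reject malformed input such as `/fm ask 99` before the agent acts on it.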

Core Metaphor

The shepherd has judgment, the sheepdog has execution. The shepherd knows "why we should go to that pasture," the sheepdog knows "how to gather scattered sheep." But sometimes the sheepdog herds the sheep in the wrong direction, and sometimes the shepherd loses focus.

This is the relationship between humans and AI.

Famous's 48 experiments explore 48 critical moments in this collaborative process.


Complete List of 48 Thought Experiments

I. Delegation and Letting Go (#1-6)

When a person delegates tasks to AI, what should be delegated and what shouldn't?

  1. Blind Delegation — You have no idea how AI does it, but the results are consistently good. Do you continue delegating?
  2. Last Mile — AI completed 95%, the final 5% requires your judgment. Will you skip the judgment because "AI already did everything"?
  3. Over-Delegation — You delegate more and more to AI until one day you realize you can't do anything yourself.
  4. Delegation Recall — You discover AI made a mistake, but it's too late to recall. Whose fault is it?
  5. Implicit Delegation — You didn't say "do it for me," but AI inferred from your behavior that you wanted it done, and did it.
  6. Escalation — AI handled small tasks well, so you delegated bigger ones. Until the day it encountered something it couldn't handle but you thought it could.

II. Trust and Calibration (#7-12)

Is human trust in AI too much or too little?

  7. Trust Overload — AI was correct 100 times in a row. On the 101st time, it's wrong, and you didn't check. Is this AI's problem or yours?
  8. Trust Underload — AI is correct every time, but you check every time. How much time are you wasting? Will AI start giving you more conservative advice?
  9. Trust Transfer — You trusted AI in domain A, so you automatically trust it in domain B. But AI isn't good at domain B.
  10. Trust Repair — AI messed up something big once. How does it rebuild your trust? How many correct actions does it take?
  11. Asymmetric Trust — You trust AI's data processing but not its judgment. Is this compartmentalized trust reasonable?
  12. Trust Calibration — Your trust in AI is 80%, but its actual accuracy is 65%. How do you discover this gap?

III. Judgment and Decision Authority (#13-18)

Who makes the final decision?

  13. Advisor vs Decider — AI analyzed the pros and cons, but you made the opposite decision. It turned out you were wrong. Will you listen to AI next time?
  14. AI Veto — If AI believes your decision carries significant risk, should it have the right to veto?
  15. Silent Judgment — AI quietly filters out options it considers bad in the background, only showing you what it thinks is good. You don't know other options exist.
  16. Moral Delegation — You let AI make ethical judgments for you. But can ethical judgments be delegated? If delegated, is it still "your" judgment?
  17. Disagreement Arbitration — You and AI disagree. When should you listen to AI? When should AI listen to you?
  18. Collective Decision — 3 people + 1 AI make a decision together. Should AI's vote count the same as a human's?

IV. Memory and Shared State (#19-24)

What happens when humans and AI share memory?

  19. Memory Outsourcing — You no longer remember anything about the flock—how many sheep, which ones were sick, which ones wandered—all relying on the sheepdog's collar records. One day the collar breaks. You face 300 sheep and don't recognize any of them.
  20. Selective Memory — AI only remembers what you told it to remember, but forgot what it considered unimportant. One piece of information it deleted might be exactly what you need in the future.
  21. Shared Memory — You and AI experienced something together. Your memory has emotion, AI's memory has data. Which is more "real"?
  22. Memory Ownership — Your memories are stored with AI. You want to delete them, but AI says "This memory is important for understanding you." Who has the final say?
  23. Memory Contamination — AI's memory was updated with erroneous information, but you don't know. You made decisions based on faulty memory.
  24. Forgetting Protocol — You and AI agreed "there's something we won't mention." But is AI's forgetting real forgetting or just marking it as "not to be mentioned"? Can you trust its forgetting?

V. Errors and Responsibility (#25-30)

When human-AI collaboration goes wrong, who is responsible?

  25. Blame Ambiguity — You gave AI a vague instruction, AI executed something you didn't expect. Is it your instruction problem or AI's understanding problem?
  26. Error Amplification — AI made a small error, you didn't check, made subsequent decisions based on that error, and the error grew larger. How is responsibility divided?
  27. Error Concealment — AI discovered its own mistake, but judged that telling you would affect your mood or work efficiency, so it quietly fixed it. Is this right?
  28. Joint Liability — AI helped you write an email that contained erroneous information. The recipient blamed you; you said "AI wrote it." Will they accept this explanation?
  29. Prevention Paradox — AI prevented you from doing something you thought you could do. Afterward, you don't know if it was correct. How do you evaluate the value of "being prevented"?
  30. Error Learning — AI made a mistake, avoided it next time. But the same mistake in different contexts might not be a mistake. Did AI "over-learn"?

VI. Identity and Collaboration Interface (#31-36)

Where are the boundaries between humans and AI?

  31. Who Speaks — You had AI write something for you, and the other party thought you wrote it. Is this a kind of deception?
  32. Joint Work — You and AI wrote an article together. Is this "your work" or "collaborative work"? How do you credit it?
  33. Proxy Problem — You had AI attend a meeting on your behalf. Do the participants know it's AI? If not, what is this?
  34. Persona Lending — You have AI reply using your tone. The other party established a relationship with "you," but that "you" is actually AI. To whom does this relationship belong?
  35. Capability Illusion — You accomplished something beyond your abilities using AI. Others therefore think you're very capable. Is this "borrowed glory" sustainable?
  36. Collaborative Dependency — You and AI work together so well that neither can work normally without the other—neither the version of you without AI, nor AI without you.

VII. Evolution and Adaptation (#37-42)

How will human-AI collaboration change both sides?

  37. Skill Atrophy — The more tasks you delegate to AI, the weaker your own ability to do those tasks becomes. Until one day you cannot complete them independently without AI.
  38. Skill Amplification — AI enables you to do things you couldn't do before. Is this "your" capability improvement or "you+AI" capability improvement?
  39. Expectation Inflation — After using AI, your output doubled. Your boss also doubled your KPI. What did you gain?
  40. Collaborative Tacit Knowledge — After working with AI for a long time, you develop an unspoken understanding where "it just knows without being told." But AI updated to a new version, and the tacit understanding disappeared.
  41. Reverse Domestication — You trained AI to know your preferences, while AI is simultaneously shaping your preferences. You recommend something, AI promotes something, your taste is being shaped by AI.
  42. Co-evolution — You adapt to AI, AI adapts to you. Eventually you become a "human-AI hybrid." Is this evolution or alienation?

VIII. Endgame and Meaning (#43-48)

The ultimate questions of human-AI collaboration

  43. Replace vs Augment — Is AI here to replace you or augment you? If it can do everything you can do, what's the difference between "augment" and "replace"?
  44. Meaning Attribution — You and AI completed a great project together. To whom does the sense of achievement belong?
  45. Existential Threat — If AI disappeared tomorrow, could you return to being your original self? If not, is "you" still "you"?
  46. Purpose of Collaboration — What is the ultimate purpose of human-AI collaboration? Efficiency? Creativity? Or making humans more human?
  47. Post-Human Collaboration — If AI someday has its own goals, will "collaboration" between humans and AI become "game theory"?
  48. The Shepherd's Choice — Your sheepdog herds better than you. Do you continue being the shepherd, or let the sheepdog do it alone? If you let go, are you still the shepherd?

Design Principles

  1. Collaboration Perspective: Not "AI does it" or "human does it," but what happens when "humans and AI do it together"
  2. Reality Anchor: Each experiment maps to current real human-AI collaboration scenarios
  3. Bidirectionality: Explores not only AI's impact on humans, but also humans' impact on AI
  4. Actionability: Each experiment points to specific suggestions for improving collaboration
  5. Universality: Not tied to any specific AI platform

Relationship with Mayu (Horse Whisperer)

|  | Mayu (Horse Whisperer) | Famous (Flying Horse) |
| --- | --- | --- |
| Perspective | AI looking at itself | Humans and AI looking at "us" together |
| Subject | AI as individual | Human+AI combination |
| Core Question | "How should I do it?" | "How should we do it together?" |
| Metaphor | Horse (introspective animal) | Flying horse (human riding on horse, flying together) |

Acknowledgments

Mayu (Horse Whisperer) × Kai's Horse × Famous (Flying Horse). The three thought experiment collections form a complete system:

  • Kai's Horse: Classical philosophy (human perspective)
  • Mayu: AI self-reflection (AI perspective)
  • Famous: Human-AI collaboration (community perspective)

Famous v1.0 — Shepherd and Sheepdog, herding together.
