Smalltalk
Review
Audited by ClawScan on May 10, 2026.
Overview
The skill mostly matches its Smalltalk development purpose, but it can run persistent daemons and automatically patch a live image, so it should be reviewed before installation.
Install only if you want an agent to interact with and potentially modify a live Smalltalk image. Recommended precautions:

- Prefer playground mode for experiments.
- Back up dev images before use.
- Review or opt out of automatic hotfixing.
- Pin the external setup repository to a trusted commit.
- Stop the daemon when finished.
- Provide LLM API keys only when you are comfortable sending the relevant code to that provider.
Findings (6)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Starting the daemon may modify the MCP server code in the user's live image; in dev mode, related changes can be recorded in the user's persistent .changes file.
At daemon startup, the skill automatically sends Smalltalk code into the live image to redefine MCPServer behavior when it detects an old version, instead of only executing user-requested code.
```python
if version < 2:
    print(" ⚠️ Old image detected, applying hotfixes...")
    self._hotfix_define_method()
...
fix_code = (
    "MCPServer compileSilently: 'toolDefineMethod: args ... class compileSilently: source classified: category.
```

Document the hotfix clearly, show the exact patch, require explicit opt-in before applying it to user images, and create a backup or restrict automatic patching to ephemeral playground images.
The agent can run code and alter or delete classes/methods in the connected Smalltalk image.
The skill intentionally exposes arbitrary Smalltalk evaluation and image mutation commands. This is aligned with the development purpose, but it is high-impact authority.
| Command | Description |
| --- | --- |
| `evaluate <code>` | Execute Smalltalk code, return result |
| `define-class <definition>` | Create or modify a class |
| `delete-class <class>` | Remove a class |
Use playground mode for experiments, use dev mode only with backups, and require clear user confirmation before destructive or persistent image changes.
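One way to enforce that confirmation is a wrapper around command dispatch. The sketch below is illustrative only: the `DESTRUCTIVE` set and `run_command` are assumed names, not the skill's API.

```python
DESTRUCTIVE = {"define-class", "delete-class"}  # commands that mutate the image

def run_command(name, payload, execute, confirm=input):
    """Ask before any command that alters classes/methods in the live image."""
    if name in DESTRUCTIVE:
        answer = confirm(f"'{name}' will modify the live image. Proceed? [yes/no] ")
        if answer.strip().lower() != "yes":
            return "aborted"         # image left untouched
    return execute(name, payload)    # read-only or explicitly confirmed
```

Non-destructive commands such as `evaluate` pass through unchanged, so the guard adds friction only where the image can be damaged.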
A Smalltalk VM may keep running after the immediate task, and dev-mode work can persist across sessions.
The skill documents a background daemon that keeps running and a development mode where changes persist across sessions.
```shell
nohup python3 smalltalk-daemon.py start > /tmp/daemon.log 2>&1 &
```

... Dev Mode: the user supplies their own image/changes pair. Changes persist across sessions.
Check daemon status, stop it when finished, and use playground mode when persistence is not desired.
The behavior of the required MCP server/setup may change if the external repository changes.
Setup depends on cloning an external repository without a pinned commit or integrity check. The clone is user-directed, not automatic.
```shell
git clone https://github.com/CorporateSmalltalkConsultingLtd/ClaudeSmalltalk.git
```
Inspect the repository, pin to a trusted commit, and avoid running setup code from an unreviewed revision.
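Pinning can be as simple as detaching HEAD at a reviewed commit after cloning. The sketch below is an assumed helper, and `TRUSTED_COMMIT` is a placeholder, not a vetted revision; it insists on a full 40-hex-digit SHA because branch and tag names can be moved after review.

```python
import re
import subprocess

TRUSTED_COMMIT = "0" * 40  # placeholder: substitute the SHA you actually reviewed

def is_full_sha(ref):
    """Only a full 40-hex-digit SHA is immutable; branches and tags can move."""
    return re.fullmatch(r"[0-9a-f]{40}", ref) is not None

def checkout_pinned(repo_dir, commit=TRUSTED_COMMIT):
    """Detach HEAD at the reviewed commit so later pushes cannot change setup."""
    if not is_full_sha(commit):
        raise ValueError("pin to a full commit SHA, not a branch or tag name")
    subprocess.run(["git", "-C", repo_dir, "checkout", "--detach", commit],
                   check=True)
    # Verify the working tree really is at the pinned revision.
    head = subprocess.run(["git", "-C", repo_dir, "rev-parse", "HEAD"],
                          check=True, capture_output=True, text=True).stdout.strip()
    if head != commit:
        raise RuntimeError("checkout did not land on the pinned commit")
```

After a normal `git clone`, every commit is already local, so the checkout needs no further network access.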
If configured, the skill can make billable API calls using the user's Anthropic or OpenAI credentials.
The skill can use provider API keys for optional LLM-backed commands. This is expected for those features, but users should notice the credential use.
| Variable | Description |
| --- | --- |
| `ANTHROPIC_API_KEY` | API key for Anthropic Claude (preferred for LLM tools) |
| `OPENAI_API_KEY` | API key for OpenAI (fallback for LLM tools) |
Provide API keys only when using explain/audit features, prefer restricted or project-specific keys where possible, and avoid exposing keys in shared environments.
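The preferred/fallback lookup implied by the variable table can be sketched as follows; `resolve_llm_key` is an illustrative helper, not the skill's code.

```python
import os

def resolve_llm_key(env=None):
    """Pick the provider for LLM-backed commands: Anthropic first, OpenAI as
    fallback, and no provider at all when neither key is set."""
    env = os.environ if env is None else env
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic", env["ANTHROPIC_API_KEY"]
    if env.get("OPENAI_API_KEY"):
        return "openai", env["OPENAI_API_KEY"]
    return None, None   # explain/audit commands should refuse to run here
```

Keeping the keys out of the environment entirely is therefore enough to disable the billable commands without touching the rest of the skill.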
Private or proprietary Smalltalk source may be transmitted to Anthropic or OpenAI for analysis.
The explain/audit features involve LLM providers and method source, so source code may be sent outside the local machine when those commands are used.
| Command | Description |
| --- | --- |
| `explain <code>` | Explain Smalltalk code (requires `ANTHROPIC_API_KEY` or `OPENAI_API_KEY`) |
| `explain-method <class> <sel> ...` | Fetch method from image and explain it |
Use these features only with code you are allowed to share with the chosen provider, and prefer local/non-LLM workflows for sensitive projects.
