Pass. Audited by ClawScan on May 2, 2026.
Overview
InkOS appears to be a novel-writing assistant whose behavior is disclosed in its documentation, with normal but important dependencies: an npm package, LLM API credentials, outbound provider calls, and local manuscript memory.
Before installing, verify the @actalk/inkos npm package and its linked repository, configure API keys via environment variables, use only trusted LLM endpoints, monitor token costs, and remember that local story state and memory files may retain private manuscript content.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
A single writing command may trigger multiple LLM calls and automatic draft revisions.
The skill intentionally chains several model-agent steps and automatic revision once a chapter workflow is invoked. This is disclosed and central to the writing purpose, but it can consume API quota and change local draft/review artifacts without per-step confirmation.
orchestrates a multi-agent pipeline ... to generate, audit, and revise novel content with zero human intervention per chapter
Start with small chapter counts, review generated chapters before approval/export, and monitor provider token usage and costs.
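Because a single command can revise drafts without per-step confirmation, one lightweight safeguard (a sketch, not part of the skill itself; the project path is an assumption) is to keep the project directory under git so every automatic revision is diffable before approval:

```shell
# Sketch: version a hypothetical InkOS project directory with git
# so automatic revisions are reviewable before approval/export.
cd my-novel            # assumed project directory name
git init -q
git add -A && git commit -qm "baseline before chapter run"

# ... invoke the chapter workflow here ...

git diff --stat        # see which files the run changed
git diff story/        # review the actual text changes
```

This costs nothing per run and makes it easy to roll back a revision pass that went wrong.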
Installing the skill means trusting the external npm package implementation.
The runnable implementation is installed from an external npm package, and the provided scan context includes no package source files. This is expected for a CLI package install, but users are relying on npm/GitHub provenance outside the reviewed SKILL.md.
node | package: @actalk/inkos
Verify the npm package and linked GitHub repository before installing, and prefer pinned or reviewed versions in sensitive environments.
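Assuming a standard npm workflow (the version number below is a placeholder, not a known release), pre-install verification can look like:

```shell
# Inspect package metadata before installing
npm view @actalk/inkos repository.url maintainers versions

# Compare repository.url against the linked GitHub project, then
# install a pinned, reviewed version rather than "latest".
npm install -g @actalk/inkos@1.0.0   # 1.0.0 is a placeholder version
```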
The configured API key may incur provider charges and should be treated as a sensitive credential.
The skill requires an LLM provider API key. This is expected for an LLM-powered writing CLI, and the instructions advise using an environment variable rather than putting the key directly in commands.
"requires": { "bins": ["inkos", "node"], "env": ["OPENAI_API_KEY"] } ... Prefer --api-key-env so the key never appears in shell history
Use a scoped provider key where possible, keep it in environment variables or a secret manager, and monitor account usage.
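A minimal shell sketch of the recommended pattern (only the --api-key-env flag appears in the skill's own instructions; the rest of the invocation is hypothetical):

```shell
# Load the key from a protected file or secret manager, never inline
# on the command line where it would land in shell history.
export OPENAI_API_KEY="$(cat "$HOME/.secrets/openai_key")"

# Pass the variable *name*, not the value, so the key stays out of
# history and process listings (per the skill's --api-key-env advice).
inkos --api-key-env OPENAI_API_KEY
```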
Project memory may contain sensitive creative material and may shape later output.
The skill persists manuscript state and memory for later retrieval. This is appropriate for long-form continuity, but private drafts, worldbuilding, and reference material may remain on disk and affect future generations.
Truth files are persisted as schema-validated JSON (`story/state/*.json`) ... SQLite temporal memory database (`story/memory.db`) enables relevance-based retrieval
Keep project directories private, review or delete story/state and story/memory.db before sharing a project, and avoid putting secrets into briefs or manuscripts.
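Before sharing a project, the persisted state can be audited with a short script. This is a sketch assuming only the paths quoted above (story/state/*.json and story/memory.db); the table names and JSON contents are whatever the skill actually wrote:

```python
import glob
import json
import sqlite3
from pathlib import Path

def audit_project(root: str) -> None:
    """List persisted state files and memory tables so they can be
    reviewed (or deleted) before sharing the project directory."""
    base = Path(root)

    # Truth files: schema-validated JSON under story/state/
    for path in sorted(glob.glob(str(base / "story" / "state" / "*.json"))):
        with open(path) as f:
            data = json.load(f)
        keys = list(data) if isinstance(data, dict) else f"{len(data)} items"
        print(f"{path}: {keys}")

    # Temporal memory: SQLite database at story/memory.db
    db = base / "story" / "memory.db"
    if db.exists():
        con = sqlite3.connect(db)
        tables = [row[0] for row in con.execute(
            "SELECT name FROM sqlite_master WHERE type='table'")]
        print(f"{db}: tables={tables}")
        con.close()

audit_project(".")
```

Anything the script lists is content that would travel with the project if you zip or commit the directory as-is.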
Drafts, briefs, and context may be sent to the selected LLM provider or proxy.
The skill supports external and custom LLM provider endpoints. The artifact explicitly warns users to use trusted endpoints, which makes this disclosed and purpose-aligned, but manuscript prompts and credentials depend on the configured provider.
Configure your LLM provider (OpenAI, Anthropic, or any OpenAI-compatible API) ... point ONLY to trusted endpoints
Use reputable providers or trusted proxies, review their data-retention policies, and avoid sending confidential material to untrusted endpoints.
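How the endpoint is selected depends on the skill's own configuration. For OpenAI-compatible clients the base URL is conventionally taken from an environment variable, so a hedged example of pinning a trusted endpoint looks like:

```shell
# Point the client at a known, trusted endpoint only.
# OPENAI_BASE_URL is the conventional variable for OpenAI-compatible
# SDKs; whether InkOS reads it is an assumption, so check its docs.
export OPENAI_BASE_URL="https://api.openai.com/v1"
export OPENAI_API_KEY="..."   # scoped key for this provider only
```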
