InkOS - Autonomous Novel Writing Agent

Pass. Audited by ClawScan on May 13, 2026.

Overview

InkOS presents as a coherent novel-writing tool, but users should be aware that it installs an external npm CLI, requires LLM API keys/providers, can run multi-agent writing loops, and stores local story memory.

This looks acceptable for a novel-writing workflow if you trust the npm package and chosen model provider. Use a dedicated API key with spending limits, point only at trusted custom endpoints, keep private manuscripts in protected project folders, and run small batches until you understand the costs and the local files it creates.

Findings (5)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Installing the skill requires trusting the npm package and its updates, not just the visible SKILL.md instructions.

Why it was flagged

The runtime behavior depends on an external npm package, while the provided artifact set contains only SKILL.md and no package source code.

Skill content
node | package: @actalk/inkos
Recommendation

Install only from the expected package source, review the package/homepage if possible, and pin or verify versions for important projects.
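The verification steps above can be sketched with standard npm commands. The version placeholder is deliberate; substitute whichever version you have actually reviewed.

```shell
# Inspect what the registry will install before trusting it.
npm view @actalk/inkos version          # latest published version
npm view @actalk/inkos repository.url   # confirm it matches the expected source repo

# Install a specific, reviewed version rather than floating on "latest".
# --ignore-scripts skips npm lifecycle scripts on first inspection; note that
# some packages need those scripts to finish setup, so re-run without the
# flag once you trust the package.
npm install -g @actalk/inkos@<reviewed-version> --ignore-scripts
```

Pinning an exact version means an upstream update cannot silently change what runs on your machine until you choose to upgrade.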

What this means

The configured API key can incur provider usage costs and grants access to the selected LLM account or project.

Why it was flagged

The tool requires an LLM provider credential and, correctly, encourages supplying it via an environment variable rather than embedding the key in the command line, where it would land in shell history.

Skill content
export OPENAI_API_KEY=sk-xxx
inkos config set-global --provider openai --base-url https://api.openai.com/v1 --api-key-env OPENAI_API_KEY --model gpt-4o
Recommendation

Use a dedicated, least-privileged API key with spending limits, and avoid sharing project folders or logs that might reveal configuration.
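One way to follow this recommendation in bash, keeping the key literal out of shell history and logs, is a minimal sketch like the following (the `read -s` approach is standard bash, not an InkOS feature):

```shell
# Read the key interactively; -s suppresses echo so it never appears on screen,
# -r prevents backslash mangling. Nothing is written to shell history.
read -rs OPENAI_API_KEY
export OPENAI_API_KEY

# The CLI then references the variable name, never the secret itself:
inkos config set-global --provider openai --base-url https://api.openai.com/v1 \
  --api-key-env OPENAI_API_KEY --model gpt-4o
```

Pair this with a key created solely for InkOS, scoped to one project with a hard spending cap, so a leak or runaway batch is bounded.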

What this means

Creative briefs, imported chapters, style references, and story context may be processed by the configured model provider.

Why it was flagged

The skill can send writing context to user-configured external model endpoints, including custom/proxy providers; the documentation warns users to trust those endpoints.

Skill content
custom OpenAI-compatible provider support ... For compatible/proxy endpoints, use --provider custom and point ONLY to trusted endpoints
Recommendation

Use only trusted model endpoints and avoid sending confidential manuscripts or private source material unless the provider’s data handling terms are acceptable.
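An endpoint allowlist is one way to enforce "trusted endpoints only" before anything reaches `inkos config set-global --base-url`. This is a hypothetical helper, not part of InkOS; the allowlisted host is illustrative.

```python
from urllib.parse import urlparse

# Illustrative allowlist: only hosts you have explicitly vetted belong here.
TRUSTED_HOSTS = {"api.openai.com"}

def is_trusted_endpoint(base_url: str) -> bool:
    """Accept only HTTPS URLs whose host is explicitly allowlisted."""
    parts = urlparse(base_url)
    return parts.scheme == "https" and parts.hostname in TRUSTED_HOSTS

print(is_trusted_endpoint("https://api.openai.com/v1"))  # True
print(is_trusted_endpoint("http://api.openai.com/v1"))   # False: not HTTPS
print(is_trusted_endpoint("https://proxy.example/v1"))   # False: unknown host
```

Rejecting plain HTTP matters here because manuscript text would otherwise cross the network unencrypted to the proxy.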

What this means

Imported text, worldbuilding notes, and generated story facts can remain in project files and influence later chapters.

Why it was flagged

The skill keeps persistent local story state and retrieval memory so future writing can reuse prior context.

Skill content
Truth files are persisted as schema-validated JSON (`story/state/*.json`) ... SQLite temporal memory database (`story/memory.db`) enables relevance-based retrieval
Recommendation

Keep projects in trusted directories, review or delete memory/state files when importing sensitive material, and avoid mixing unrelated private content into a writing project.
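Reviewing or deleting that persisted state can be scripted. This sketch assumes only the layout named in the finding (`story/state/*.json` truth files and the `story/memory.db` SQLite database); the helper functions are hypothetical, not InkOS commands.

```python
from pathlib import Path

def list_story_artifacts(project: Path) -> list[Path]:
    """Enumerate persisted state/memory files so they can be reviewed or removed."""
    artifacts = sorted((project / "story" / "state").glob("*.json"))
    memory_db = project / "story" / "memory.db"
    if memory_db.exists():
        artifacts.append(memory_db)
    return artifacts

def purge_story_artifacts(project: Path) -> int:
    """Delete the artifacts (e.g. after importing sensitive material); returns count removed."""
    removed = 0
    for path in list_story_artifacts(project):
        path.unlink()
        removed += 1
    return removed
```

Listing before purging lets you inspect exactly which files captured imported material, rather than deleting blind.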

What this means

Large batch runs can consume API credits and make many local project changes before the user reviews the output.

Why it was flagged

The tool intentionally automates multi-agent drafting, auditing, and revision, which is central to its purpose but may run multiple model calls per chapter.

Skill content
generate, audit, and revise novel content with zero human intervention per chapter ... Self-correction loop runs until all critical issues clear
Recommendation

Start with small chapter counts, monitor token/cost usage, and review chapters before using approve-all or exporting.
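A back-of-envelope cost check before a batch run can make "monitor token/cost usage" concrete. Every number here is an assumption: calls per chapter (draft, audit, revise loop), tokens per call, and per-token price all vary by model and configuration.

```python
def estimate_batch_cost(chapters: int,
                        calls_per_chapter: int = 4,
                        tokens_per_call: int = 6_000,
                        usd_per_1k_tokens: float = 0.01) -> float:
    """Rough upper bound on API spend for a multi-agent batch run.

    Defaults are illustrative assumptions, not InkOS or provider figures.
    """
    total_tokens = chapters * calls_per_chapter * tokens_per_call
    return total_tokens / 1_000 * usd_per_1k_tokens

# A 10-chapter batch at these assumed rates:
print(f"${estimate_batch_cost(10):.2f}")  # $2.40
```

If the self-correction loop re-runs on critical issues, the real call count per chapter can exceed the fixed estimate, so treat the result as a floor and watch the provider dashboard during the first runs.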