Skills

Pass. Audited by ClawScan on May 2, 2026.

Overview

InkOS appears to be a novel-writing assistant whose behavior is disclosed in its documentation, with normal but important dependencies: an external npm package, LLM API credentials, outbound provider calls, and local manuscript memory.

Before installing, verify the @actalk/inkos npm package and its linked repository, configure API keys via environment variables, point the tool only at trusted LLM endpoints, monitor token costs, and remember that local story-state and memory files may retain private manuscript content.

Findings (5)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

A single writing command may trigger multiple LLM calls and automatic draft revisions.

Why it was flagged

The skill intentionally chains several agent steps and automatic revision passes once a chapter workflow is invoked. This is disclosed and central to its writing purpose, but it can consume API quota and modify local draft/review artifacts without per-step confirmation.

Skill content
orchestrates a multi-agent pipeline ... to generate, audit, and revise novel content with zero human intervention per chapter
Recommendation

Start with small chapter counts, review generated chapters before approval/export, and monitor provider token usage and costs.
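One way to keep an autonomous chapter pipeline from silently burning quota is a hard spend ceiling checked before each model call. The sketch below is a generic wrapper, not part of InkOS; the per-token rate and class names are placeholders you would replace with your provider's actual pricing.

```python
class BudgetExceeded(RuntimeError):
    """Raised when the configured spend ceiling would be crossed."""


class TokenBudget:
    """Track estimated spend across many LLM calls and stop at a hard cap."""

    def __init__(self, max_usd: float, usd_per_1k_tokens: float = 0.01):
        # usd_per_1k_tokens is a placeholder rate; use your provider's pricing.
        self.max_usd = max_usd
        self.rate = usd_per_1k_tokens
        self.spent_usd = 0.0

    def charge(self, tokens_used: int) -> float:
        """Record one call's token usage; raise before exceeding the cap."""
        cost = tokens_used / 1000 * self.rate
        if self.spent_usd + cost > self.max_usd:
            raise BudgetExceeded(
                f"would spend ${self.spent_usd + cost:.2f}, cap is ${self.max_usd:.2f}"
            )
        self.spent_usd += cost
        return cost


budget = TokenBudget(max_usd=1.00)
budget.charge(20_000)  # 20k tokens at the placeholder rate -> $0.20 recorded
```

Raising before the charge is recorded means a runaway revision loop halts with the budget intact rather than one call over it.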

What this means

Installing the skill means trusting the external npm package implementation.

Why it was flagged

The runnable implementation is installed from an external npm package, and the provided scan context includes no package source files. This is expected for a CLI package install, but users are relying on npm/GitHub provenance outside the reviewed SKILL.md.

Skill content
node | package: @actalk/inkos
Recommendation

Verify the npm package and linked GitHub repository before installing, and prefer pinned or reviewed versions in sensitive environments.
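Pinning can be checked mechanically. The sketch below (not part of the skill) flags range specifiers such as `^` or `~` in a package.json so that a dependency like @actalk/inkos resolves to one reviewed version:

```python
import json


def unpinned_deps(package_json_text: str) -> list[str]:
    """Return dependency names whose version spec is a range, not an exact pin."""
    manifest = json.loads(package_json_text)
    loose = []
    for section in ("dependencies", "devDependencies"):
        for name, spec in manifest.get(section, {}).items():
            # Exact pins look like "1.4.2"; ranges start with ^, ~, >, <, or *.
            if spec[:1] in "^~><*" or spec in ("latest", ""):
                loose.append(name)
    return loose


manifest = '{"dependencies": {"@actalk/inkos": "^1.0.0"}}'
print(unpinned_deps(manifest))  # -> ['@actalk/inkos']
```

A lockfile committed alongside the pin gives the same guarantee transitively.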

What this means

The configured API key may incur provider charges and should be treated as a sensitive credential.

Why it was flagged

The skill requires an LLM provider API key. This is expected for an LLM-powered writing CLI, and the instructions advise using an environment variable rather than putting the key directly in commands.

Skill content
"requires": { "bins": ["inkos", "node"], "env": ["OPENAI_API_KEY"] } ... Prefer --api-key-env so the key never appears in shell history
Recommendation

Use a scoped provider key where possible, keep it in environment variables or a secret manager, and monitor account usage.
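Keeping the key in the environment, as the skill's `--api-key-env` guidance suggests, pairs well with a small guard that fails fast when the variable is missing and never echoes the secret. The helpers below are illustrative, not InkOS code:

```python
import os


def require_api_key(var_name: str = "OPENAI_API_KEY") -> str:
    """Fetch a credential from the environment without ever printing it."""
    key = os.environ.get(var_name, "").strip()
    if not key:
        raise RuntimeError(f"{var_name} is not set; export it before running")
    return key


def masked(key: str) -> str:
    """Safe form for logs: first 4 characters, then a fixed ellipsis."""
    return key[:4] + "..." if len(key) > 4 else "***"


os.environ["OPENAI_API_KEY"] = "sk-demo-not-real"  # placeholder for the demo
print(masked(require_api_key()))  # -> sk-d...
```

Logging only the masked form means an accidental debug dump cannot leak the full credential.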

What this means

Project memory may contain sensitive creative material and may shape later output.

Why it was flagged

The skill persists manuscript state and memory for later retrieval. This is appropriate for long-form continuity, but private drafts, worldbuilding, and reference material may remain on disk and influence future generated output.

Skill content
Truth files are persisted as schema-validated JSON (`story/state/*.json`) ... SQLite temporal memory database (`story/memory.db`) enables relevance-based retrieval
Recommendation

Keep project directories private, review or delete story/state and story/memory.db before sharing a project, and avoid putting secrets into briefs or manuscripts.
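Before sharing a project, the persisted paths named in the skill (`story/state/*.json` and `story/memory.db`) can be cleared with a few lines. This is a generic cleanup sketch, not an InkOS command; review the state files first if you want to keep worldbuilding notes, since deletion is irreversible.

```python
from pathlib import Path


def scrub_project(project_dir: str) -> list[str]:
    """Delete persisted manuscript state and memory; return what was removed."""
    root = Path(project_dir)
    removed = []
    # Schema-validated truth files, per the skill's own description.
    for path in sorted(root.glob("story/state/*.json")):
        path.unlink()
        removed.append(str(path.relative_to(root)))
    # Temporal memory database used for relevance-based retrieval.
    db = root / "story" / "memory.db"
    if db.exists():
        db.unlink()
        removed.append(str(db.relative_to(root)))
    return removed
```

Running this on a copy of the project, rather than the original, keeps your working continuity data intact.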

What this means

Drafts, briefs, and context may be sent to the selected LLM provider or proxy.

Why it was flagged

The skill supports external and custom LLM provider endpoints. The artifact explicitly warns users to use trusted endpoints, which makes this disclosed and purpose-aligned, but manuscript prompts and credentials depend on the configured provider.

Skill content
Configure your LLM provider (OpenAI, Anthropic, or any OpenAI-compatible API) ... point ONLY to trusted endpoints
Recommendation

Use reputable providers or trusted proxies, review their data-retention policies, and avoid sending confidential material to untrusted endpoints.
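The "trusted endpoints only" advice can be enforced with a simple host allowlist checked before any draft leaves the machine. The hostnames below are examples, not a vetted list; substitute whatever your organization has actually approved.

```python
from urllib.parse import urlparse

# Example allowlist only; replace with hosts you have vetted yourself.
TRUSTED_HOSTS = {"api.openai.com", "api.anthropic.com"}


def check_endpoint(base_url: str) -> str:
    """Reject any provider URL that is not HTTPS to an allowlisted host."""
    parts = urlparse(base_url)
    if parts.scheme != "https":
        raise ValueError(f"refusing non-HTTPS endpoint: {base_url}")
    if parts.hostname not in TRUSTED_HOSTS:
        raise ValueError(f"host {parts.hostname!r} is not on the allowlist")
    return base_url


check_endpoint("https://api.openai.com/v1")  # passes the check
```

Gating on the parsed hostname, rather than a substring match, avoids being fooled by URLs like `https://api.openai.com.evil.example`.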