06 AI Summary

Review

Audited by ClawScan on May 10, 2026.

Overview

The skill’s summarization purpose is reasonable, but it relies on an unprovided LLM client that may handle user content and API keys, so it should be reviewed before use.

Review or obtain the missing llm_client.py before using this skill with real data or API keys. If you proceed, run it in a virtual environment, use separate, limited provider keys, avoid sensitive content unless you accept the provider's data-handling terms, and manage the local ~/.ai_summary.db file.

Findings (5)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

A local file outside this skill could determine what content or credentials are sent to model providers, and users cannot verify that behavior from the supplied artifacts.

Why it was flagged

The skill imports its LLM client from the parent directory, but the provided manifest does not include llm_client.py. That means the code responsible for provider calls, API-key handling, and content transmission is outside the reviewed package.

Skill content

sys.path.insert(0, str(Path(__file__).parent.parent))
from llm_client import UniversalLLMClient

Recommendation

Do not use real content or API keys until the referenced llm_client.py is provided and reviewed, or the skill is changed to include a trusted, pinned client inside the package.
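As a precaution, a consumer of this skill could verify that the expected client file actually exists before configuring any keys. A minimal sketch; the locate_client helper is an assumption based only on the sys.path.insert and import shown above, not part of the skill:

```python
from pathlib import Path

def locate_client(skill_dir: Path) -> Path:
    """Return the llm_client.py the skill would import (from the skill's
    parent directory, per the sys.path.insert above), or fail loudly."""
    candidate = Path(skill_dir).parent / "llm_client.py"
    if not candidate.is_file():
        raise FileNotFoundError(
            f"llm_client.py not found at {candidate}; "
            "review it before supplying API keys"
        )
    return candidate
```

Failing loudly here is deliberate: a silent fallback would reproduce exactly the opaque provider-call path the finding warns about.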

What this means

Running the script may install or upgrade packages in the active Python environment.

Why it was flagged

The install script installs provider SDKs without pinned versions. This is purpose-aligned for an LLM integration, but it changes the Python environment and relies on current package-index contents.

Skill content

pip install openai -q
...
if [ "$install_claude" = "y" ]; then
    pip install anthropic -q

Recommendation

Install in a virtual environment and pin dependency versions if reproducibility matters.
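Beyond pinning at install time, the environment can be checked at startup. A sketch of such a check; the pin values you would pass in are your own choice, since the skill itself pins nothing:

```python
from importlib import metadata

def check_pins(pins):
    """Return packages whose installed version differs from the pinned one.
    A value of None means the package is not installed at all."""
    mismatches = {}
    for name, wanted in pins.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            installed = None
        if installed != wanted:
            mismatches[name] = installed
    return mismatches
```

An empty result means every pinned package is present at the expected version; anything else is worth investigating before letting the skill run.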

What this means

Provider API keys can incur costs and grant access to model accounts.

Why it was flagged

The skill uses optional model-provider API keys. This is expected for the stated purpose, but the registry metadata does not declare any required credentials or environment variables.

Skill content

export ZHIPU_API_KEY="your-key"
...
export OPENAI_API_KEY="your-key"
...
export ANTHROPIC_API_KEY="your-key"

Recommendation

Use separate, limited API keys where possible, avoid sharing config files, and monitor provider usage.
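Before running the skill, it can help to confirm which provider variables are populated without ever echoing their values. A small sketch; the variable names come from the exports above, while the key_status helper is illustrative:

```python
import os

PROVIDER_KEYS = ("ZHIPU_API_KEY", "OPENAI_API_KEY", "ANTHROPIC_API_KEY")

def key_status(env=None):
    """Map each provider variable to 'set' or 'missing', never printing values."""
    env = os.environ if env is None else env
    return {name: ("set" if env.get(name) else "missing")
            for name in PROVIDER_KEYS}
```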

What this means

Private text or project details may leave the local machine for processing by an external model service.

Why it was flagged

When an LLM client is available, user-provided content and project-review prompts are passed to the configured model provider. This is central to the skill’s purpose, but the external data boundary depends on the chosen provider and the missing client implementation.

Skill content

result = self.client.summarize(content, content_type)
...
response = self.client.chat(
    [{"role": "user", "content": review_prompt}],

Recommendation

Avoid confidential content unless the selected provider and the missing client implementation are acceptable for that data.
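If confidential fragments might slip into the input, a lightweight redaction pass before the client calls above is one mitigation. A sketch with assumed patterns; both the key format and the redact helper are illustrative, not part of the skill:

```python
import re

# Illustrative patterns only; tune these to your own secret formats.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{8,}"),        # assumed provider-key shape
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # email addresses
]

def redact(text):
    """Replace anything matching a secret pattern before it leaves the machine."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Pattern-based redaction is best-effort, not a guarantee; it reduces accidental leakage but does not make an external provider safe for genuinely confidential data.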

What this means

Content submitted for summarization may remain on disk after the session and be searchable or exportable later.

Why it was flagged

The skill persists summaries and the original content in a SQLite database under the user’s home directory.

Skill content

db_path = Path.home() / ".ai_summary.db"
...
INSERT INTO summaries (title, type, content, summary_data, provider)

Recommendation

Review, protect, or delete ~/.ai_summary.db if it may contain sensitive material; consider changing the storage path or retention behavior.
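Cleanup can be scripted. A sketch that empties the stored summaries from a given database path; it assumes the summaries table hinted at by the INSERT statement above:

```python
import sqlite3
from pathlib import Path

def purge_summaries(db_path):
    """Delete all rows from the 'summaries' table and return how many
    were removed. Assumes the schema shown in the scanned INSERT."""
    db_path = Path(db_path)
    if not db_path.exists():
        return 0
    with sqlite3.connect(db_path) as conn:  # commits on clean exit
        cursor = conn.execute("DELETE FROM summaries")
    conn.close()
    return cursor.rowcount
```

Note that deleting rows does not necessarily shrink the file or scrub the freed pages; for truly sensitive material, removing ~/.ai_summary.db itself is the safer option.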