06 AI Summary
Review
Audited by ClawScan on May 10, 2026.
Overview
The skill’s summarization purpose is reasonable, but it relies on an unprovided LLM client that may handle user content and API keys, so it should be reviewed before use.
Review or obtain the missing llm_client.py before using this with real data or API keys. If you proceed, run it in a virtual environment, use limited provider keys, avoid sensitive content unless you accept the provider’s data handling, and manage the local ~/.ai_summary.db file.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
A local file outside this skill could determine what content or credentials are sent to model providers, and users cannot verify that behavior from the supplied artifacts.
The skill imports its LLM client from the parent directory, but the provided manifest does not include llm_client.py. That means the code responsible for provider calls, API-key handling, and content transmission is outside the reviewed package.
sys.path.insert(0, str(Path(__file__).parent.parent))
from llm_client import UniversalLLMClient
Do not use real content or API keys until the referenced llm_client.py is provided and reviewed, or the skill is changed to include a trusted, pinned client inside the package.
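If the skill is changed to bundle its own client, that client can be small enough to audit in one sitting. The class name UniversalLLMClient comes from the import shown above, but the constructor, method signatures, and key handling below are assumptions, since the real llm_client.py is not in the reviewed manifest.

```python
import os

class UniversalLLMClient:
    """Minimal stand-in for the missing llm_client.py (interface assumed).

    Keeping the client inside the package makes it reviewable: the key is
    read once, never logged, and every outbound call goes through one method.
    """

    def __init__(self, provider: str = "openai"):
        self.provider = provider
        # Read the key explicitly so reviewers can see exactly what is used.
        self.api_key = os.environ.get(f"{provider.upper()}_API_KEY")

    def summarize(self, content: str, content_type: str = "text") -> dict:
        if not self.api_key:
            raise RuntimeError(f"no API key set for provider {self.provider!r}")
        # A real implementation would call the reviewed provider SDK here.
        raise NotImplementedError("wire up a pinned, reviewed provider SDK")
```

A stub like this fails loudly instead of silently sending content, which is the safer default while the real client is unreviewed.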
Running the script may install or upgrade packages in the active Python environment.
The install script installs provider SDKs without pinned versions. This is purpose-aligned for an LLM integration, but it changes the Python environment and relies on current package-index contents.
pip install openai -q
...
if [ "$install_claude" = "y" ]; then
pip install anthropic -q
Install in a virtual environment and pin dependency versions if reproducibility matters.
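One way to make the install reproducible is to pin the SDK versions in a requirements file and install inside a virtual environment. The version numbers below are placeholders, not recommendations; check PyPI for current releases.

```shell
# Pin the provider SDKs (version numbers are illustrative only)
cat > requirements.txt <<'EOF'
openai==1.30.0
anthropic==0.25.0
EOF
# Then install into an isolated environment instead of the active one:
#   python -m venv .venv && . .venv/bin/activate
#   pip install -r requirements.txt -q
```

Pinning means a later run installs the same code that was reviewed, rather than whatever the package index currently serves.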
Provider API keys can incur costs and grant access to model accounts.
The skill uses optional model-provider API keys. This is expected for the stated purpose, but registry metadata declares no credentials or environment variables.
export ZHIPU_API_KEY="your-key"
...
export OPENAI_API_KEY="your-key"
...
export ANTHROPIC_API_KEY="your-key"
Use separate, limited API keys where possible, avoid sharing config files, and monitor provider usage.
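A small helper can confirm which of the optional keys are configured without ever printing their values. The variable names match the exports above; the masking scheme is a sketch.

```python
import os

PROVIDER_KEYS = ["ZHIPU_API_KEY", "OPENAI_API_KEY", "ANTHROPIC_API_KEY"]

def key_status() -> dict:
    """Report which provider keys are set, masking the actual values."""
    status = {}
    for name in PROVIDER_KEYS:
        value = os.environ.get(name)
        # Show only the last 4 characters so a key never leaks into logs.
        status[name] = f"...{value[-4:]}" if value else "unset"
    return status
```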
Private text or project details may leave the local machine for processing by an external model service.
When an LLM client is available, user-provided content and project-review prompts are passed to the configured model provider. This is central to the skill’s purpose, but the external data boundary depends on the chosen provider and the missing client implementation.
result = self.client.summarize(content, content_type)
...
response = self.client.chat(
    [{"role": "user", "content": review_prompt}],
Avoid confidential content unless the selected provider and the missing client implementation are acceptable for that data.
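If confidential material cannot be kept out of the input entirely, one mitigation is a redaction pass before content reaches the client's summarize or chat call. The patterns below are a minimal illustrative sketch, not a complete scrubber; a real deployment needs a vetted redaction list.

```python
import re

# Illustrative patterns only: obvious e-mail addresses and sk-style keys.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"sk-[A-Za-z0-9]{8,}"), "[API_KEY]"),
]

def redact(text: str) -> str:
    """Replace recognizable secrets before the text leaves the machine."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text
```

Applied as `self.client.summarize(redact(content), content_type)`, this limits what the external provider ever sees.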
Content submitted for summarization may remain on disk after the session and be searchable or exportable later.
The skill persists summaries and the original content in a SQLite database under the user’s home directory.
db_path = Path.home() / ".ai_summary.db"
...
INSERT INTO summaries (title, type, content, summary_data, provider)
Review, protect, or delete ~/.ai_summary.db if it may contain sensitive material; consider changing the storage path or retention behavior.
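The retention concern can be handled with a short maintenance routine. The table name comes from the INSERT statement above; everything else here, including the delete-everything policy, is an assumed sketch rather than part of the skill.

```python
import sqlite3
from pathlib import Path

def purge_summaries(db_path: Path) -> int:
    """Delete all rows from the summaries table; returns rows removed."""
    with sqlite3.connect(db_path) as conn:
        # The context manager commits the transaction on success.
        cur = conn.execute("DELETE FROM summaries")
        return cur.rowcount
```

Running this against ~/.ai_summary.db (or simply deleting the file) clears stored content after a sensitive session; a finer-grained policy would need a timestamp column, which the shown schema does not declare.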
