Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Bid Proposal Manager

v1.0.0

A skill that parses bid, business, and research RFP announcements, vectorizes them, validates required submission documents, automatically extracts related information, and creates a Notion project page. Supports announcements in PDF, HWP, HWPX, DOCX, and web-page formats, with semantic search via PostgreSQL + pgvector.

Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name/description, provided scripts (document_parser.py, proposal_analyzer.py, vectorizer.py, notion_builder.py), DB schema, and templates all align with a bid-proposal management tool. Required binaries (python3, psql) are reasonable for the claimed functionality. However, several capabilities (PostgreSQL + pgvector, Notion API, optional OpenAI/Ollama embedding providers) imply credentials and environment variables that the skill's declared requirements do not list — an inconsistency worth noting.
Instruction Scope
SKILL.md and platform prompts instruct running local Python scripts that will: parse uploaded files/URLs, connect to PostgreSQL + pgvector, call embedding providers (local sentence-transformers or remote OpenAI/Ollama), and call the Notion API. The instructions (and platform files) explicitly reference environment variables (DATABASE_URL, NOTION_API_KEY, EMBEDDING_PROVIDER, OPENAI_API_KEY, OLLAMA_BASE_URL) and tell the agent to execute DB and network operations. Those runtime actions require sensitive credentials and network access but the skill's declared requires.env is empty — the instructions therefore access sensitive configuration not reflected in the declared manifest.
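The manifest gap described above can be checked mechanically. A minimal sketch, not part of the skill itself; the regex and the empty DECLARED list are assumptions based on this scan's findings. It diffs the env vars referenced in script sources against the declared manifest list:

```python
import re

# Per the scan, the skill's declared requires.env list is empty.
DECLARED: list = []

# Matches os.environ["X"], os.environ.get("X"), and os.getenv("X").
ENV_REF = re.compile(r"os\.(?:environ\[|environ\.get\(|getenv\()\s*['\"](\w+)['\"]")

def undeclared_env_vars(sources, declared=DECLARED):
    """Return env vars referenced in the given script sources but not declared."""
    referenced = set()
    for src in sources:
        referenced.update(ENV_REF.findall(src))
    return sorted(referenced - set(declared))
```

Running this over document_parser.py, vectorizer.py, and notion_builder.py would surface DATABASE_URL, NOTION_API_KEY, and the other variables named above as undeclared.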
Install Mechanism
There is no automatic install spec (instruction-only with included scripts). That reduces supply-chain risk because nothing is downloaded/executed automatically by the installer. The setup_guide suggests pip installing many packages, but those are not fetched by the skill at install time. No suspicious external download URLs or extract/install steps are present.
Credentials
The code and docs require credentials and configuration: DATABASE_URL (psql/pgvector), NOTION_API_KEY (Notion integration), EMBEDDING_PROVIDER, and optionally OPENAI_API_KEY or OLLAMA_BASE_URL. Despite that, the skill's declared required-env-vars list is empty and no primaryEnv is set. Requesting full DB credentials and a Notion token is proportionate to the claimed functionality only if access is limited to a dedicated DB user and a least-privilege Notion integration; the manifest should explicitly declare these credentials and justify them. The missing declarations are an inconsistency and increase the risk of accidental credential sharing.
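Before supplying credentials, a preflight check along these lines can catch both missing configuration and an over-privileged DB role. A hedged sketch: the required-variable list and the superuser names are assumptions drawn from this report, not from the skill's code.

```python
from urllib.parse import urlparse

REQUIRED = ["DATABASE_URL", "NOTION_API_KEY", "EMBEDDING_PROVIDER"]
SUPERUSER_NAMES = {"postgres", "root"}  # roles you should not hand to a third-party skill

def preflight(env):
    """Return (missing_vars, warnings) for a dict of environment variables."""
    missing = [k for k in REQUIRED if not env.get(k)]
    warnings = []
    db_url = env.get("DATABASE_URL", "")
    user = urlparse(db_url).username if db_url else None
    if db_url and user is None:
        warnings.append("DATABASE_URL has no explicit user; verify which role it resolves to")
    elif user in SUPERUSER_NAMES:
        warnings.append(f"DATABASE_URL uses high-privilege role '{user}'; use a dedicated low-privilege user")
    return missing, warnings
```

Pairing this with a dedicated PostgreSQL role that has privileges only on the skill's own database keeps a misbehaving script from touching anything else.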
Persistence & Privilege
The skill does not request always:true and will not be force-included. It writes to a database and can create Notion pages (expected for its purpose). It does not appear to modify other skills or system-wide agent settings. Autonomous invocation is allowed (default) but not combined with other high-privilege flags here.
What to consider before installing
- Manifest mismatch: The skill's code and documentation clearly require credentials (DATABASE_URL / psql access, NOTION_API_KEY, and optionally OPENAI_API_KEY or OLLAMA_BASE_URL), but the SKILL metadata declares no required environment variables. Treat this as a red flag: ask the publisher to update the manifest to list the required env vars and explain why each is needed.
- Limit credentials and scope: If you test or run it, create a dedicated PostgreSQL user and database with minimal privileges (not a superuser). Do not supply production DB credentials or admin accounts. For Notion, create an integration and grant it access only to the specific database or page you want the skill to manage.
- Review network endpoints: The included scripts use requests for web fetches and Notion calls; ensure the Notion API key is only sent to api.notion.com and that the embedding-provider endpoints (OpenAI/Ollama) are the expected official ones. If you must supply an OpenAI key, remember that data sent to OpenAI may leave your environment; avoid sending sensitive documents.
- Run in an isolated environment: Try the skill in a sandboxed VM or container. Install dependencies manually following the setup_guide and inspect runtime behavior (network connections, DB writes) before providing real credentials or real data.
- Check data retention and raw-text storage: The DB schema stores raw_text and embeddings. Decide whether storing full document text in your DB is acceptable, and ensure DB access is secured.
- Ask for fixes/clarifications: Request that the author update SKILL.md and the manifest to list the required env vars (DATABASE_URL, NOTION_API_KEY, EMBEDDING_PROVIDER, optional OPENAI_API_KEY/OLLAMA_BASE_URL) and to document exactly which network calls and external services are used. Without these updates the skill's declared requirements are inconsistent with its actual behavior.
- If unsure, get a code review: If you cannot verify these points yourself, have a trusted engineer audit the code for hidden exfiltration endpoints or unexpected file/system access before use.
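The endpoint review above can be partially automated with a host allowlist applied before any request is made. A sketch under the assumption that only the endpoints named in this report are legitimate; adjust ALLOWED_HOSTS for your own Ollama host or other local services:

```python
from urllib.parse import urlparse

# Assumed legitimate endpoints, based on the services this report names.
ALLOWED_HOSTS = {"api.notion.com", "api.openai.com", "localhost", "127.0.0.1"}

def is_allowed(url: str) -> bool:
    """True if the URL targets a host on the allowlist."""
    host = urlparse(url).hostname
    return host is not None and host.lower() in ALLOWED_HOSTS
```

Wiring this into a thin wrapper around the scripts' requests calls, or enforcing the same list with a network-level egress filter in your sandbox, gives a hard stop on unexpected exfiltration endpoints.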

Like a lobster shell, security has layers — review code before you run it.

latest: vk977m3cf63xzfbzbad87sy120s84b3h8

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

📋 Clawdis
Bins: python3, psql
