Skill v0.1.0
ClawScan security
SQL Dreamer · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Suspicious · Apr 28, 2026, 5:50 PM
- Verdict
- suspicious
- Confidence
- medium
- Model
- gpt-5-mini
- Summary
- The skill's code and runtime instructions generally match its stated purpose (SQL-backed pre-feed, archive, cleanup), but there are several inconsistent or missing declarations — notably environment/credential requirements and packaging/install expectations — that warrant caution before installing.
- Guidance
- This skill appears to implement what it claims (SQL pre-feed, archiver, cleanup), but there are inconsistencies you should address before installing:
  - Metadata vs. reality: the registry lists no required env vars, yet SKILL.md and the scripts require database credentials (SQL_PASSWORD/.env) and optionally a CONFLUENCE_API_TOKEN. Treat that as a packaging/documentation bug and verify required secrets before installing.
  - Least privilege: give the skill the minimum DB privileges it needs (ideally a separate read-only account for the pre-feed and a constrained write account for archiving). Don't reuse admin or shared credentials.
  - Audit the code: inspect src/sql_connector.py, scripts/pre_dream_sql_feed.py, and scripts/post_dream_archiver.py to confirm which SQL queries and DELETE statements will run (migrate.py creates tables; the archiver deletes old files/rows). If you have a staging DB, run the scripts there first.
  - Back up before running migrations or pruning: sql/migrate.py will create schemas/tables, and the archiver's pruning will delete files older than N days; back up data and test with dry-run flags where available.
  - Confluence publishing: the Confluence publisher posts to whatever domain and credentials you supply. Only enable it with a dedicated API token, and review the markdown→storage conversion to ensure sensitive content isn't accidentally published.
  - Test in an isolated environment: use a local or mocked SQL Server (or the included tests/mock_dreamer) to run pytest and the scripts with --dry-run and observe behavior without affecting production data.

  If you want, I can: (a) list the exact SQL statements used by the archiver and pre-feed scripts, (b) summarize the places where credentials are read, or (c) produce a short checklist for safe deployment (accounts, backups, dry-run steps).
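The "audit the code" step above can be partially automated before anything runs. A minimal sketch, assuming nothing about the skill's actual code: both regex patterns and the `audit_source` helper are illustrative, and a real audit should still read the scripts by hand.

```python
import re

# Hedged sketch: scan a script's source text for credential reads and
# destructive SQL before executing it. The patterns below are illustrative
# and deliberately broad; they are not the skill's actual code paths.
CRED_PATTERNS = re.compile(r"os\.environ|getenv|SQL_PASSWORD|CONFLUENCE_API_TOKEN")
DESTRUCTIVE_SQL = re.compile(r"\b(DELETE|DROP|TRUNCATE|ALTER)\b", re.IGNORECASE)

def audit_source(source: str) -> dict:
    """Report which risky patterns appear in a script's source text."""
    return {
        "reads_credentials": bool(CRED_PATTERNS.search(source)),
        "destructive_sql": sorted({m.upper() for m in DESTRUCTIVE_SQL.findall(source)}),
    }

# Hypothetical snippet of the kind of code such a scan would flag:
sample = 'pwd = os.getenv("SQL_PASSWORD")\ncur.execute("DELETE FROM DreamLight")'
print(audit_source(sample))
# → {'reads_credentials': True, 'destructive_sql': ['DELETE']}
```

A pass like this tells you which files deserve the closest manual reading; it cannot prove a script safe.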
Review Dimensions
- Purpose & Capability
- note · The name/description (SQL-backed pre-feed and post-dream archiver) aligns with the included scripts and DB schema. However, the registry metadata claims no required environment variables/credentials, while SKILL.md and the code clearly expect SQL credentials (SQL_PASSWORD / .env) and an optional CONFLUENCE token. The README also declares a dependency on a separate sql-connector skill, yet this package includes a src/sql_connector.py file; that inconsistency is worth auditing.
- Instruction Scope
- concern · Runtime instructions and scripts read and write workspace files (memory/YYYY-MM-DD.md, memory/.dreams/short-term-recall.json, memory/dreaming/*), walk up the filesystem to find config/config.yml, and perform SQL reads, writes, and deletions (e.g., clearing DreamLight for a cycle_date). These behaviors are expected for the purpose, but the SKILL.md/README instruct storing DB credentials in ~/.openclaw/workspace/.env and exporting SQL_PASSWORD/CONFLUENCE_API_TOKEN. The skill also contains a Confluence publisher that will POST content to a user-provided Confluence domain. The instructions give the agent broad discretion to access workspace files and DB content; verify that only intended data is queried and archived.
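The walk-up config search flagged above behaves roughly like the sketch below. This is an assumption about the pattern, not the skill's actual logic; `find_config` and the default path are placeholders.

```python
from pathlib import Path
from typing import Optional

def find_config(start: Path, name: str = "config/config.yml") -> Optional[Path]:
    """Walk from `start` up through its ancestors, returning the first
    matching config file found, or None.

    Sketch of the walk-up behavior described in the review; the skill's
    actual search may differ in order or stopping conditions.
    """
    for directory in [start, *start.parents]:
        candidate = directory / name
        if candidate.is_file():
            return candidate
    return None
```

The risk with this pattern is that a config/config.yml placed in any ancestor directory of the workspace would be picked up silently, so confirm which file actually wins on your machine before running the scripts.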
- Install Mechanism
- note · There is no formal install spec in the registry (no automated download/extract), which reduces pipeline risk, but the package contains many Python scripts, a requirements.txt (pyodbc, PyYAML), and migration scripts that the user will run. That mismatch (no install step declared, but code included) is a packaging/documentation inconsistency to be mindful of. No remote, opaque download URLs are present in the manifest.
- Credentials
- concern · Registry metadata lists no required env vars, yet SKILL.md/README and the code expect SQL credentials (SQL_PASSWORD, or .env keys such as SQL_CLOUD_PASSWORD/SQL_LOCAL_PASSWORD) and optionally CONFLUENCE_API_TOKEN. Asking for database credentials is appropriate for a DB-backed skill, but the package fails to declare these requirements in its metadata; that omission impairs safe review and least-privilege decisions. Consider using a limited, read-only DB account for the pre-dream feed and a separate write account for archiving, and avoid granting global DB admin credentials.
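The account split suggested above might look like the following T-SQL for SQL Server, held here as plain strings. Every name (`dream_reader`, `dream_archiver`, `dbo.DreamLight`) and the password placeholders are assumptions for illustration, not identifiers from the skill's schema.

```python
# Illustrative least-privilege split for a SQL Server backend.
# Adapt logins, schema, and table names to your own database.
READ_ONLY_GRANTS = """
CREATE LOGIN dream_reader WITH PASSWORD = '<strong-password>';
CREATE USER dream_reader FOR LOGIN dream_reader;
GRANT SELECT ON SCHEMA::dbo TO dream_reader;  -- pre-feed only reads
"""

ARCHIVER_GRANTS = """
CREATE LOGIN dream_archiver WITH PASSWORD = '<strong-password>';
CREATE USER dream_archiver FOR LOGIN dream_archiver;
GRANT SELECT, INSERT, DELETE ON dbo.DreamLight TO dream_archiver;  -- no DDL, no admin
"""
```

The point of the split is that a compromised or buggy pre-feed run can at worst leak data, not delete it, while the archiver account can touch only the tables it archives.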
- Persistence & Privilege
- ok · The skill is not force-enabled (always: false) and uses normal autonomous invocation (disable-model-invocation: false). It doesn't request to modify other skills or global agent settings. It does perform persistent writes to your SQL database and removes old files per config, which is within the declared scope.
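The file-removal behavior noted above is the kind of step worth rehearsing with a dry run. A minimal sketch, assuming a hypothetical `prune_older_than` helper rather than the archiver's actual function:

```python
import time
from pathlib import Path

def prune_older_than(root: Path, days: int, dry_run: bool = True) -> list:
    """List (and, only when dry_run=False, delete) files under `root`
    whose modification time is older than `days` days.

    Stand-in for the archiver's pruning step as described in this review;
    run with dry_run=True and a backup before any destructive pass.
    """
    cutoff = time.time() - days * 86400
    stale = [p for p in root.rglob("*") if p.is_file() and p.stat().st_mtime < cutoff]
    if not dry_run:
        for path in stale:
            path.unlink()
    return stale
```

Running the dry-run form first gives you the exact list of files a real pass would delete, which you can diff against your backup before committing.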
