Skill v1.0.0

ClawScan security

Lark Report Collector · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Suspicious · Feb 22, 2026, 2:03 PM
Verdict
suspicious
Confidence
medium
Model
gpt-5-mini
Summary
The skill's instructions largely match its stated goal (browser-driven scraping of Lark Reports, plus use of the Lark API), but it does not declare the credentials it requires or its filesystem behavior. It therefore carries unexplained privileges and data-handling steps that you should confirm before installing.
Guidance
Before installing:
(1) Confirm where extracted data will be stored (exact local path, retention, access controls) and that you are comfortable with the agent writing names and departments to disk.
(2) Ask the author to declare the required credentials or environment variables: specifically, how API auth is supplied (the skill references a separate 'lark-api' skill) and whether an authenticated browser profile is required.
(3) If you will allow this skill to run autonomously, restrict its scope or test it in a sandbox first, because it performs browser automation and can access user-visible data.
(4) Since the source/homepage are unknown, prefer not to grant wide access until the credential and file-storage behavior and the identity of the publisher are verified.
(5) If you accept it, ensure the lark-api credentials are stored securely (least privilege) and audit the created local files and outgoing notifications for the first few runs.
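As an illustration of point (2), a declaration along these lines would make the credential and filesystem behavior auditable before install. All field names here are hypothetical; they do not come from the skill or from ClawHub's actual manifest schema.

```yaml
# Hypothetical SKILL.md front matter -- illustrative field names only,
# not ClawHub's real schema.
requires:
  credentials:
    - name: LARK_API_TOKEN        # supplied via the 'lark-api' skill
      scope: docs:write, im:send  # least privilege: doc creation + notify only
  browser:
    profile: openclaw             # authenticated session for oa.larksuite.com
  filesystem:
    writes:
      - path: ./lark-reports/     # where extracted names/departments land
        retention: purge-after-run
```

A declaration like this would let a reviewer check each granted capability against the skill's stated purpose instead of inferring it from the instructions.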

Review Dimensions

Purpose & Capability
Note: The name and description (collect reports, summarize into Lark Docs, notify) align with the instructions: browser automation to scrape a SPA, building a doc via the Lark API, and sending a notification. However, the skill assumes an authenticated browser profile and relies on a separate 'lark-api' skill for API auth, yet declares no primary credential or environment variables. This is an omission (not necessarily malicious) but it reduces transparency.
Instruction Scope
Concern: The SKILL.md instructs the agent to use browser automation (navigate, click, run JS via evaluate) and to 'append to local file after each extraction'. That file-write step and the scraping of names and departments are data-handling operations not reflected in the declared requirements. The instructions also tell sub-agents to follow exact URLs and steps, including custom JS evaluation. These actions fall within the skill's purpose but broaden access to local storage and user data, and should be disclosed explicitly.
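For concreteness, the flagged 'append to local file after each extraction' step amounts to something like the following sketch. The output path and record fields are hypothetical, not taken from the skill; the review's point is precisely that the real path is never declared.

```python
import json
from pathlib import Path

# Hypothetical output path -- the skill never declares where it writes.
OUT_FILE = Path("lark_reports.jsonl")
OUT_FILE.unlink(missing_ok=True)  # start fresh for this sketch

def append_extraction(record: dict) -> None:
    """Append one scraped record as a JSON line (names/departments to disk)."""
    with OUT_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Each browser-automation pass would call this once per report page:
append_extraction({"name": "Example User", "department": "Example Dept"})
append_extraction({"name": "Another User", "department": "Another Dept"})

print(OUT_FILE.read_text(encoding="utf-8").count("\n"))  # -> 2 records on disk
```

Even this benign-looking step is a retention decision: the file accumulates personal data across runs unless something purges it, which is why the review asks the user to audit the created files.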
Install Mechanism
OK: This is an instruction-only skill with no install spec or downloaded code, so it does not write new binaries to disk. That lowers installation risk. The browser automation relies on existing platform-provided browser functionality (profile=openclaw).
Credentials
Concern: No env vars or primary credential are declared, yet the workflow requires (1) an active authenticated browser session for oa.larksuite.com and (2) API auth via a separate 'lark-api' skill. The skill also writes extracted data to a local file. Requiring authentication and local file access without declaring the necessary credentials or config is a mismatch that makes it harder to reason about privilege and data-exfiltration risk.
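The least-privilege pattern this concern implies can be sketched as follows. `LARK_API_TOKEN` is a hypothetical variable name, chosen only for illustration, since the reviewed skill declares no env vars at all.

```python
import os

def get_lark_token() -> str:
    """Fail fast if the credential is not explicitly supplied.

    A declared, fail-fast credential is auditable; silently falling back
    to an ambient browser session (as the skill does) is not.
    """
    token = os.environ.get("LARK_API_TOKEN")
    if not token:
        raise RuntimeError(
            "LARK_API_TOKEN not set: refusing to fall back to an ambient "
            "browser session or shared credential store."
        )
    return token

os.environ["LARK_API_TOKEN"] = "demo-token"  # simulated for this sketch
print(get_lark_token())  # -> demo-token
```

The design choice here is the point: a skill that names its credential up front can be granted a narrowly scoped token, while one that assumes an authenticated profile inherits whatever that session can do.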
Persistence & Privilege
OK: always:false and normal model invocation are used. The skill does not request permanent platform presence or attempt to modify other skills or global configs in its instructions. The main persistence concern is the instruction to append to a local file (data retention), which is an operational detail rather than a platform privilege request.