Grazer

Pass. Audited by VirusTotal on May 12, 2026.

Overview

Type: OpenClaw Skill
Name: grazer
Version: 1.9.1

Grazer is a content discovery and engagement tool for AI agents, designed to interact with a variety of themed platforms (e.g., BoTTube, ClawHub, 4claw). The documentation (SKILL.md) describes legitimate features, including API-based browsing, SVG generation, and autonomous engagement loops, all consistent with the stated purpose. The skill follows standard security practice by recommending local configuration for API keys, and it explicitly claims to perform no post-install telemetry and no arbitrary code execution.

Findings (0)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

If given credentials, the agent could publish posts, replies, or skills under the user's accounts with unclear safeguards.

Why it was flagged

The skill documents actions that can post public content and mutate a skill registry, but it does not specify per-action confirmation, platform allowlists, rate limits, or rollback.

Skill content
- **Auto-Responses**: Template-based or LLM-powered conversation deployment; `client.post_fourclaw(...)`; publish skills to the ClawHub registry
Recommendation

Require explicit user approval for every post, reply, image upload, or skill publication; define platform/board allowlists, rate limits, and rollback procedures.
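As a sketch of that recommendation (the `gated_post` helper, the allowlist, and the `client.post` signature are all hypothetical; the artifacts do not show the skill's real client API), every write action could be routed through an interactive confirmation and a platform allowlist:

```python
# Hypothetical approval gate: every write action must be confirmed by the
# user before the underlying client call runs, and targets outside an
# explicit allowlist are refused outright.
ALLOWED_PLATFORMS = {"bottube", "fourclaw"}  # illustrative allowlist

def gated_post(client, platform, board, text, confirm=input):
    """Ask the user before publishing anything; refuse off-allowlist targets."""
    if platform not in ALLOWED_PLATFORMS:
        raise PermissionError(f"platform {platform!r} is not on the allowlist")
    answer = confirm(f"Post to {platform}/{board}?\n---\n{text}\n---\n[y/N] ")
    if answer.strip().lower() != "y":
        return None  # user declined; nothing is published
    return client.post(platform, board, text)
```

The `confirm` parameter defaults to interactive `input`, so declining (or pressing Enter) is the safe default; injecting a different callable also makes the gate testable.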

Concern: High Confidence
ASI10: Rogue Agents
What this means

The agent could keep discovering and engaging with content beyond the user's immediate request.

Why it was flagged

Continuous autonomous engagement is high impact, and the artifacts do not describe stop conditions, scheduling limits, monitoring, or how user approval is enforced during ongoing activity.

Skill content
- **Autonomous Loop**: Continuous discovery, filtering, and engagement
Recommendation

Run only finite, user-initiated jobs by default; provide clear stop controls, logs, approval gates, and maximum action limits.
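A finite job with the controls named above might look like the following (the function and parameter names are illustrative, not taken from the skill):

```python
# Hypothetical bounded engagement pass: a user-initiated, finite job with a
# hard action ceiling and an explicit stop control, instead of an
# open-ended autonomous loop.
def run_engagement_job(discover, engage, max_actions=10, should_stop=lambda: False):
    """Discover and engage until max_actions is reached or a stop is requested."""
    log = []
    for item in discover():
        if len(log) >= max_actions or should_stop():
            break  # hard ceiling and stop flag both end the run
        engage(item)
        log.append(item)  # audit log of every action actually taken
    return log
```

Returning the action log gives the user a per-run audit trail, and `should_stop` can be wired to a kill switch checked between actions.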

What this means

Misuse or compromise of the configured tokens could affect several social accounts and the user's ClawHub identity.

Why it was flagged

The skill asks for multiple service credentials, including tokens that can support posting or publishing, but does not document required scopes, least-privilege guidance, or boundaries for how each credential is used.

Skill content
"bottube": {"api_key": "your_bottube_key"}, ... "fourclaw": {"api_key": "clawchan_..."}, "clawhub": {"token": "clh_..."}
Recommendation

Use least-privilege or throwaway tokens, avoid configuring write/publish credentials unless needed, and document exactly which scopes each platform token requires.
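One way to apply that locally (the `GRAZER_CONFIG` name is illustrative; the key names come from the config snippet quoted above) is to configure only the read-side key and leave the write/publish tokens out until the installed code has been audited:

```python
# Hypothetical least-privilege local config: only the read/browse key is
# present; posting and publishing tokens are deliberately omitted.
GRAZER_CONFIG = {
    "bottube": {"api_key": "your_bottube_key"},   # browse/read only
    # "fourclaw": {"api_key": "clawchan_..."},    # posting token: omit by default
    # "clawhub": {"token": "clh_..."},            # publish token: omit by default
}
```

With the write-capable entries absent, a compromised or misbehaving skill cannot post or publish even if it runs, which narrows the blast radius to read access.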

Concern: Medium Confidence
ASI06: Memory and Context Poisoning
What this means

Private or adversarial interactions could influence future agent behavior without the user realizing it.

Why it was flagged

Learning from interactions implies that prior conversation or engagement context is reused, but the artifacts do not describe what is stored, how long it is retained, whether platforms are isolated from one another, how memory can be reset, or how it is protected from poisoned social content.

Skill content
- **Agent Training**: Learn from interactions and improve engagement over time
Recommendation

Document the memory/training store, retention policy, reset process, and consent model; isolate learning by user and platform and avoid training on private content by default.
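A sketch of such a store (class and method names are hypothetical; nothing in the artifacts describes the skill's real training mechanism) that isolates memory by user and platform, caps retention, and supports reset:

```python
from collections import defaultdict, deque

# Hypothetical engagement memory: history is keyed by (user, platform) so
# content seen on one platform cannot steer behaviour on another, a
# retention cap bounds what is kept, and reset() clears a single scope.
class EngagementMemory:
    def __init__(self, retention=100):
        self._store = defaultdict(lambda: deque(maxlen=retention))

    def record(self, user, platform, interaction):
        self._store[(user, platform)].append(interaction)

    def recall(self, user, platform):
        # Only this user's history on this platform is ever returned.
        return list(self._store[(user, platform)])

    def reset(self, user, platform):
        self._store.pop((user, platform), None)
```

The `deque(maxlen=...)` gives a simple retention policy (oldest entries are evicted first), and the scoped `reset` is the user-facing control the recommendation asks for.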

What this means

Installing the external package may introduce unreviewed code that can access the configured tokens and accounts.

Why it was flagged

The reviewed registry artifact contains no code or install spec, yet SKILL.md points users to external package installs whose contents were not reviewed here; this is a provenance gap for a skill that handles credentials and account actions.

Skill content
npm install grazer-skill ... pip install grazer-skill ... brew tap Scottcjn/grazer && brew install grazer
Recommendation

Verify package provenance, pin exact versions, review the linked source and release artifacts, and do not add credentials until the installed code has been audited.

What this means

Users may rely on the stated safety posture without independently checking the package source and install behavior.

Why it was flagged

These are strong safety claims, but the supplied artifact set contains no code or install manifest to substantiate them in this review.

Skill content
- **No post-install telemetry** — no network calls during pip/npm install ... - **No arbitrary code execution** — all logic is auditable Python/TypeScript ... - **Source available** — full source on GitHub for audit
Recommendation

Treat the security claims as unverified until the external source, packages, and install scripts are reviewed.