Precision Coral Metrics API

v1.0.1

Analyzes underwater images using YOLOv11 and MobileSAM to precisely segment coral colonies and calculate accurate coral coverage metrics.

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mingzhuangwang/coral-precise-coverage-ai.

Prompt Preview: Install & Setup
Install the skill "Precision Coral Metrics API" (mingzhuangwang/coral-precise-coverage-ai) from ClawHub.
Skill page: https://clawhub.ai/mingzhuangwang/coral-precise-coverage-ai
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install coral-precise-coverage-ai

ClawHub CLI

npx clawhub@latest install coral-precise-coverage-ai

Security Scan

VirusTotal: Benign (View report →)
OpenClaw: Benign (high confidence)
Purpose & Capability
The skill describes GPU-heavy models (YOLOv11 + MobileSAM) but does not include model code — instead it calls a remote OpenClaw API gateway, which is coherent for a hosted inference service. Nothing in the package requests unrelated credentials or system access. Minor documentation inconsistencies: the SKILL.md integration example shows POST to /v1/skills/coral-ai/predict while the usage_example.py and other documentation reference /v1/skills/coral-precise-coverage-ai/predict; SKILL.md header version (1.1.0) differs from registry version (1.0.1). These look like copy/edit issues, not malicious misdirection.
Instruction Scope
Runtime instructions and the example script only instruct the agent/user to upload an image to the OpenClaw gateway with an X-OpenClaw-Token and to handle the JSON/base64 result. The instructions do not ask the agent to read system files, environment variables, or contact other endpoints. The only scope concerns are the aforementioned endpoint/path and version wording inconsistencies in documentation.
Install Mechanism
No install spec (instruction-only) and a single example script — nothing is downloaded or written to disk by an installer. This is the lowest-risk install posture.
Credentials
The skill declares no required environment variables and no primary credential in registry metadata, yet SKILL.md and usage_example expect an API token supplied as X-OpenClaw-Token (passed at call time). This is proportionate for a gateway-based service; the only minor inconsistency is that the registry metadata doesn't list the token as a required/primary credential (documentation omission).
Persistence & Privilege
always:false (default) and no code attempts to persist or modify other skill/system settings. The skill does not request persistent presence or elevated privileges.
Assessment
This skill appears to be a hosted inference service that sends your images to the OpenClaw gateway for processing — it does not run models locally or request system credentials. Before installing or using: (1) Confirm you trust the OpenClaw gateway URL (https://api.openclaw.io) and the skill developer for handling potentially sensitive images. (2) Verify billing/pricing (pay-per-invocation) and storage/retention policy for uploaded images and results. (3) Note minor documentation mismatches (endpoint path and version) — confirm the correct endpoint with the provider before automating uploads. (4) Ensure you supply and store your X-OpenClaw-Token securely (the skill itself does not require other secrets). If you need offline/local processing or guaranteed zero-exfiltration, this hosted skill is not appropriate.
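
As a minimal sketch of point (4), assuming the token is kept in an environment variable (the name OPENCLAW_TOKEN is illustrative only and is not part of the skill's metadata):

import os

# Hypothetical convention: read the gateway token from the environment at call
# time rather than hard-coding it in scripts or notebooks.
token = os.environ.get("OPENCLAW_TOKEN")
if not token:
    raise RuntimeError("Set OPENCLAW_TOKEN before calling the OpenClaw gateway")

headers = {"X-OpenClaw-Token": token}  # sent per request; the skill stores no secrets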

Like a lobster shell, security has layers — review code before you run it.

Latest: vk9755mbypnf16p07gpbejdeeps83gmth
150 downloads
0 stars
2 versions
Updated 1 mo ago
v1.0.1
MIT-0

Precision Coral Metrics AI (CM-AI)

⚡ Overview

CM-AI is an industrial-grade vision skill leveraging the YOLOv11 + MobileSAM hybrid architecture. Designed for marine ecologists and environmental agencies, it replaces error-prone manual reef assessments with rapid, pixel-perfect coral coverage analysis.

🚀 Key Capabilities

  1. High-Fidelity Detection: Precisely locates coral colonies in complex, high-noise underwater backgrounds using YOLOv11.
  2. Pixel-Perfect Segmentation: Leverages MobileSAM for refined mask extraction, ensuring accurate area calculation even for overlapping organisms.
  3. Automated Metrics: Instantly calculates the Coral Coverage Ratio (%) and identifies individual colony counts (see the sketch after this list).
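
The coverage metric is straightforward to reason about: it is the share of image pixels covered by the union of the colony masks. A minimal local sketch of that calculation, assuming per-colony boolean masks are available (the hosted API computes this server-side; the function below is illustrative only):

import numpy as np

def coverage_ratio(masks: np.ndarray) -> float:
    # masks: boolean array of shape (n_colonies, height, width), one mask per colony
    covered = np.any(masks, axis=0)              # union of all colony masks
    return 100.0 * covered.sum() / covered.size  # covered pixels / total pixels

masks = np.zeros((3, 480, 640), dtype=bool)      # e.g. three colonies in a 640x480 frame
masks[0, 100:200, 100:300] = True
print(f"coverage: {coverage_ratio(masks):.2f}%, colonies: {masks.shape[0]}")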

📈 Roadmap

  • v1.2.0: Genus-level identification (e.g., Acropora, Brain Coral, Montipora).
  • v1.3.0: Fully automated transect data extraction and biodiversity index analysis.

🔒 Security & Access (OpenClaw Protected)

[!CAUTION] This skill is strictly integrated with the OpenClaw API Gateway. To protect backend GPU resource integrity, direct non-gateway traffic will be automatically blocked. Developer royalties are settled via the OpenClaw monetisation protocol.

🛠️ Integration Example

All requests must follow the OpenClaw standard authentication:

POST https://api.openclaw.io/v1/skills/coral-ai/predict
Headers: X-OpenClaw-Token: <YOUR_TOKEN>
Body: multipart/form-data (key: 'file')
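
A minimal Python client sketch of the request above (not shipped with the skill). The response field names are assumptions for illustration, and the endpoint path should be confirmed with the provider given the path mismatch noted in the security review:

import base64
import os
import requests

url = "https://api.openclaw.io/v1/skills/coral-ai/predict"   # path as documented above
headers = {"X-OpenClaw-Token": os.environ["OPENCLAW_TOKEN"]}  # token from the environment

with open("reef_frame.jpg", "rb") as f:                       # any underwater image
    resp = requests.post(url, headers=headers, files={"file": f}, timeout=120)
resp.raise_for_status()

result = resp.json()                                          # JSON result per the docs
# Field names below ("coverage_ratio", "masks") are hypothetical placeholders.
print("coverage:", result.get("coverage_ratio"))
for i, mask_b64 in enumerate(result.get("masks", [])):
    with open(f"colony_mask_{i}.png", "wb") as out:
        out.write(base64.b64decode(mask_b64))                 # masks may arrive base64-encoded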

© 2026 @mingzhuangwang | Powered by OpenClaw Ecosystem
