glass2claw

v2.3.3

Ray-Ban glasses → voice command → WhatsApp → OpenClaw auto-routes your photo into the right database. Hands-free life logging.

by Jonathan Jing @jonathanjing
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
Name/description (vision → WhatsApp → OpenClaw → destination DB) matches the provided materials. The package contains only templates and instructions; it asks the user to provide the Meta/WhatsApp link, an OpenClaw WhatsApp session, and destination DB credentials—these are appropriate and expected for the stated routing functionality.
Instruction Scope
SKILL.md and the sample files instruct the agent to detect incoming WhatsApp image messages, classify intent, and forward image URLs to configured specialist sessions using the sessions_send and message tools. The scope stays within the described routing task, but the skill forwards image URLs to whatever session keys the user configures, so correct session-key configuration is critical. One minor inconsistency: the hub file says "Do NOT describe or analyze the image yourself" while also instructing the agent to "classify the image intent from the URL context or any accompanying text" — the latter implies classification from metadata and captions rather than pixel analysis, which is consistent in spirit but could be stated more clearly.
Install Mechanism
No install spec or third‑party downloads are included; this is instruction-only. The README suggests using the platform's installer (clawhub) which is expected. Nothing in the package pulls arbitrary code or writes files.
Credentials
The skill declares no required environment variables or credentials. It reasonably expects the user to provide external connections (Meta/WhatsApp, OpenClaw session, destination DB API keys) but does not request or attempt to capture them itself. This is proportionate to the skill's purpose.
Persistence & Privilege
`always` is false and `disable-model-invocation` is left at its default, meaning the agent may invoke the skill autonomously, which is normal. The skill does not request elevated or permanent system presence and does not modify other skills or system-wide configs in the provided files.
Assessment
This package is an instruction-only routing template and appears coherent, but before installing: (1) confirm you control the OpenClaw instance and the WhatsApp session used to receive images (do not point it at someone else’s account); (2) only configure session keys that map to trusted destinations — the skill will forward image URLs to whatever session keys you supply; (3) store and supply destination DB credentials yourself and keep them secret (the skill does not request them, but downstream agents will need them); (4) review SAMPLE_AGENT.md and SAMPLE_SOUL_WINE.md and test with non‑sensitive images first to verify routing behavior; (5) be mindful that images may contain PII — only forward to services you trust (Notion, Discord, Airtable, etc.). If you need stronger assurance, ask the author for runnable code, a homepage, or an explicit privacy/security audit of how your OpenClaw instance handles forwarded media.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

👁️ Clawdis
latest: vk97cpejfmkbej075gp30vx5k5s829sem
650 downloads
1 star
21 versions
Updated 1mo ago
v2.3.3
MIT-0

glass2claw: From Your Eyes to Your Database — Instantly

🛠️ Installation

1. Ask OpenClaw (Recommended)

Tell OpenClaw: "Install the glass2claw skill." The agent will handle the installation and configuration automatically.

2. Manual Installation (CLI)

If you prefer the terminal, run:

clawhub install glass2claw

You're wearing your Meta Ray-Ban glasses. You see a wine label, a business card, a tea tin. You say:

"Hey Meta, take a picture and send this to myself on WhatsApp."

That's it. OpenClaw does the rest.

The photo lands in your WhatsApp. OpenClaw's Vision Router picks it up, classifies what it is, and writes a structured entry into the right database — wine cellar, contacts, tea collection, whatever you've set up.

No typing. No app switching. No friction.


📸 How It Works

Meta Ray-Ban glasses
  → "Hey Meta, take a picture and send this to myself on WhatsApp"
      → Meta AI delivers the photo to your WhatsApp
          → OpenClaw (WhatsApp session) receives the image
              → classifies intent: Wine | Tea | Contacts | Cigar | ...
                  → routes to the matching specialist agent
                      → writes structured entry to your database

Your only action is the voice command. Everything downstream is automatic.
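The classify-and-route step above can be sketched as follows. This is a hypothetical illustration, not code from the package: the session-key names, the `classify_intent` heuristic, and the message format are all assumptions you would replace with your own configuration.

```python
# Hypothetical sketch of the hub's classify-and-route step.
# Session keys ("wine-agent", etc.) and the keyword heuristic are
# placeholders — the actual package is instruction-only templates.

KEYWORD_ROUTES = {
    "wine": "wine-agent",
    "tea": "tea-agent",
    "contact": "contacts-agent",
    "cigar": "cigar-agent",
}

def classify_intent(caption: str) -> str:
    """Pick a destination session from the text accompanying the image."""
    text = caption.lower()
    for keyword, session_key in KEYWORD_ROUTES.items():
        if keyword in text:
            return session_key
    return "default-agent"  # fallback when no keyword matches

def route_image(image_url: str, caption: str = "") -> tuple[str, str]:
    """Return the (session_key, message) a hub agent would forward."""
    session_key = classify_intent(caption)
    message = f"New image to log: {image_url}"
    return session_key, message
```

In practice the hub agent would pass the returned message to sessions_send rather than returning it, but the mapping logic is the same.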


🔧 What You Need to Set Up

This skill is a routing protocol — it defines the pattern, not the specific implementation. You bring your own:

  • Meta AI + WhatsApp connection — enable Meta AI on your Ray-Ban glasses and link it to WhatsApp (one-time setup in the Meta View app)
  • OpenClaw with WhatsApp channel — your OpenClaw instance needs a WhatsApp session to receive the incoming images
  • Destination databases — connect whichever databases you want: Notion, Airtable, a local file, a Discord channel. The skill routes to wherever you configure it
  • Database credentials — set up API access for your chosen database yourself (Notion API key, Airtable token, etc.)

The skill templates in this package show one reference implementation using Notion and Discord. Adapt them to your own stack.
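Since the skill forwards images to whatever session keys you supply, it helps to keep the category-to-destination mapping in one explicit place. A minimal sketch, assuming hypothetical session-key and destination names (none of these identifiers come from the package):

```python
# Hypothetical routing configuration: map each intent category to a
# session key and destination you control. All names are placeholders.

ROUTING_CONFIG = {
    "wine": {
        "session_key": "soul-wine",   # specialist agent session
        "destination": "notion",      # e.g. a Notion wine-cellar database
    },
    "contacts": {
        "session_key": "soul-contacts",
        "destination": "airtable",
    },
}

def destination_for(category: str) -> str:
    """Look up the configured destination, failing loudly on unknowns."""
    entry = ROUTING_CONFIG.get(category)
    if entry is None:
        raise KeyError(f"No route configured for category {category!r}")
    return entry["destination"]
```

Failing loudly on unconfigured categories is deliberate: as the security review notes, the skill will forward image URLs to whatever keys you supply, so an unknown category should never silently route anywhere.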


🔒 Privacy

This skill processes photos from your personal camera. Images flow from WhatsApp → your OpenClaw instance → your configured destination. Any external services you connect (Notion, Discord, etc.) are governed by their own privacy policies. All routing logic runs on your own OpenClaw instance.


📦 What's Included

  • SAMPLE_AGENT.md — reference routing logic for the hub agent
  • SAMPLE_SOUL_WINE.md — reference persona for a wine specialist agent

Use these as starting points. Customize for your own categories and destinations.


Created by Jonathan Jing | AI Reliability Architect
