Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

google-sheets-soha

v1.0.2

Read and analyze data from Google Sheets. Trigger when the user mentions "Google Sheet", "spreadsheet", "sheet", sends a docs.google.com/spreadsheets link, o...

0 stars · 88 downloads · 0 current · 0 all-time
by Nguyễn Tiến Phan (@fuco99)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for fuco99/google-sheets-soha.

Prompt Preview: Install & Setup
Install the skill "google-sheets-soha" (fuco99/google-sheets-soha) from ClawHub.
Skill page: https://clawhub.ai/fuco99/google-sheets-soha
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: [object Object]
Required binaries: python3, curl
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install google-sheets-soha

ClawHub CLI

Package manager switcher

npx clawhub@latest install google-sheets-soha
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
Name/description match the actual behavior: fetching Google Sheets via Sheets API v4 using either a public API key or a service account, and using python3/curl for fetches. The declared binaries and primary credential are appropriate for the stated capability.
Instruction Scope
SKILL.md instructs the agent to fetch sheet metadata and values, run local python3 scripts, and cache results on disk under ~/.openclaw/workspace/.cache/sheets. This is within scope for a Sheets-reading skill, but the skill persists spreadsheetId in session context and caches sheet contents locally (TTL default 5 minutes) — users should be aware cached sheet contents and the remembered Sheet ID are stored on disk and used for subsequent turns.
Install Mechanism
Instruction-only skill with no install/spec downloads. No code is pulled from external URLs during install — lowest-risk install mechanism.
Credentials
Only Google-related credentials are requested (GOOGLE_API_KEY for public sheets, GOOGLE_SERVICE_ACCOUNT_JSON for private sheets). That is proportional. Minor inconsistency: the frontmatter marks the env vars as not required while primaryEnv is set to GOOGLE_SERVICE_ACCOUNT_JSON and the registry metadata shows malformed env entries ([object Object]) — likely a metadata parsing issue but worth verifying before enabling.
Persistence & Privilege
always:false and agent-invocation allowed (normal). The skill writes cache files under its own workspace path and stores session context in-memory for the conversation; it does not request system-wide privileges or alter other skills' configuration.
Assessment
This skill appears to do what it claims, but check the following before enabling:

  1. It needs either a Google API key (public sheets) or a Service Account JSON file path (private sheets). Provide only the minimum credential required.
  2. The skill caches sheet contents and remembers the active Sheet ID under ~/.openclaw/workspace/.cache/sheets — if that data is sensitive, shorten the cache TTL or clear the cache after use.
  3. The repository/homepage fields are placeholders (github.com/your-username/...) — verify the source and trustworthiness of the published repo and maintainer before installing.
  4. There is a minor metadata parsing glitch (the registry shows [object Object]) — confirm the env var configuration in your OpenClaw config matches what the SKILL.md frontmatter declares.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Bins: python3, curl
Env: [object Object], [object Object]
Primary env: GOOGLE_SERVICE_ACCOUNT_JSON
Latest: vk9780d1zbnnjkqd5yqn331xnfs843abn
88 downloads
0 stars
3 versions
Updated 3w ago
v1.0.2
MIT-0

Google Sheets Skill

Fetches data from Google Sheets via the Google Sheets API v4, caches it on disk, and answers user questions about the data.


Session Memory

Maintain the following context throughout the conversation. Update it as new information is learned:

SHEET_CONTEXT = {
  spreadsheetId: null,   // Active Sheet ID
  activeTab: null,       // Current tab being worked on
  tabs: [],              // Cached list of tab names
  headers: {},           // Cached headers per tab: { tabName: [...] }
  rawData: {},           // Cached rows per tab: { tabName: [[...]] }
  cacheFile: null,       // Path to on-disk cache file
}

Rules:

  • Once spreadsheetId is known → use it for all subsequent turns, never ask again
  • Once a cache file exists and is within TTL → skip the API call
  • Always check session context before asking the user for anything

Step 1 — Get the Sheet ID

Check in this order:

  1. SHEET_CONTEXT.spreadsheetId already set → use it directly
  2. URL in the message → extract the ID between /d/ and /edit: https://docs.google.com/spreadsheets/d/**SHEET_ID**/edit
  3. User provides the ID directly → save and use it
  4. Not found anywhere → ask exactly once:

"Could you share the Google Sheet link or Sheet ID? I'll remember it for the rest of our conversation 😊"

Once received → save to SHEET_CONTEXT.spreadsheetId immediately and proceed.
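
The URL extraction in check 2 can be sketched with a regex; `extract_sheet_id` is a hypothetical helper name for illustration, not part of the skill itself:

```python
import re

def extract_sheet_id(text: str):
    """Pull the spreadsheet ID out of a docs.google.com URL, if present.

    The ID is the path segment between /spreadsheets/d/ and the next
    slash; Sheet IDs are alphanumeric plus '-' and '_'.
    """
    m = re.search(r"/spreadsheets/d/([a-zA-Z0-9_-]+)", text)
    return m.group(1) if m else None
```

If the message contains no link, the helper returns None and the agent falls back to asking once, as described above.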


Step 2 — Fetch Data from Google Sheets API

Use Google Sheets API v4. Choose the auth method based on the sheet type:

Option A: Public sheet (Anyone with the link)

# List tabs
curl -s "https://sheets.googleapis.com/v4/spreadsheets/{SHEET_ID}?key={GOOGLE_API_KEY}&fields=sheets.properties" \
  | python3 -c "import sys,json; d=json.load(sys.stdin); print('\n'.join(s['properties']['title'] for s in d['sheets']))"

# Fetch tab data
curl -s "https://sheets.googleapis.com/v4/spreadsheets/{SHEET_ID}/values/{TAB_NAME}!A1:Z1000?key={GOOGLE_API_KEY}" \
  | python3 -c "import sys,json; d=json.load(sys.stdin); print(json.dumps(d.get('values',[])))"

Option B: Private sheet (Service Account)

import os, json
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    os.environ["GOOGLE_SERVICE_ACCOUNT_JSON"],
    scopes=["https://www.googleapis.com/auth/spreadsheets.readonly"]
)
service = build("sheets", "v4", credentials=creds)

# List tabs
meta = service.spreadsheets().get(spreadsheetId=SHEET_ID).execute()
tabs = [s["properties"]["title"] for s in meta["sheets"]]

# Fetch tab data
result = service.spreadsheets().values().get(
    spreadsheetId=SHEET_ID,
    range=f"{TAB_NAME}!A1:Z1000"
).execute()
rows = result.get("values", [])

Save results to SHEET_CONTEXT.headers[tabName] and SHEET_CONTEXT.rawData[tabName].


Step 3 — Disk Cache

Cache fetched data to avoid redundant API calls across turns.

Cache path

~/.openclaw/workspace/.cache/sheets/{spreadsheetId}/{tabName}.json

Cache file structure

{
  "spreadsheetId": "abc123",
  "tabName": "Sheet1",
  "fetchedAt": 1710000000,
  "ttl": 300,
  "headers": ["Name", "Status", "Date"],
  "rows": [
    ["Task A", "Done", "2024-01-01"],
    ["Task B", "Pending", "2024-01-02"]
  ]
}

Cache script (run via exec tool)

import os, json, time, shutil

CACHE_DIR = os.path.expanduser("~/.openclaw/workspace/.cache/sheets")
TTL = 300  # 5 minutes — increase to 3600 for rarely-changing data

def cache_path(sheet_id, tab):
    d = os.path.join(CACHE_DIR, sheet_id)
    os.makedirs(d, exist_ok=True)  # auto-creates on first use
    return os.path.join(d, f"{tab.replace('/', '_')}.json")

def load_cache(sheet_id, tab):
    path = cache_path(sheet_id, tab)
    if not os.path.exists(path):
        return None
    with open(path) as f:
        c = json.load(f)
    if time.time() - c.get("fetchedAt", 0) > c.get("ttl", TTL):
        return None  # expired — will re-fetch
    return c

def save_cache(sheet_id, tab, headers, rows):
    path = cache_path(sheet_id, tab)
    with open(path, "w") as f:
        json.dump({
            "spreadsheetId": sheet_id,
            "tabName": tab,
            "fetchedAt": int(time.time()),
            "ttl": TTL,
            "headers": headers,
            "rows": rows
        }, f, ensure_ascii=False)

def clear_cache(sheet_id=None):
    target = os.path.join(CACHE_DIR, sheet_id) if sheet_id else CACHE_DIR
    if os.path.exists(target):
        shutil.rmtree(target)

Cache flow

cached = load_cache(SHEET_ID, TAB_NAME)
if cached:
    headers, rows = cached["headers"], cached["rows"]
else:
    # fetch from API...
    headers, rows = fetched_rows[0], fetched_rows[1:]
    save_cache(SHEET_ID, TAB_NAME, headers, rows)

When to clear cache

User says → Action
"refresh", "reload", "get latest data" → clear_cache(SHEET_ID), then re-fetch
"clear cache" → clear_cache() — wipes everything
TTL expired → automatically re-fetches on the next request
User switches to a new sheet → keep the old cache, create a new cache for the new sheet

Step 4 — Answer the User

  • Always state which sheet/tab the data is from
  • Use markdown tables when displaying multiple rows
  • Reply in the same language as the user
  • If data exceeds 500 rows, analyze the first 200 and ask if the user wants to narrow the range
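
The table rendering and the 200-row cap can be sketched as follows; the function name and the truncation message are illustrative assumptions, not part of the skill:

```python
def rows_to_markdown(headers, rows, limit=200):
    """Render sheet rows as a markdown table, truncating past `limit` rows."""
    shown = rows[:limit]
    lines = [
        "| " + " | ".join(headers) + " |",
        "| " + " | ".join("---" for _ in headers) + " |",
    ]
    lines += ["| " + " | ".join(str(c) for c in r) + " |" for r in shown]
    if len(rows) > limit:
        lines.append(f"({len(rows) - limit} more rows omitted; narrow the range?)")
    return "\n".join(lines)
```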

Configuration in openclaw.json

Add under skills.entries (top-level, not inside agents):

Public sheet

{
  "skills": {
    "entries": {
      "google-sheets-soha": {
        "enabled": true,
        "env": {
          "GOOGLE_API_KEY": "AIza..."
        }
      }
    }
  }
}

Private sheet

{
  "skills": {
    "entries": {
      "google-sheets-soha": {
        "enabled": true,
        "env": {
          "GOOGLE_SERVICE_ACCOUNT_JSON": "/home/node/.openclaw/google-sa.json"
        }
      }
    }
  }
}
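
Before enabling the private-sheet config, a quick pre-flight check that the referenced file exists and looks like a service-account key can save a confusing 403 later. `check_service_account` is a hypothetical helper sketched here, not part of the skill:

```python
import json, os

def check_service_account(path: str) -> bool:
    """Return True if `path` exists and contains the standard fields of a
    Google service-account key (client_email, private_key)."""
    if not os.path.isfile(path):
        return False
    try:
        with open(path) as f:
            data = json.load(f)
    except (ValueError, OSError):
        return False
    return {"client_email", "private_key"}.issubset(data)
```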

Error Handling

Situation → Action
Sheet ID not provided → Ask once, save when received
API returns 403 → Sheet is private; guide the user to share it with the service account email
API returns 404 → Wrong Sheet ID; ask again
GOOGLE_API_KEY not set → Guide the user to add it in openclaw.json
Tab not found → List SHEET_CONTEXT.tabs and ask the user to pick
Data too large → Analyze the first 200 rows and notify the user
python3 not found → Run apt-get install -y python3 inside the container
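
The HTTP-status rows of the table can be sketched as a small dispatch; `describe_sheets_error` is a hypothetical helper for illustration, not part of the skill:

```python
def describe_sheets_error(status: int) -> str:
    """Map a Sheets API HTTP status to the user-facing guidance above."""
    if status == 403:
        return ("Sheet is private. Share it with the service account email, "
                "or use a service-account credential.")
    if status == 404:
        return "Wrong Sheet ID. Please share the link or ID again."
    return f"Unexpected Sheets API response (HTTP {status})."
```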
