Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using it.

Super Personasiled Search

Build, debug, and extend the Connectify founder network platform (React/Vite frontend + Express backend + Redis cache + OpenAI ranking + Apify ingestion). Us...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 20 · 0 current installs · 0 all-time installs
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The code and SKILL.md align with the described purpose: a React + Express app that uses Redis, OpenAI, and (optionally) Apify. However, the registry metadata claims no required environment variables and no homepage/source, while SKILL.md and package.json clearly require OPENAI_API_KEY and REDIS_URL and reference an APIFY_TOKEN and a GitHub repo. The missing metadata declarations are an inconsistency (not necessarily malicious) that reduces transparency.
Instruction Scope
SKILL.md instructions are narrowly scoped to local development of the repo: npm install, set up .env, start Redis, run dev/build commands, and where to edit scoring/ingestion code. The instructions do not ask the agent to read unrelated system files or exfiltrate data to unexpected endpoints. They do instruct the agent to create a .env file containing secrets, which is standard for this project.
Install Mechanism
There is no install spec (instruction-only), so nothing will be automatically downloaded by the platform installer. Running npm install locally will pull many dependencies (openai, apify, crawlee, redis, etc.), which is expected for this stack, but it means third-party code (including package install scripts) will execute when you install and run the project. Review dependencies before running in production.
Credentials
The SKILL.md and code require sensitive credentials (OPENAI_API_KEY, REDIS_URL, APIFY_TOKEN). Those are proportionate to the app's functionality (scoring via OpenAI, storing/querying via Redis, optional Apify ingestion) — but the registry metadata did not declare any required env vars, which is a transparency gap. Also note that connection records (personal data) are sent to OpenAI for scoring/action-generation and that Redis access allows reading/writing all connection keys. APIFY_TOKEN is presently unused (apify.js is a stub) but the dependency and instructions suggest future crawling capabilities; treat that token carefully.
Persistence & Privilege
The skill is not force-included (always: false) and does not request elevated platform privileges. It does persist data to Redis (saves connection and query-context keys) which is expected for its function. No code attempts to modify other skills or global agent config.
Scan Findings in Context
[openai_chat_completions_usage] expected: agent.js sends connection payloads and queries to OpenAI chat completions for scoring and action generation. This is central to the described functionality but means personal connection data will be transmitted to the OpenAI API — review privacy implications.
[redis_keys_wildcard_enumeration] expected: redis.js uses redis.keys('connection:*') to enumerate connections. This is simple for dev but can be inefficient and dangerous on very large datasets or shared Redis instances (it will enumerate all matching keys).
[apify_dependency_present_but_stubbed] expected: apify.js currently returns local placeholder data (no network calls). package.json and package-lock include apify/crawlee dependencies. If you replace the stub with a real Apify actor call, the APIFY_TOKEN could enable web crawling or data ingestion — review any added code before enabling it.
What to consider before installing
This repo looks like a legitimate local development project for the Connectify app, but be cautious before running it with real secrets or production data. Specific points to check before you install/run:

1. Confirm the publisher/source (registry metadata says unknown, but package.json points to a GitHub repo) and only use code from a trusted origin.
2. The SKILL.md expects OPENAI_API_KEY, REDIS_URL, and APIFY_TOKEN; provide these only in a local/isolated environment and never commit them.
3. The app sends connection records to the OpenAI API for scoring/action suggestions; if those records contain sensitive PII, consider anonymizing them or confirming your policy permits such transmission.
4. Redis is used to store and read all connection:* keys; restrict access and avoid running this against a production Redis instance holding other data.
5. If you enable real Apify/crawling, audit the ingestion code and required tokens first.

If anything is unclear, ask the skill author for an explicit manifest listing required env vars and the canonical source/repo before proceeding.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Connectify Development Guide

Set up the project

  1. Install dependencies:
    npm install
    
  2. Create .env from .env.example and set:
    • OPENAI_API_KEY
    • REDIS_URL
    • APIFY_TOKEN
    • optional OPENAI_MODEL, PORT
  3. Start Redis before running the backend.
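A minimal .env sketch for step 2; all values below are placeholders, not real credentials, and the optional defaults live in the code:

```
# Local development only; never commit this file.
OPENAI_API_KEY=sk-...your-key...
REDIS_URL=redis://localhost:6379
APIFY_TOKEN=apify_api_...your-token...
# Optional overrides (defaults come from the code):
# OPENAI_MODEL=
# PORT=3001
```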

Run the app

Prefer single-service mode when validating full user flows (dashboard + chat + API):

npm run build
npm start

Open http://localhost:3001.

Use split mode only when focusing on one side:

  • Frontend only: npm run dev
  • Backend only: npm run dev:server

Use the file map

  • server.js: Express API, Redis seeding, /api/query, static hosting of dist/.
  • agent.js: OpenAI relevance scoring and follow-up action generation.
  • redis.js: Redis connection lifecycle, connection storage, query-context cache (30 min TTL).
  • apify.js: Connection ingestion adapter (currently placeholder dataset).
  • src/components/AIChatPanel.jsx: chat UX and /api/query client call.
  • src/data/placeholders.js: dashboard placeholder cards/lists/map seed data.

Preserve the backend response contract

Return this shape from /api/query:

{
  "results": [
    {
      "name": "string",
      "role": "string",
      "company": "string",
      "platforms": ["string"],
      "relevanceScore": 0,
      "reason": "string",
      "suggestedActions": ["string", "string"]
    }
  ]
}

If changing fields, update both server.js and src/components/AIChatPanel.jsx together.
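A small runtime check of that contract can catch drift between server.js and the chat panel. A minimal sketch; the validateQueryResponse helper is hypothetical, not part of the repo:

```javascript
// Hypothetical guard for the /api/query response contract shown above.
// Returns true only when every result carries the expected fields and types.
function validateQueryResponse(body) {
  if (!body || !Array.isArray(body.results)) return false;
  return body.results.every((r) =>
    typeof r.name === 'string' &&
    typeof r.role === 'string' &&
    typeof r.company === 'string' &&
    Array.isArray(r.platforms) &&
    typeof r.relevanceScore === 'number' &&
    typeof r.reason === 'string' &&
    Array.isArray(r.suggestedActions)
  );
}
```

Running a check like this in a smoke test (or temporarily inside AIChatPanel.jsx) makes contract breaks fail loudly instead of rendering blank cards.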

Implement real Apify ingestion

When replacing the stub in apify.js:

  1. Keep output normalized to this connection schema:
    • id, name, role, company, location, platforms, tags, lastInteraction, notes
  2. Keep IDs stable and unique to prevent duplicate Redis records.
  3. Return an array compatible with saveConnection(connection.id, connection).
  4. Keep actor/network logic isolated in apify.js; avoid spreading Apify-specific code through server.js.
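Steps 1-3 can be combined in a single normalizer. A minimal sketch; the input field names are assumptions about the actor output, not a real Apify dataset shape, so adjust them before use:

```javascript
// Hypothetical normalizer mapping a raw Apify item onto the connection
// schema listed above. Input field names (profileUrl, title, lastSeen,
// summary) are guesses about the actor output.
function normalizeApifyItem(item) {
  return {
    // Stable, unique ID derived from the source profile URL (step 2).
    id: `apify:${item.profileUrl ?? item.url ?? item.name}`,
    name: item.name ?? '',
    role: item.title ?? '',
    company: item.company ?? '',
    location: item.location ?? '',
    platforms: item.platforms ?? ['linkedin'],
    tags: item.tags ?? [],
    lastInteraction: item.lastSeen ?? null,
    notes: item.summary ?? '',
  };
}
```

Mapping the whole dataset with `items.map(normalizeApifyItem)` then yields an array compatible with saveConnection(connection.id, connection) (step 3).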

Tune AI behavior safely

When editing agent.js:

  1. Keep response_format: { type: 'json_object' }.
  2. Keep strict parsing and fallback handling (safeJsonParse, bounded score 0-100).
  3. Keep the scoring temperature low (for near-deterministic ranking) and the action-generation temperature moderate.
  4. Preserve fallback actions in server.js if action generation fails.
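The parsing and bounding rules in steps 1-2 look roughly like this. A minimal sketch mirroring what SKILL.md describes; these helpers are not copied from agent.js:

```javascript
// Hypothetical versions of the helpers SKILL.md references.
// safeJsonParse never throws on malformed model output; clampScore
// bounds any score to the 0-100 range (step 2).
function safeJsonParse(text, fallback = null) {
  try {
    return JSON.parse(text);
  } catch {
    return fallback;
  }
}

function clampScore(value) {
  const n = Number(value);
  if (!Number.isFinite(n)) return 0;
  return Math.min(100, Math.max(0, n));
}
```

Keeping both guards in place means a malformed or out-of-range model response degrades to a fallback instead of crashing the request.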

Validate changes quickly

  1. Build frontend:
    npm run build
    
  2. Start server:
    npm start
    
  3. Smoke test query endpoint:
    curl -X POST http://localhost:3001/api/query \
      -H "Content-Type: application/json" \
      -d '{"query":"Who in my network works in AI and is based in SF?","sessionId":"local-test-session"}'
    
  4. Confirm the response includes ranked results and cached repeat requests return quickly.

Watch for common pitfalls

  • npm run dev serves only frontend; /api/query will not work there unless a proxy/backend is also configured.
  • server.js CORS currently allows http://localhost:3000; adjust if using different local origins.
  • redis.js uses keys('connection:*'); avoid very large production datasets without pagination/scans.
  • Do not commit secrets from .env or hardcode API tokens.
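For the keys('connection:*') pitfall, a cursor-based scan avoids blocking Redis on large datasets. A minimal sketch assuming the node-redis v4 scanIterator API; this is not the code currently in redis.js:

```javascript
// Hypothetical replacement for redis.keys('connection:*') using a
// non-blocking cursor scan (node-redis v4 scanIterator assumed).
// Unlike KEYS, SCAN walks the keyspace incrementally.
async function listConnectionKeys(client, pattern = 'connection:*') {
  const keys = [];
  for await (const key of client.scanIterator({ MATCH: pattern, COUNT: 100 })) {
    keys.push(key);
  }
  return keys;
}
```

The COUNT option is only a hint to Redis about batch size; the iterator still yields every matching key.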

Files

24 total
