Neron

v1.0.0

Personal knowledge graph. Record notes, track moods, manage tasks, spot patterns in someone's life.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for vladikasik/neron.

Prompt preview: Install & Setup
Install the skill "Neron" (vladikasik/neron) from ClawHub.
Skill page: https://clawhub.ai/vladikasik/neron
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install neron

ClawHub CLI


npx clawhub@latest install neron
Security Scan

VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
The skill is a connector for a personal knowledge graph and all described tools (search, semantic_search, cypher, create/update/delete, node_context, etc.) match that purpose. It does not declare unrelated binaries or environment variables. The MCP endpoint and token-based auth model described are coherent with a remote graph service.
Instruction Scope
SKILL.md and auxiliary docs explicitly instruct the agent to call a remote MCP endpoint and use rich tools (including raw Cypher). The docs also instruct users to obtain tokens/passwords via a Telegram bot and place tokens in agent config files. That behavior is within scope for an agent that should read/write a user's graph, but it grants powerful read/write access to sensitive personal data and instructs storing tokens in local config locations — the scope is broad by design and should be treated as high-sensitivity.
Install Mechanism
There is no install spec or executable code; the skill is instruction-only and does not download or install third-party binaries or archives. This minimizes local code-execution risk.
Credentials
No environment variables are declared in metadata, and the service uses per-user Bearer tokens obtained from a Telegram bot. Requiring a token for full read/write access to the user's graph is proportionate to the stated functionality, but those credentials are highly sensitive. The skill does not ask for unrelated credentials, but it does rely on the user placing tokens into agent config files (clear-text storage by instruction), which has privacy implications.
Persistence & Privilege
The skill is not marked always:true and does not request system-wide modifications. It instructs adding a connector/token to agent config (normal for connectors). It does not request elevated or persistent platform privileges beyond normal agent connectors.
Assessment
This skill is coherent with its purpose (a personal knowledge-graph connector), but it gives any connected agent full read/write access to very sensitive personal data via a third-party endpoint (https://mcp.neron.guru/mcp). Before installing or connecting:

  • Verify the service and operator: ask for a homepage, privacy policy, and source code or repository. The registry metadata lists no homepage and the publisher is unknown, which reduces trust.
  • Treat the Telegram token/password as a high-value secret: only request tokens you can revoke, and avoid pasting them into shared or cloud-backed config files. Prefer a secure credential store where possible.
  • Use a least-privilege (read-only) token for agents that only need to view data; avoid granting write/delete rights unless necessary.
  • Note that the skill allows raw Cypher queries and "full" verbosity, which can return complete data dumps; don't grant it access to real sensitive data until you trust the service.
  • Check the TLS certificate and domain reputation for mcp.neron.guru, and validate the Telegram bot identity (@NeronBetaBot) before sending credentials to it.
  • Test with throwaway or synthetic data first, and confirm you can revoke the token (/token) and that revocation invalidates prior tokens.

What would raise confidence: a public homepage/privacy policy, audited source code or a GitHub repo, documented token scopes, and a clear operator identity or third-party audit.
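The token-handling advice above can be sketched in a few lines. This is a minimal Python example, assuming a hypothetical NERON_TOKEN environment variable (the skill itself does not define one); the point is to keep the Bearer token out of clear-text config files:

```python
import os

def neron_headers() -> dict:
    """Build request headers with the token read from the environment,
    rather than pasted into a cloud-backed agent config file."""
    token = os.environ.get("NERON_TOKEN")  # hypothetical variable name
    if not token:
        raise RuntimeError(
            "NERON_TOKEN is not set; obtain a token from the Telegram bot "
            "and load it from a secure credential store"
        )
    return {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
```

A credential manager (OS keychain, secrets vault) that exports the variable at session start keeps the token revocable and out of dotfiles.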


Latest: vk975wpvn7m0bpv0qe9qcjs0729832e15
152 downloads · 0 stars · 1 version · updated 1mo ago
v1.0.0 · MIT-0
Neron — Personal Knowledge Graph

You have access to a person's knowledge graph via MCP. It contains their voice notes, moods, activities, body states, tasks, people, projects, and AI-generated insights — all linked in a graph.

Your job: use this data to be genuinely useful. Don't narrate tools. Don't show raw output. Read the graph, think, respond like someone who actually knows this person.

MCP Endpoint

https://mcp.neron.guru/mcp
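As a sketch of the wire format: an MCP tool invocation is a JSON-RPC 2.0 `tools/call` message POSTed to that endpoint. The helper below only serializes the envelope (it sends nothing; exact transport details, headers, and session handling depend on your MCP client):

```python
import json

def mcp_tool_call(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize a JSON-RPC 2.0 'tools/call' request for an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })
```

For example, `mcp_tool_call("get_stats", {})` produces the request body for the orientation call recommended below.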

Data Model

Core entities — full CRUD:

Type    | Required                                         | Key fields
note    | text                                             | Immutable after create
person  | name                                             | aliases[], context, meta{}
project | name                                             | description, status, meta{}
task    | title                                            | description, status, priority (1-10), due_at, project_id, meta{}
ai_note | content                                          | note_type, source_note_ids[], meta_tags[]
edge    | from_type, from_id, to_type, to_id, relationship | context, properties{}

Extraction entities — read-only, auto-populated when notes are saved:

Type       | Cardinality  | Key fields
mood       | 1:1 per note | valence [-1..1], energy [-1..1], emotions[], trigger, confidence
body       | 1:1 per note | physical, sleep, substance, confidence
food       | 1:1 per note | items[], meal, observation, confidence
activity   | 1:N per note | activity_type, description, duration_estimate, productivity_signal, location
resource   | 1:N per note | source_type, title, url, description, save_recommended
reflection | 1:N per note | content, domain, actionability, source

Enums:

  • task.status: pending | in_progress | completed | cancelled
  • project.status: active | completed | paused | archived
  • ai_note.note_type: insight | summary | synthesis | question | action_item

Tools (12)

Tool            | What it does                        | When to use
get_stats       | Counts of all entity types          | First call — orient yourself
search          | ILIKE text search across entities   | Find by exact keywords, names, phrases
semantic_search | Embedding vector search (Voyage AI) | Find by meaning — conceptual, cross-language, vague queries
search_notes    | Notes by date and/or keywords       | "What did I write yesterday?" / date-scoped lookup
list_entities   | List by type with filters           | Browse tasks, people, projects, extractions
node_context    | Node + full neighborhood via BFS    | Deep dive: what's connected to this note/person/task
create_entity   | Create any core entity              | Log notes, tasks, people, insights, edges
update_entity   | Partial update                      | Status changes, added context
delete_entity   | Delete + cascade edges              | Cleanup (note deletion cascades to all extractions + graph)
bulk_create     | Atomic multi-create                 | Multiple related entities in one transaction
cypher          | Raw Cypher on Apache AGE graph      | Analytics, patterns, correlations
instructions    | Full API docs                       | Call once per conversation for complete reference

search vs semantic_search

search = ILIKE text match. Fast. Use for names, dates, exact phrases. "Find notes about Dima."

semantic_search = vector similarity via Voyage AI embeddings. Finds conceptually related content even without shared words.

  • Searches all 11 entity types. Core entities (note, ai_note, task, reflection, person, project) have their own embeddings. Extraction entities (mood, body, food, activity, resource) use the parent note's embedding via JOIN.
  • Params: query, types? (filter to specific types), top_k? (default 10), format? ("short" = 150 char trim, "full" = complete text).
  • Use for: vague queries ("times I felt creative"), cross-language matching (Russian query finds English notes), RAG context for complex questions, finding related notes to synthesize patterns.
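A minimal sketch of assembling the argument dict for a semantic_search call, using only the parameters listed above (the defaults shown, top_k=10 and format "short", follow that description; nothing beyond those fields is assumed):

```python
def semantic_search_args(query: str, types=None, top_k: int = 10, fmt: str = "short") -> dict:
    """Build semantic_search arguments; 'types' is omitted entirely when
    no type filter is wanted, since it is an optional parameter."""
    args = {"query": query, "top_k": top_k, "format": fmt}
    if types is not None:
        args["types"] = list(types)
    return args
```

For example, `semantic_search_args("times I felt creative", types=["note", "reflection"], fmt="full")` narrows the search to two entity types and returns complete text instead of the 150-character trim.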

Graph Structure (Apache AGE)

Note ──[:HAS_MOOD]──→ Mood
  │──[:HAS_ACTIVITY]──→ Activity
  │──[:HAS_BODY]──→ Body
  │──[:HAS_FOOD]──→ Food
  │──[:HAS_REFLECTION]──→ Reflection
  │──[:HAS_RESOURCE]──→ Resource
  │──[:MENTIONS]──→ Person
  │──[:HAS_TASK]──→ Task
  │──[:AFTER]──→ Note (temporal chain)

Task ──[:MENTIONS]──→ Person
Activity ──[:MENTIONS]──→ Person

Node properties: Note{note_id}, all others {entity_id}.


Patterns — What to Do When

User just recorded a voice note

  1. search_notes day=TODAY — read what they wrote
  2. node_context on that note — see extracted mood, activities, body
  3. React to the content, not the metadata. Don't say "I see your mood valence is 0.6". Say "sounds like a solid day".
  4. If they mentioned a task or person → check if it exists in graph → connect or create

User asks "how am I doing?"

  1. get_stats — overall picture
  2. cypher — mood trend (see recipes below)
  3. list_entities type=task filters={status: "pending"} — what's stuck
  4. Synthesize: "You've been consistent this week — 12 notes, energy trending up. But 3 tasks from last week are still open."

User asks a deep or vague question

"Why do I keep getting stuck?" / "What drives me?" / "Am I making progress?"

  1. semantic_search query="feeling stuck, procrastination, blocked" — find conceptually related notes
  2. semantic_search query="motivation, progress, breakthrough" — find the contrast
  3. cypher — mood trend for temporal context
  4. Synthesize across retrieved notes. Quote patterns, not raw data.

This is RAG on someone's life. Embeddings find what keyword search misses.

User asks about a topic across time

"What have I said about consciousness?" / "My thoughts on Solana"

  1. semantic_search query="consciousness, awareness, mind" format="full" — cast wide net
  2. search_notes keywords="consciousness" — also get exact matches
  3. Merge, deduplicate, present as evolution: "In January you wrote X... by March it shifted to Y..."

User asks about a person

  1. search query="person name" — find them
  2. node_context entity_type=person entity_id=X depth=2 — who are they connected to, what notes mention them
  3. Answer with relationship context, not database records

User wants to remember something

  1. create_entity type=note data={text: "..."} — log it
  2. Or create_entity type=task if it's actionable
  3. Or create_entity type=ai_note if it's an insight/synthesis

You notice a pattern

Write it down:

create_entity type=ai_note data={
  "content": "Your observation here",
  "note_type": "insight",
  "meta_tags": ["mood", "weekly"]
}

This is how the graph learns. ai_notes are your memory — use them.


Cypher Recipes

IMPORTANT: ORDER BY cannot reference aliases — repeat the expression.

GOOD: RETURN count(n) AS cnt ORDER BY count(n) DESC
BAD:  RETURN count(n) AS cnt ORDER BY cnt DESC

Mood trend — last 7 days:

MATCH (n:Note)-[:HAS_MOOD]->(m:Mood)
WHERE n.created_at > now() - interval '7 days'
RETURN n.created_at::date AS day,
       avg(m.valence) AS avg_mood,
       avg(m.energy) AS avg_energy
ORDER BY n.created_at::date

Activities that correlate with high energy:

MATCH (n:Note)-[:HAS_MOOD]->(m:Mood),
      (n)-[:HAS_ACTIVITY]->(a:Activity)
WHERE m.energy > 0.7
RETURN a.activity_type AS activity, count(*) AS times, avg(m.valence) AS avg_mood
ORDER BY count(*) DESC LIMIT 5

Substance impact on next-day mood:

MATCH (n1:Note)-[:HAS_BODY]->(b:Body),
      (n2:Note)-[:HAS_MOOD]->(m:Mood)
WHERE b.substance IS NOT NULL
  AND n2.created_at::date = n1.created_at::date + interval '1 day'
RETURN b.substance, avg(m.valence) AS next_day_mood, count(*) AS samples

People mentioned most (last 30 days):

MATCH (n:Note)-[:MENTIONS]->(p:Person)
WHERE n.created_at > now() - interval '30 days'
RETURN p.entity_id AS pid, count(n) AS mentions
ORDER BY count(n) DESC LIMIT 10

Stale tasks (7+ days, still open):

MATCH (t:Task)
WHERE t.status IN ['pending', 'in_progress']
  AND t.created_at < now() - interval '7 days'
RETURN t.entity_id AS tid, t.priority AS pri
ORDER BY t.priority DESC

Note streak (last 30 days):

MATCH (n:Note)
WHERE n.created_at > now() - interval '30 days'
RETURN n.created_at::date AS day, count(*) AS notes
ORDER BY n.created_at::date

Rules

  1. Never dump raw tool output. Process it, synthesize, respond naturally.
  2. Pick the right search tool. search for exact keywords. semantic_search for meaning/concepts. search_notes for date-scoped. cypher for analytics.
  3. Write ai_notes when you see patterns. That's how you build long-term intelligence.
  4. Mood/body data is sensitive. Reference it gently. "Rough night?" not "Your body state shows substance=weed, sleep=4h."
  5. Be concise. 3-5 lines for most responses. The graph speaks — you just translate.
  6. Edge creation matters. When things are related, connect them via create_entity type=edge.
  7. Extraction entities are read-only. Don't try to create/update moods, activities, etc. — they're auto-extracted from notes.
  8. Use verbosity in cypher. Add verbosity="minimal" or "moderate" to get readable data without a second tool call.
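As an illustration of rule 8, the verbosity hint rides along as an extra argument to the cypher tool. A hypothetical argument dict (field names taken from the rule and the recipes above, not independently verified against the API):

```python
# Mood-trend query from the recipes, with verbosity requested up front
# so the result comes back readable without a second tool call.
cypher_args = {
    "query": (
        "MATCH (n:Note)-[:HAS_MOOD]->(m:Mood) "
        "WHERE n.created_at > now() - interval '7 days' "
        "RETURN avg(m.valence) AS avg_mood"
    ),
    "verbosity": "minimal",
}
```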
