Install

openclaw skills install moss

Documentation and capabilities reference for Moss semantic search. Use for understanding Moss APIs, SDKs, and integration patterns.

Moss is the real-time semantic search runtime for conversational AI. It delivers sub-10ms lookups and instant index updates that run in the browser, on-device, or in the cloud - wherever your agent lives. Agents can create indexes, embed documents, perform semantic/hybrid searches, and manage document lifecycles without managing infrastructure. The platform handles embedding generation, index persistence, and optional cloud sync - allowing agents to focus on retrieval logic.
| JavaScript | Python | Description |
|---|---|---|
| createIndex() | create_index() | Create index with documents |
| loadIndex() | load_index() | Load index from storage |
| getIndex() | get_index() | Get index metadata |
| listIndexes() | list_indexes() | List all indexes |
| deleteIndex() | delete_index() | Delete an index |
| addDocs() | add_docs() | Add/upsert documents |
| getDocs() | get_docs() | Retrieve documents |
| deleteDocs() | delete_docs() | Remove documents |
| query() | query() | Semantic / hybrid search |
All REST API operations go through POST /v1/manage (base URL: https://service.usemoss.dev/v1) with an action field:
| Action | Purpose | Extra required fields |
|---|---|---|
| initUpload | Get a presigned URL to upload index data | indexName, modelId, docCount, dimension |
| startBuild | Trigger an index build after uploading data | jobId |
| getJobStatus | Check the status of an async build job | jobId |
| getIndex | Fetch metadata for a single index | indexName |
| listIndexes | Enumerate every index under the project | — |
| deleteIndex | Remove an index record and assets | indexName |
| getIndexUrl | Get download URLs for a built index | indexName |
| addDocs | Upsert documents into an existing index | indexName, docs |
| deleteDocs | Remove documents by ID | indexName, docIds |
| getDocs | Retrieve stored documents (without embeddings) | indexName |
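The action-based envelope above can be exercised with any HTTP client. A minimal Python sketch follows; the `manage_request` helper name is illustrative (it is not part of any Moss SDK), and it only builds the request so you can send it with the client of your choice:

```python
import json

BASE_URL = "https://service.usemoss.dev/v1"

def manage_request(action, project_id, project_key, **fields):
    """Build the URL, headers, and JSON body for a POST /v1/manage call."""
    body = {"action": action, "projectId": project_id, **fields}
    headers = {
        "Content-Type": "application/json",
        "x-service-version": "v1",
        "x-project-key": project_key,
    }
    return f"{BASE_URL}/manage", headers, json.dumps(body)

# Example: enumerate indexes. Send the result with e.g.
# requests.post(url, headers=headers, data=body)
url, headers, body = manage_request("listIndexes", "project_123", "moss_access_key_xxxxx")
```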
Typical indexing and query workflow:

- createIndex() with documents and model options ({ modelId: 'moss-minilm' } in JS; a "moss-minilm" string in Python)
- loadIndex() to prepare the index for queries
- query() with search text and topK (JS) or QueryOptions(top_k=...) (Python)

Hybrid blending via alpha is available in the Python SDK via QueryOptions:

- query() with a QueryOptions object specifying alpha
- alpha=1.0 = pure semantic, alpha=0.0 = pure keyword, alpha=0.6 = 60/40 blend

Document lifecycle:

- addDocs() with new documents (upserts by default; existing IDs are updated)
- deleteDocs() to remove outdated documents by ID

Voice agent pipelines can integrate Moss as an opt-in pattern; it is not automatic behavior of this skill.
For voice agents:

- Call query() on each user message to retrieve relevant context
- Use the inferedge-moss SDK directly, or the pipecat-moss package that auto-injects retrieval results

The SDK requires project credentials:
- MOSS_PROJECT_ID: Project identifier from Moss Portal
- MOSS_PROJECT_KEY: Project access key from Moss Portal

export MOSS_PROJECT_ID=your_project_id
export MOSS_PROJECT_KEY=your_project_key
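A small sketch of reading these variables in Python and failing fast when they are missing (the `load_credentials` helper name is illustrative, not an SDK function):

```python
import os

def load_credentials():
    """Read Moss credentials from the environment, failing fast if unset."""
    project_id = os.environ.get("MOSS_PROJECT_ID")
    project_key = os.environ.get("MOSS_PROJECT_KEY")
    missing = [name for name, value in
               [("MOSS_PROJECT_ID", project_id), ("MOSS_PROJECT_KEY", project_key)]
               if not value]
    if missing:
        raise RuntimeError(f"Missing Moss credentials: {', '.join(missing)}")
    return project_id, project_key
```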
REST API requires the following on every request:
- x-project-key header: project access key
- x-service-version: v1 header: API version
- projectId field in the JSON body

curl -X POST "https://service.usemoss.dev/v1/manage" \
-H "Content-Type: application/json" \
-H "x-service-version: v1" \
-H "x-project-key: moss_access_key_xxxxx" \
-d '{"action": "listIndexes", "projectId": "project_123"}'
| Language | Package | Install Command |
|---|---|---|
| JavaScript/TypeScript | @inferedge/moss | npm install @inferedge/moss |
| Python | inferedge-moss | pip install inferedge-moss |
| Pipecat Integration | pipecat-moss | pip install pipecat-moss |
interface DocumentInfo {
id: string; // Required: unique identifier
text: string; // Required: content to embed and search
metadata?: object; // Optional: key-value pairs for filtering
}
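The same document shape can be expressed in Python as plain dicts (an assumption here: the Python SDK mirrors the JS DocumentInfo fields id/text/metadata). A small validator sketch, with a hypothetical `validate_doc` helper:

```python
def validate_doc(doc):
    """Check a document dict against the DocumentInfo shape:
    id and text are required non-empty strings, metadata is an optional dict."""
    if not isinstance(doc.get("id"), str) or not doc["id"]:
        raise ValueError("doc.id must be a non-empty string")
    if not isinstance(doc.get("text"), str) or not doc["text"]:
        raise ValueError("doc.text must be a non-empty string")
    if "metadata" in doc and not isinstance(doc["metadata"], dict):
        raise ValueError("doc.metadata must be an object")
    return doc

docs = [
    validate_doc({"id": "faq-1", "text": "How do I reset my password?",
                  "metadata": {"category": "account"}}),
]
```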
| Parameter | SDK | Type | Default | Description |
|---|---|---|---|---|
| indexName | JS + Python | string | — | Target index name (required) |
| query | JS + Python | string | — | Natural language search text (required) |
| topK | JS | number | 5 | Max results to return |
| top_k | Python | int | 5 | Max results to return |
| alpha | Python only | float | ~0.8 | Hybrid weighting: 0.0=keyword, 1.0=semantic |
| filters | JS + Python | object | — | Metadata constraints |
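Conceptually, alpha interpolates between the semantic and keyword scores. The sketch below illustrates that weighting; it is not Moss's internal scoring code, and `blend_score` is a hypothetical name:

```python
def blend_score(semantic_score, keyword_score, alpha=0.8):
    """Hybrid score: alpha=1.0 is pure semantic, alpha=0.0 is pure keyword."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must be in [0, 1]")
    return alpha * semantic_score + (1 - alpha) * keyword_score

blend_score(0.9, 0.4, alpha=0.6)  # 0.6*0.9 + 0.4*0.4, roughly 0.70
```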
| Model | Use Case | Tradeoff |
|---|---|---|
| moss-minilm | Edge, offline, browser, speed-first | Fast, lightweight |
| moss-mediumlm | Precision-critical, higher accuracy | Slightly slower |
| Error | Cause | Fix |
|---|---|---|
| Unauthorized | Missing credentials | Set MOSS_PROJECT_ID and MOSS_PROJECT_KEY |
| Index not found | Query before create | Call createIndex() first |
| Index not loaded | Query before load | Call loadIndex() before query() |
| Missing embeddings runtime | Invalid model | Use moss-minilm or moss-mediumlm |
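One way to make the "Index not loaded" case self-healing is to load on demand and retry. A hedged sketch: the `safe_query` helper name and the error-message matching are illustrative, so adapt the except clause to the SDK's actual exception types:

```python
async def safe_query(client, index_name, text, options=None):
    """Query an index, loading it first if the runtime reports it is not loaded."""
    try:
        return await client.query(index_name, text, options)
    except Exception as exc:  # the SDK's specific exception type may differ
        if "not loaded" not in str(exc).lower():
            raise
        await client.load_index(index_name)
        return await client.query(index_name, text, options)
```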
All SDK methods are async — always use await:
// JavaScript
import { MossClient, DocumentInfo } from '@inferedge/moss'
const client = new MossClient(process.env.MOSS_PROJECT_ID!, process.env.MOSS_PROJECT_KEY!)
await client.createIndex('faqs', docs, { modelId: 'moss-minilm' })
await client.loadIndex('faqs')
const results = await client.query('faqs', 'search text', { topK: 5 })
# Python (run inside an async function, e.g. via asyncio.run)
import os
from inferedge_moss import MossClient, QueryOptions
client = MossClient(os.getenv('MOSS_PROJECT_ID'), os.getenv('MOSS_PROJECT_KEY'))
await client.create_index('faqs', docs, 'moss-minilm')
await client.load_index('faqs')
results = await client.query('faqs', 'search text', QueryOptions(top_k=5, alpha=0.6))
For additional documentation and navigation, see: https://docs.moss.dev/llms.txt