Telnyx Intelligence Provider

Telnyx text-inference provider for OpenClaw — native OpenAI-compatible chat completions via Telnyx AI, no LiteLLM proxy needed

Audits

Pass

Install

openclaw plugins install clawhub:telnyx-intelligence-provider

telnyx-openclaw-intelligence

Native Telnyx AI text-inference provider for OpenClaw.

Routes OpenAI-compatible chat completion requests directly to the Telnyx AI Inference API — no LiteLLM proxy needed.

  • OpenAI-compatible API endpoint
  • Full streaming (SSE) support
  • Bearer token auth via TELNYX_API_KEY
  • Access to the live Telnyx AI model catalog: Llama 3.x, Qwen, Kimi, MiniMax, GLM, Gemma, OpenAI GPT, Claude, Gemini, Groq OSS, and more

Requirements

  • An OpenClaw installation with plugin support
  • A Telnyx API key (see Configuration below)
  • Node.js and npm, if building the plugin from source

Installation

From local source (development)

openclaw plugins install /path/to/telnyx-openclaw-intelligence

Or from your project root:

cd ~/projects/telnyx-openclaw-intelligence
npm install && npm run build
openclaw plugins install .

From ClawHub (once published)

clawhub package install telnyx-openclaw-intelligence

Configuration

Set your Telnyx API key:

export TELNYX_API_KEY="KEY_your_api_key_here"

Or configure it in openclaw.json:

{
  "models": {
    "providers": {
      "telnyx": {
        "apiKey": "KEY_your_api_key_here"
      }
    }
  }
}

Select a model

# Use the default model (meta-llama/Meta-Llama-3.1-70B-Instruct)
/model telnyx/meta-llama/Meta-Llama-3.1-70B-Instruct

# Or configure a default in openclaw.json
{
  "agents": {
    "defaults": {
      "model": "telnyx/meta-llama/Meta-Llama-3.1-70B-Instruct"
    }
  }
}

Available Models

  • meta-llama/Meta-Llama-3.1-70B-Instruct: Default — strong general-purpose Llama 3.1 model
  • meta-llama/Meta-Llama-3.1-8B-Instruct: Fast, lightweight Llama 3.1 model
  • meta-llama/Llama-3.3-70B-Instruct: Llama 3.3 generation model
  • Qwen/Qwen3-235B-A22B: Large MoE reasoning model
  • moonshotai/Kimi-K2.6: Long-context Kimi model
  • moonshotai/Kimi-K2.5: Long-context Kimi model
  • MiniMaxAI/MiniMax-M2.7: Long-context MiniMax model
  • zai-org/GLM-5.1-FP8: GLM model
  • google/gemma-2b-it: Tiny, fast Gemma model
  • openai/gpt-5: OpenAI GPT-5
  • openai/gpt-5.1: OpenAI GPT-5.1
  • openai/gpt-5.2: OpenAI GPT-5.2
  • openai/gpt-4.1: OpenAI GPT-4.1
  • openai/gpt-4o: OpenAI GPT-4o
  • openai/gpt-4o-mini: OpenAI GPT-4o mini
  • anthropic/claude-haiku-4-5: Claude Haiku
  • anthropic/claude-opus-4-6: Claude Opus
  • google/gemini-2.5-flash: Gemini Flash
  • Groq/gpt-oss-120b: Groq-hosted OSS model

Note: The model list evolves as Telnyx adds/removes models. Use the Telnyx models endpoint to see the current list:

curl -s https://api.telnyx.com/v2/ai/models -H "Authorization: Bearer $TELNYX_API_KEY" | jq '.data[].id'
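
For programmatic use (for example, refreshing a provider-side model catalog), the same endpoint can be queried from TypeScript. This is a minimal sketch, assuming a Node 18+ runtime with global fetch and the { data: [{ id: string }] } response shape implied by the jq filter above; the listTelnyxModels helper name is illustrative, not part of the plugin:

// Illustrative helper: list the current Telnyx AI model catalog.
// Assumes Node 18+ (global fetch) and a { data: [{ id: string }] } response body.
async function listTelnyxModels(): Promise<string[]> {
  const res = await fetch("https://api.telnyx.com/v2/ai/models", {
    headers: { Authorization: `Bearer ${process.env.TELNYX_API_KEY}` },
  });
  if (!res.ok) throw new Error(`Telnyx models request failed: ${res.status}`);
  const body = (await res.json()) as { data: { id: string }[] };
  return body.data.map((model) => model.id);
}

listTelnyxModels().then((ids) => console.log(ids.join("\n")));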

API Endpoint

POST https://api.telnyx.com/v2/ai/chat/completions
Authorization: Bearer <TELNYX_API_KEY>
Content-Type: application/json

Example request

{
  "model": "meta-llama/Meta-Llama-3.1-70B-Instruct",
  "messages": [
    { "role": "system", "content": "You are a personal assistant." },
    { "role": "user", "content": "What is Telnyx?" }
  ],
  "stream": true,
  "temperature": 0.7,
  "max_tokens": 1024
}

Live smoke test

curl -s -X POST https://api.telnyx.com/v2/ai/chat/completions \
  -H "Authorization: Bearer $TELNYX_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Meta-Llama-3.1-70B-Instruct",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "max_tokens": 50,
    "stream": false
  }' | jq '.choices[0].message.content'
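
The example request above sets "stream": true. Below is a minimal TypeScript sketch of consuming the resulting stream; it assumes the endpoint emits OpenAI-style SSE chunks (data: {json} lines ending with data: [DONE]), which is what an OpenAI-compatible completions API normally produces. The streamCompletion helper name is illustrative, not part of the plugin:

// Illustrative only: stream a chat completion and print content deltas.
// Assumes Node 18+ (global fetch) and OpenAI-style SSE framing.
async function streamCompletion(prompt: string): Promise<void> {
  const res = await fetch("https://api.telnyx.com/v2/ai/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.TELNYX_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "meta-llama/Meta-Llama-3.1-70B-Instruct",
      messages: [{ role: "user", content: prompt }],
      stream: true,
      max_tokens: 256,
    }),
  });
  if (!res.ok || !res.body) throw new Error(`Request failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    // SSE events are newline-delimited; each data line carries one JSON chunk.
    let newline: number;
    while ((newline = buffer.indexOf("\n")) !== -1) {
      const line = buffer.slice(0, newline).trim();
      buffer = buffer.slice(newline + 1);
      if (!line.startsWith("data:")) continue;
      const payload = line.slice("data:".length).trim();
      if (payload === "[DONE]") return;
      const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
      if (delta) process.stdout.write(delta);
    }
  }
}

streamCompletion("Say hello in one sentence.");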

Development

# Install dependencies (resolves against local openclaw)
npm install

# Build (compiles TypeScript to dist/)
npm run build

# Run unit tests
npm test

# Type-check only (no output)
npm run lint

Project structure

telnyx-openclaw-intelligence/
├── index.ts                         # Plugin entry point
├── src/
│   ├── intelligence-provider.ts     # Provider registration (registerProvider)
│   ├── webchat-completions.ts       # Shared Telnyx model/provider helpers
│   └── webchat-completions.test.ts  # Unit tests for model/base URL/catalog behavior
├── openclaw.plugin.json             # Plugin manifest
├── package.json
├── tsconfig.json                    # Lint (noEmit)
├── tsconfig.build.json              # Build (with output)
├── README.md
├── LICENSE
└── SKILL.md

Architecture Notes

  • No proxy: The plugin calls Telnyx directly. No LiteLLM or middleware.
  • OpenAI-compat transport: OpenClaw's built-in openai-completions transport is reused.
  • Model normalization: The telnyx/ prefix is stripped before requests (e.g., telnyx/meta-llama/... becomes meta-llama/...); see the sketch after this list.
  • Auth: Standard OpenClaw API-key auth flow — env var or openclaw setup telnyx.
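
As a rough illustration of that normalization step (not the plugin's actual code), stripping the provider prefix can be as small as:

// Illustrative helper: drop the leading "telnyx/" provider prefix so the
// remaining model id is what gets sent to the Telnyx API.
function normalizeModelId(model: string): string {
  return model.startsWith("telnyx/") ? model.slice("telnyx/".length) : model;
}

normalizeModelId("telnyx/meta-llama/Meta-Llama-3.1-70B-Instruct");
// -> "meta-llama/Meta-Llama-3.1-70B-Instruct"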

Related Plugins


License

MIT © Telnyx LLC