Doubleword API

v1.0.0

Create, submit, monitor, and retrieve asynchronous batch AI inference jobs via the Doubleword API using JSONL files for large or cost-sensitive workloads.

2 stars · 1.8k downloads · 0 current · 0 all-time
Security Scan

VirusTotal: Benign (view report)
OpenClaw: Suspicious (high confidence)
Purpose & Capability
Files and SKILL.md consistently describe a Doubleword batch-inference workflow (upload JSONL, create batch, poll, download results), and the included script generates JSONL lines, which matches the stated purpose. However, the package metadata contains no description or homepage and does not declare the API-key environment variable the instructions clearly require: the manifest is inconsistent with the skill's runtime needs.
Instruction Scope
Runtime instructions call the external domain api.doubleword.ai and require an Authorization header containing $DOUBLEWORD_API_KEY. Aside from the undeclared environment variable, the instructions stay within the expected scope (create, upload, poll, download). They do not ask the agent to read unrelated system files or secrets, but they do direct network uploads of user-provided JSONL to a third-party endpoint; users should be aware that any data in the batch files will be transmitted off-host.
Install Mechanism
No install spec is present (instruction-only plus a small helper script). No downloads from arbitrary URLs or package installs are performed, so the disk/installation risk is low.
Credentials
The SKILL.md repeatedly requires an API key via Authorization: Bearer $DOUBLEWORD_API_KEY, but the skill metadata declares no required env vars or primary credential. Requesting a single service API key would be proportionate for this functionality — the problem is the metadata omission and lack of clarity about expected permissions and where that key is stored or used.
Persistence & Privilege
The skill does not request persistent or elevated privileges; always:false and disable-model-invocation defaults are normal. The skill contains no code that modifies other skills or system-wide configuration.
What to consider before installing
Before installing, verify the following:

  1. The skill expects an API key (DOUBLEWORD_API_KEY) but the package metadata doesn't declare it; ask the author to add requires.env and a clear primary credential entry.
  2. Confirm the endpoint (https://api.doubleword.ai) and the publisher (a homepage or contact) so you can judge trustworthiness; an unknown owner with no homepage reduces confidence.
  3. Understand that any data in uploaded JSONL files will be sent to the external service; do not include sensitive secrets or PII in batch inputs unless you trust the service and key.
  4. Prefer a scoped, limited API key and test with non-sensitive data first.
  5. The included script is a benign JSONL generator (no networking), but double-check network traffic in a controlled environment.

If the author cannot justify the missing env declaration and provenance, treat this skill as untrusted.


Latest: vk972eq86w1j9yryhnvph1t0cm98006py
1.8k downloads · 2 stars · 1 version
Updated 1mo ago
v1.0.0 · MIT-0

Doubleword Batch Inference

Process multiple AI inference requests asynchronously using the Doubleword batch API.

When to Use Batches

Batches are ideal for:

  • Multiple independent requests that can run simultaneously
  • Workloads that don't require immediate responses
  • Large volumes that would exceed rate limits if sent individually
  • Cost-sensitive workloads (24h window offers better pricing)

Quick Start

Basic workflow for any batch job:

  1. Create JSONL file with requests (one JSON object per line)
  2. Upload file to get file ID
  3. Create batch using file ID
  4. Poll status until complete
  5. Download results from output_file_id
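The five steps can be sketched end to end in Python. This is a minimal sketch, assuming the requests library and the endpoints and response fields detailed in the Workflow below (file id, batch status, output_file_id); error handling is kept to raise_for_status and the helper name run_batch is illustrative, not part of the API:

```python
import time
import requests

BASE_URL = "https://api.doubleword.ai/v1"
TERMINAL_STATUSES = {"completed", "failed", "expired", "cancelled"}

def run_batch(jsonl_path: str, api_key: str, completion_window: str = "24h") -> str:
    """Upload a JSONL file, create a batch, poll to completion, return raw results."""
    headers = {"Authorization": f"Bearer {api_key}"}

    # Steps 1-2: upload the request file and capture its file ID.
    with open(jsonl_path, "rb") as f:
        resp = requests.post(f"{BASE_URL}/files", headers=headers,
                             data={"purpose": "batch"}, files={"file": f})
    resp.raise_for_status()
    file_id = resp.json()["id"]

    # Step 3: create the batch job from the uploaded file.
    resp = requests.post(f"{BASE_URL}/batches", headers=headers, json={
        "input_file_id": file_id,
        "endpoint": "/v1/chat/completions",
        "completion_window": completion_window,
    })
    resp.raise_for_status()
    batch = resp.json()

    # Step 4: poll every 30 seconds until the batch reaches a terminal state.
    while batch["status"] not in TERMINAL_STATUSES:
        time.sleep(30)
        batch = requests.get(f"{BASE_URL}/batches/{batch['id']}",
                             headers=headers).json()

    # Step 5: download the results file (JSONL, one result per line).
    resp = requests.get(f"{BASE_URL}/files/{batch['output_file_id']}/content",
                        headers=headers)
    resp.raise_for_status()
    return resp.text
```

Call it as run_batch("batch_requests.jsonl", api_key) once DOUBLEWORD_API_KEY is available; the individual steps below explain each call.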

Workflow

Step 1: Create Batch Request File

Create a .jsonl file where each line contains a single request:

{"custom_id": "req-1", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "anthropic/claude-3-5-sonnet", "messages": [{"role": "user", "content": "What is 2+2?"}]}}
{"custom_id": "req-2", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "anthropic/claude-3-5-sonnet", "messages": [{"role": "user", "content": "What is the capital of France?"}]}}

Required fields per line:

  • custom_id: Unique identifier (max 64 chars) - use descriptive IDs like "user-123-question-5" for easier result mapping
  • method: Always "POST"
  • url: Always "/v1/chat/completions"
  • body: Standard API request with model and messages

Optional body parameters:

  • temperature: 0-2 (default: 1.0)
  • max_tokens: Maximum response tokens
  • top_p: Nucleus sampling parameter
  • stop: Stop sequences

File limits:

  • Max size: 200MB
  • Format: JSONL only (JSON Lines - newline-delimited JSON)
  • Split large batches into multiple files if needed
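The field rules above can be enforced when generating lines. A sketch (the helper name make_request_line is illustrative; the field constraints are the ones listed above):

```python
import json

def make_request_line(custom_id: str, model: str, messages: list, **body_params) -> str:
    """Return one JSONL line for the batch file, enforcing the required fields."""
    if not custom_id or len(custom_id) > 64:
        raise ValueError("custom_id must be 1-64 characters")
    request = {
        "custom_id": custom_id,
        "method": "POST",                # always "POST"
        "url": "/v1/chat/completions",   # always this endpoint
        "body": {"model": model, "messages": messages, **body_params},
    }
    return json.dumps(request)

# Example: two requests, the second with optional body parameters.
lines = [
    make_request_line("req-1", "anthropic/claude-3-5-sonnet",
                      [{"role": "user", "content": "What is 2+2?"}]),
    make_request_line("req-2", "anthropic/claude-3-5-sonnet",
                      [{"role": "user", "content": "What is the capital of France?"}],
                      temperature=0.2, max_tokens=100),
]
# with open("batch_requests.jsonl", "w") as f:
#     f.write("\n".join(lines) + "\n")
```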

Helper script: Use scripts/create_batch_file.py to generate JSONL files programmatically:

python scripts/create_batch_file.py output.jsonl

Modify the script's requests list to generate your specific batch requests.

Step 2: Upload File

Upload the JSONL file:

curl https://api.doubleword.ai/v1/files \
  -H "Authorization: Bearer $DOUBLEWORD_API_KEY" \
  -F purpose="batch" \
  -F file="@batch_requests.jsonl"

The response contains an id field; save this file ID for the next step.
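The same upload in Python, a sketch assuming the requests library (the helper names auth_headers and upload_batch_file are illustrative):

```python
import requests

def auth_headers(api_key: str) -> dict:
    """Bearer-token header used by every Doubleword call."""
    return {"Authorization": f"Bearer {api_key}"}

def upload_batch_file(path: str, api_key: str) -> str:
    """Upload a JSONL batch file; returns the file ID for the create-batch step."""
    with open(path, "rb") as f:
        resp = requests.post(
            "https://api.doubleword.ai/v1/files",
            headers=auth_headers(api_key),
            data={"purpose": "batch"},  # matches -F purpose="batch"
            files={"file": f},          # multipart upload, matches -F file=@...
        )
    resp.raise_for_status()
    return resp.json()["id"]
```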

Step 3: Create Batch

Create the batch job using the file ID:

curl https://api.doubleword.ai/v1/batches \
  -H "Authorization: Bearer $DOUBLEWORD_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input_file_id": "file-abc123",
    "endpoint": "/v1/chat/completions",
    "completion_window": "24h"
  }'

Parameters:

  • input_file_id: File ID from upload step
  • endpoint: Always "/v1/chat/completions"
  • completion_window: Choose "24h" (better pricing) or "1h" (50% premium, faster results)

The response contains a batch id; save it for status polling.
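The create-batch call in Python, validating completion_window locally before sending anything (a sketch; the helper name create_batch is illustrative):

```python
import requests

VALID_WINDOWS = {"24h", "1h"}  # 24h for better pricing, 1h for faster results

def create_batch(input_file_id: str, api_key: str,
                 completion_window: str = "24h") -> dict:
    """Create a batch job from an uploaded file ID; returns the batch object."""
    if completion_window not in VALID_WINDOWS:
        raise ValueError(f"completion_window must be one of {sorted(VALID_WINDOWS)}")
    resp = requests.post(
        "https://api.doubleword.ai/v1/batches",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "input_file_id": input_file_id,
            "endpoint": "/v1/chat/completions",
            "completion_window": completion_window,
        },
    )
    resp.raise_for_status()
    return resp.json()

# Example (file-abc123 is the ID returned by the upload step):
# batch = create_batch("file-abc123", api_key)
# print(batch["id"])
```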

Step 4: Poll Status

Check batch progress:

curl https://api.doubleword.ai/v1/batches/batch-xyz789 \
  -H "Authorization: Bearer $DOUBLEWORD_API_KEY"

Status progression:

  1. validating - Checking input file format
  2. in_progress - Processing requests
  3. completed - All requests finished

Other statuses:

  • failed - Batch failed (check error_file_id)
  • expired - Batch timed out
  • cancelling/cancelled - Batch cancelled

Response includes:

  • output_file_id - Download results here
  • error_file_id - Failed requests (if any)
  • request_counts - Total/completed/failed counts

Polling frequency: Check every 30-60 seconds during processing.

Early access: Results are available via output_file_id before the batch fully completes; check the X-Incomplete header.
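The polling step can be wrapped in a loop that respects the recommended 30-60 second interval and stops on any terminal status. A sketch (the helper names is_terminal and wait_for_batch are illustrative; the request_counts shape follows the response description above):

```python
import time
import requests

TERMINAL = {"completed", "failed", "expired", "cancelled"}

def is_terminal(status: str) -> bool:
    """True once the batch will make no further progress."""
    return status in TERMINAL

def wait_for_batch(batch_id: str, api_key: str, interval: float = 45.0) -> dict:
    """Poll every `interval` seconds until the batch reaches a terminal status."""
    while True:
        resp = requests.get(
            f"https://api.doubleword.ai/v1/batches/{batch_id}",
            headers={"Authorization": f"Bearer {api_key}"},
        )
        resp.raise_for_status()
        batch = resp.json()
        counts = batch.get("request_counts", {})
        print(f"{batch['status']}: "
              f"{counts.get('completed', 0)}/{counts.get('total', 0)} done")
        if is_terminal(batch["status"]):
            return batch
        time.sleep(interval)
```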

Step 5: Download Results

Download completed results:

curl https://api.doubleword.ai/v1/files/file-output123/content \
  -H "Authorization: Bearer $DOUBLEWORD_API_KEY" \
  > results.jsonl

Response headers:

  • X-Incomplete: true - Batch still processing, more results coming
  • X-Last-Line: 45 - Resume point for partial downloads

Output format (each line):

{
  "id": "batch-req-abc",
  "custom_id": "request-1",
  "response": {
    "status_code": 200,
    "body": {
      "id": "chatcmpl-xyz",
      "choices": [{
        "message": {
          "role": "assistant",
          "content": "The answer is 4."
        }
      }]
    }
  }
}

Download errors (if any):

curl https://api.doubleword.ai/v1/files/file-error123/content \
  -H "Authorization: Bearer $DOUBLEWORD_API_KEY" \
  > errors.jsonl

Error format (each line):

{
  "id": "batch-req-def",
  "custom_id": "request-2",
  "error": {
    "code": "invalid_request",
    "message": "Missing required parameter"
  }
}

Additional Operations

List All Batches

curl https://api.doubleword.ai/v1/batches?limit=10 \
  -H "Authorization: Bearer $DOUBLEWORD_API_KEY"

Cancel Batch

curl https://api.doubleword.ai/v1/batches/batch-xyz789/cancel \
  -X POST \
  -H "Authorization: Bearer $DOUBLEWORD_API_KEY"

Notes:

  • Unprocessed requests are cancelled
  • Already-processed results remain downloadable
  • Cannot cancel completed batches

Common Patterns

Processing Results

Parse JSONL output line-by-line:

import json

with open('results.jsonl') as f:
    for line in f:
        result = json.loads(line)
        custom_id = result['custom_id']
        content = result['response']['body']['choices'][0]['message']['content']
        print(f"{custom_id}: {content}")

Handling Partial Results

Check for incomplete batches and resume:

import os
import requests

api_key = os.environ["DOUBLEWORD_API_KEY"]

response = requests.get(
    'https://api.doubleword.ai/v1/files/file-output123/content',
    headers={'Authorization': f'Bearer {api_key}'}
)

if response.headers.get('X-Incomplete') == 'true':
    last_line = int(response.headers.get('X-Last-Line', 0))
    print(f"Batch incomplete. Processed {last_line} requests so far.")
    # Continue polling and download again later

Retry Failed Requests

Extract failed requests from error file and resubmit:

import json

failed_ids = []
with open('errors.jsonl') as f:
    for line in f:
        error = json.loads(line)
        failed_ids.append(error['custom_id'])

print(f"Failed requests: {failed_ids}")
# Create new batch with only failed requests
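The resubmission step left open above amounts to filtering the original request file down to the failed custom_ids. A sketch with in-memory data (the helper name build_retry_lines is illustrative; with files, read batch_requests.jsonl and errors.jsonl instead):

```python
import json

def build_retry_lines(original_lines, failed_ids):
    """Keep only the original request lines whose custom_id appears in failed_ids."""
    failed = set(failed_ids)
    return [line for line in original_lines
            if json.loads(line)["custom_id"] in failed]

# In-memory example; real bodies would carry model and messages.
original = [
    '{"custom_id": "req-1", "method": "POST", "url": "/v1/chat/completions", "body": {}}',
    '{"custom_id": "req-2", "method": "POST", "url": "/v1/chat/completions", "body": {}}',
]
retry_lines = build_retry_lines(original, ["req-2"])
# Write retry_lines to retry.jsonl, then upload and create a new batch as in Steps 2-3.
```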

Best Practices

  1. Descriptive custom_ids: Include context in IDs for easier result mapping

    • Good: "user-123-question-5"
    • Bad: "1", "req1"
  2. Validate JSONL locally: Ensure each line is valid JSON before upload

  3. Split large files: Keep under 200MB limit

  4. Choose appropriate window: Use 24h for cost savings, 1h only when time-sensitive

  5. Handle errors gracefully: Always check error_file_id and retry failed requests

  6. Monitor request_counts: Track progress via completed/total ratio

  7. Save file IDs: Store batch_id, input_file_id, output_file_id for later retrieval

Reference Documentation

For complete API details including authentication, rate limits, and advanced parameters, see:

  • API Reference: references/api_reference.md - Full endpoint documentation and schemas
