Macrocosmos

v1.0.4

Fetch real-time social media data from X (Twitter) and Reddit by keyword, username, date range, and filters with engagement metrics via Macrocosmos SN13 API.

Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The SKILL.md describes exactly the claimed capability (fetching X/Reddit data via Macrocosmos SN13). The API endpoints, request/response format, and examples align with the stated purpose. However, the registry metadata (source/homepage/required env vars) does not match the SKILL.md: the registry lists no required env var, homepage, or source, while SKILL.md requires an MC_API key and points to GitHub/PyPI. That metadata mismatch is an inconsistency that should be resolved before trusting the skill.
Instruction Scope
The instructions are narrowly scoped to making POST requests to the Macrocosmos SN13 endpoint (and using a Python SDK) and do not instruct the agent to read arbitrary local files or exfiltrate data to unrelated endpoints. Example calls clearly show the API key being used only for requests to constellation.api.cloud.macrocosmos.ai.
Install Mechanism
This is an instruction-only skill with no install spec (lowest install risk). However, SKILL.md references a Python SDK ('macrocosmos' on PyPI) but provides no install guidance. That omission is not necessarily malicious but is an operational inconsistency: an agent may try to install the SDK or fail at runtime if the client is expected but not present.
Credentials
SKILL.md requires a secret MC_API environment variable (used as a Bearer token). That credential is proportionate to the skill's function, but the skill registry metadata did not declare any required env vars or primary credential. The missing declaration is a material inconsistency: users may not be warned that the skill needs a secret, and automated permission controls may not be applied. Verify the MC_API scope/permissions before providing it and confirm the registry metadata is corrected.
Persistence & Privilege
The skill does not request always:true, does not ask to modify agent/system configuration, and does not request persistent system privileges. Default autonomy (disable-model-invocation:false) is normal and not by itself concerning.
What to consider before installing
Do not install or supply secrets until the metadata mismatch is resolved. Specific actions to consider:

  1. Ask the skill publisher to update registry metadata to declare MC_API as a required secret and to provide the GitHub/PyPI homepage/source links they cite in SKILL.md.
  2. Verify the upstream repository (https://github.com/macrocosm-os/macrocosmos-mcp) and PyPI package to ensure the package and endpoint are legitimate.
  3. If you must test, create a low-privilege or limited-use MC_API key and avoid using high-privilege credentials.
  4. Confirm the domain (constellation.api.cloud.macrocosmos.ai) is the intended recipient of the key and review the provider's privacy/data retention policy.
  5. Request an explicit install spec if you expect the Python SDK to be used (so installations are visible/auditable).

The current inconsistencies could be innocent (a metadata omission) but should be fixed before granting access to secrets.


latest: vk97b6qwsk023qakxfbcxvassgd819300
609 downloads · 2 stars · 5 versions · Updated 1mo ago
v1.0.4 · MIT-0

Macrocosmos SN13 API - Social Media Data Skill

Fetch real-time social media data from X (Twitter) and Reddit by keyword, username, date range, and filters with engagement metrics via Macrocosmos SN13 API on Bittensor.

Metadata

Required Environment Variables

| Variable | Required | Type   | Description |
|----------|----------|--------|-------------|
| MC_API   | Yes      | secret | Macrocosmos API key. Required for all API requests. Get your free key at https://app.macrocosmos.ai/account?tab=api-keys |

Setup: The MC_API key must be set as an environment variable. It is passed as a Bearer token in the Authorization header for REST calls, or provided directly to the Python SDK client.
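
As a minimal sketch of that setup (assuming the key is exported in the shell as `MC_API`; the helper name `build_headers` is illustrative, not part of the skill), the key can be read from the environment and attached as a Bearer token:

```python
import os

def build_headers() -> dict:
    """Read the MC_API key from the environment and build REST headers."""
    api_key = os.environ.get("MC_API")
    if not api_key:
        raise RuntimeError("MC_API environment variable is not set")
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
```

Failing fast when the variable is missing avoids sending unauthenticated requests that would surface later as 401 errors.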


API Endpoint

POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData

Headers

Content-Type: application/json
Authorization: Bearer <YOUR_MC_API_KEY>

Request Format

{
  "source": "X",
  "usernames": ["@elonmusk"],
  "keywords": ["AI", "bittensor"],
  "start_date": "2026-01-01",
  "end_date": "2026-02-10",
  "limit": 10,
  "keyword_mode": "any"
}

Parameters

| Parameter    | Type   | Required | Description |
|--------------|--------|----------|-------------|
| source       | string | Yes      | "X" or "REDDIT" (case-sensitive) |
| usernames    | array  | No       | Up to 5 usernames. @ optional. X only (not available for Reddit) |
| keywords     | array  | No       | Up to 5 keywords/hashtags. For Reddit: use subreddit format "r/subreddit" |
| start_date   | string | No       | YYYY-MM-DD or ISO format. Defaults to 24h ago |
| end_date     | string | No       | YYYY-MM-DD or ISO format. Defaults to now |
| limit        | int    | No       | 1-1000 results. Default: 10 |
| keyword_mode | string | No       | "any" (default) matches ANY keyword; "all" requires ALL keywords |
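
The constraints above can be checked client-side before spending a request. This is a sketch under the documented limits only (the `validate_payload` helper is not part of the skill or SDK):

```python
def validate_payload(payload: dict) -> dict:
    """Check a request body against the documented parameter limits.

    Raises ValueError on a violation; returns the payload unchanged otherwise.
    """
    if payload.get("source") not in ("X", "REDDIT"):
        raise ValueError('source must be "X" or "REDDIT" (case-sensitive)')
    if len(payload.get("usernames", [])) > 5:
        raise ValueError("at most 5 usernames are allowed")
    if payload.get("usernames") and payload["source"] == "REDDIT":
        raise ValueError("usernames are X-only, not available for Reddit")
    if len(payload.get("keywords", [])) > 5:
        raise ValueError("at most 5 keywords are allowed")
    if not 1 <= payload.get("limit", 10) <= 1000:
        raise ValueError("limit must be between 1 and 1000")
    if payload.get("keyword_mode", "any") not in ("any", "all"):
        raise ValueError('keyword_mode must be "any" or "all"')
    return payload
```

Validating locally turns a would-be server error or empty result into an immediate, descriptive exception.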

Response Format

{
  "data": [
    {
      "datetime": "2026-02-10T17:30:58Z",
      "source": "x",
      "text": "Tweet content here",
      "uri": "https://x.com/username/status/123456",
      "user": {
        "username": "example_user",
        "display_name": "Example User",
        "followers_count": 1500,
        "following_count": 300,
        "user_description": "Bio text",
        "user_blue_verified": true,
        "profile_image_url": "https://pbs.twimg.com/..."
      },
      "tweet": {
        "id": "123456",
        "like_count": 42,
        "retweet_count": 10,
        "reply_count": 5,
        "quote_count": 2,
        "view_count": 5000,
        "bookmark_count": 3,
        "hashtags": ["#AI", "#bittensor"],
        "language": "en",
        "is_reply": false,
        "is_quote": false,
        "conversation_id": "123456"
      }
    }
  ]
}
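
Given a response in the shape above, results can be ordered by engagement. The scoring below is an illustrative sum of likes, retweets, and replies, not a metric defined by the API:

```python
def rank_by_engagement(response: dict) -> list:
    """Sort response items by a simple engagement score, highest first."""
    def score(item: dict) -> int:
        t = item.get("tweet", {})
        return (t.get("like_count", 0)
                + t.get("retweet_count", 0)
                + t.get("reply_count", 0))
    return sorted(response.get("data", []), key=score, reverse=True)
```

Using `.get(..., 0)` keeps the ranking robust when a miner omits a metric field.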

curl Examples

1. Keyword Search on X

curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "X",
    "keywords": ["bittensor"],
    "start_date": "2026-01-01",
    "limit": 10
  }'

2. Fetch Tweets from a Specific User

curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "X",
    "usernames": ["@MacrocosmosAI"],
    "start_date": "2026-01-01",
    "limit": 10
  }'

3. Multi-Keyword AND Search

curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "X",
    "keywords": ["chutes", "bittensor"],
    "keyword_mode": "all",
    "start_date": "2026-01-01",
    "limit": 20
  }'

4. Reddit Search

curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "REDDIT",
    "keywords": ["r/MachineLearning", "transformers"],
    "start_date": "2026-02-01",
    "limit": 50
  }'

5. User + Keyword Filter

curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "X",
    "usernames": ["@opentensor"],
    "keywords": ["subnet"],
    "start_date": "2026-01-01",
    "limit": 20
  }'

Python Examples

Using the macrocosmos SDK

import asyncio
import macrocosmos as mc

async def search_tweets():
    client = mc.AsyncSn13Client(api_key="YOUR_API_KEY")

    response = await client.sn13.OnDemandData(
        source="X",
        keywords=["bittensor"],
        usernames=[],
        start_date="2026-01-01",
        end_date=None,
        limit=10,
        keyword_mode="any",
    )

    # Normalize the SDK response to a plain dict; fall through if it is one already
    data = response.model_dump() if hasattr(response, "model_dump") else response

    for tweet in data["data"]:
        print(f"@{tweet['user']['username']}: {tweet['text'][:100]}")
        print(f"  Likes: {tweet['tweet']['like_count']} | Views: {tweet['tweet']['view_count']}")

asyncio.run(search_tweets())

Using requests (REST)

import requests

url = "https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY"
}
payload = {
    "source": "X",
    "keywords": ["bittensor"],
    "start_date": "2026-01-01",
    "limit": 10
}

response = requests.post(url, json=payload, headers=headers, timeout=30)
response.raise_for_status()  # surface 401/500 errors early
data = response.json()

for tweet in data["data"]:
    print(f"@{tweet['user']['username']}: {tweet['text'][:100]}")

Tips & Known Behaviors

What works reliably

  • High-volume keyword searches: Popular terms like "bittensor", "AI", "iran", "lfg" return fast
  • Wider date ranges: Setting start_date further back (e.g., weeks/months) improves results
  • keyword_mode: "all": Great for finding intersection of two topics (e.g., "chutes" AND "bittensor")

What can be flaky

  • Username-only queries: Can timeout (DEADLINE_EXCEEDED). Adding start_date far back helps
  • Niche/low-volume keywords: Very specific terms may timeout if miners don't have data indexed
  • No start_date: Defaults to last 24h which can miss data; set explicitly for best results

Best practices for LLM agents

  1. Always set start_date — don't rely on the 24h default. Use at least 7 days back for user queries
  2. Prefer keywords over usernames — keyword searches are more reliable
  3. For username queries, always include start_date set weeks/months back
  4. Use keyword_mode: "all" when combining a topic with a subtopic (e.g., "bittensor" + "chutes")
  5. Handle timeouts gracefully — if a query times out, retry with broader date range or switch to keyword search
  6. Parse engagement metrics: view_count, like_count, and retweet_count help rank relevance
  7. Check is_reply and is_quote — filter for original tweets vs replies depending on use case

Gravity API (Large-Scale Collection)

For datasets larger than 1000 results, use the Gravity endpoints:

Create Task

POST /gravity.v1.GravityService/CreateGravityTask
{
  "gravity_tasks": [
    {"platform": "x", "topic": "#bittensor", "keyword": "dTAO"}
  ],
  "name": "Bittensor dTAO Collection"
}

Note: X topics MUST start with # or $. Reddit topics use subreddit format.

Check Status

POST /gravity.v1.GravityService/GetGravityTasks
{
  "gravity_task_id": "multicrawler-xxxx-xxxx",
  "include_crawlers": true
}

Build Dataset

POST /gravity.v1.GravityService/BuildDataset
{
  "crawler_id": "crawler-0-multicrawler-xxxx",
  "max_rows": 10000
}

Warning: Building stops the crawler permanently.

Get Dataset Download

POST /gravity.v1.GravityService/GetDataset
{
  "dataset_id": "dataset-xxxx-xxxx"
}

Returns Parquet file download URLs when complete.


Workflow Summary

Quick Query (< 1000 results):
  OnDemandData → instant results

Large Collection (7-day crawl):
  CreateGravityTask → GetGravityTasks (monitor) → BuildDataset → GetDataset (download)
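
The large-collection flow above can be sketched as one function. The four endpoint paths come from this document; the response field names `status`, `crawler_ids`, and `files` are assumptions to verify against real responses (only `gravity_task_id`, `crawler_id`, and `dataset_id` appear in the examples). The HTTP layer is injected as `post` so auth and base URL stay outside the sketch:

```python
import time

def run_gravity_workflow(post, topic: str = "#bittensor", poll_seconds: int = 30):
    """Drive the four Gravity endpoints in order and return download URLs.

    `post` is a callable (path, json_body) -> dict response.
    """
    task = post("/gravity.v1.GravityService/CreateGravityTask",
                {"gravity_tasks": [{"platform": "x", "topic": topic}],
                 "name": "Example Collection"})
    task_id = task["gravity_task_id"]

    # Poll until the crawl is finished; field name "status" is a guess.
    while True:
        state = post("/gravity.v1.GravityService/GetGravityTasks",
                     {"gravity_task_id": task_id, "include_crawlers": True})
        if state.get("status") == "Completed":
            break
        time.sleep(poll_seconds)

    # Building stops the crawler permanently, so do it only once.
    dataset = post("/gravity.v1.GravityService/BuildDataset",
                   {"crawler_id": state["crawler_ids"][0], "max_rows": 10000})
    result = post("/gravity.v1.GravityService/GetDataset",
                  {"dataset_id": dataset["dataset_id"]})
    return result.get("files", [])
```

Injecting `post` also makes the workflow easy to dry-run against canned responses before pointing it at the live endpoint.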

Error Reference

| Error | Cause | Fix |
|-------|-------|-----|
| 401 Unauthorized | Missing or invalid API key | Check the Authorization: Bearer header |
| 500 Internal Server Error | Server-side issue (often auth via gRPC) | Verify API key, retry |
| DEADLINE_EXCEEDED | Query timeout (miners can't fulfill the request) | Use a broader date range, switch to keyword search |
| Empty data array | No matching results | Broaden search terms or date range |
