Google BigQuery

v1.0.0

Access Google BigQuery to run SQL queries, manage datasets and tables, and perform large-scale data analysis with OAuth authentication via the Maton API.

Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name/description (Google BigQuery via managed OAuth) aligns with what the SKILL.md instructs: calls to gateway.maton.ai and Maton control endpoints to run BigQuery API requests. Requesting a MATON_API_KEY is consistent with a managed-OAuth gateway design.
Instruction Scope
Instructions only show network calls to Maton endpoints (gateway.maton.ai, ctrl.maton.ai, connect.maton.ai) and using the MATON_API_KEY environment variable. They do not instruct the agent to read unrelated files or other environment variables. Note: the instructions require you to open an OAuth URL in a browser to grant Google access via Maton; that means Maton will broker OAuth tokens on your behalf.
Install Mechanism
This is an instruction-only skill with no install spec and no code files to write to disk, which is the lowest-risk install model.
Credentials
Only MATON_API_KEY is required, which is proportionate to using a Maton gateway. However, providing MATON_API_KEY delegates access to Maton — Maton will act as the intermediary with Google and may hold OAuth tokens for your Google resources.
Persistence & Privilege
The skill does not request always:true and has no install-time persistence. It is user-invocable and may be invoked autonomously (platform default), which is expected for skills.
Assessment
This skill proxies BigQuery requests through Maton and requires a MATON_API_KEY — you are giving Maton the ability to act on your behalf with Google via OAuth. Before installing, verify you trust maton.ai (review their site, privacy/terms, and security posture). If you prefer not to route BigQuery traffic through a third party, use a skill that accepts Google credentials/service account keys directly. Keep your MATON_API_KEY secret, rotate it if compromised, and review active Maton connections in your Maton control panel (the SKILL.md points to ctrl.maton.ai/connect URLs). Also note the registry entry lacks a public homepage/source; if provenance is important, ask the publisher for more information before use.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🧠 Clawdis
Env: MATON_API_KEY
Latest: vk975w676bha2jtewmkg5hjzfdh815hgc
615 downloads · 0 stars · 1 version
Updated 1mo ago
v1.0.0 · MIT-0

Google BigQuery

Access the Google BigQuery API with managed OAuth authentication. Run SQL queries, manage datasets and tables, and analyze data at scale.

Quick Start

# Run a simple query
python <<'EOF'
import urllib.request, os, json
data = json.dumps({'query': 'SELECT 1 as test_value', 'useLegacySql': False}).encode()
req = urllib.request.Request('https://gateway.maton.ai/google-bigquery/bigquery/v2/projects/{projectId}/queries', data=data, method='POST')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Content-Type', 'application/json')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF

Base URL

https://gateway.maton.ai/google-bigquery/bigquery/v2/{resource-path}

Replace {resource-path} with the actual BigQuery API endpoint path. The gateway proxies requests to bigquery.googleapis.com and automatically injects your OAuth token.
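For scripts that hit several endpoints, it can help to derive gateway URLs from the resource path instead of pasting full URLs; a minimal sketch (the `gateway_url` helper is our own shorthand, not part of the skill):

```python
BASE = 'https://gateway.maton.ai/google-bigquery/bigquery/v2'

def gateway_url(resource_path):
    # Join the fixed gateway prefix with a BigQuery v2 resource path,
    # tolerating an optional leading slash.
    return f"{BASE}/{resource_path.lstrip('/')}"

print(gateway_url('projects/my-project/datasets'))
# https://gateway.maton.ai/google-bigquery/bigquery/v2/projects/my-project/datasets
```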

Authentication

All requests require the Maton API key in the Authorization header:

Authorization: Bearer $MATON_API_KEY

Environment Variable: Set your API key as MATON_API_KEY:

export MATON_API_KEY="YOUR_API_KEY"

Getting Your API Key

  1. Sign in or create an account at maton.ai
  2. Go to maton.ai/settings
  3. Copy your API key

Connection Management

Manage your Google BigQuery OAuth connections at https://ctrl.maton.ai.

List Connections

python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://ctrl.maton.ai/connections?app=google-bigquery&status=ACTIVE')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF

Create Connection

python <<'EOF'
import urllib.request, os, json
data = json.dumps({'app': 'google-bigquery'}).encode()
req = urllib.request.Request('https://ctrl.maton.ai/connections', data=data, method='POST')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Content-Type', 'application/json')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF

Get Connection

python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://ctrl.maton.ai/connections/{connection_id}')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF

Response:

{
  "connection": {
    "connection_id": "c8463a31-e5b4-4e52-9a32-e78dcd7ba7b1",
    "status": "ACTIVE",
    "creation_time": "2026-02-14T09:02:02.780520Z",
    "last_updated_time": "2026-02-14T09:02:19.977436Z",
    "url": "https://connect.maton.ai/?session_token=...",
    "app": "google-bigquery",
    "metadata": {}
  }
}

Open the returned url in a browser to complete OAuth authorization.

Delete Connection

python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://ctrl.maton.ai/connections/{connection_id}', method='DELETE')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF

Specifying Connection

If you have multiple Google BigQuery connections, specify which one to use with the Maton-Connection header:

python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://gateway.maton.ai/google-bigquery/bigquery/v2/projects')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Maton-Connection', 'c8463a31-e5b4-4e52-9a32-e78dcd7ba7b1')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF

If omitted, the gateway uses the default (oldest) active connection.
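A small helper (our own convenience, not part of the skill) can assemble the standard headers, including the optional Maton-Connection pin; it assumes MATON_API_KEY is set in the environment:

```python
import os

def maton_headers(connection_id=None):
    # Standard headers for gateway/ctrl requests; Maton-Connection pins a
    # specific OAuth connection, otherwise the gateway picks the default.
    headers = {
        'Authorization': f"Bearer {os.environ['MATON_API_KEY']}",
        'Content-Type': 'application/json',
    }
    if connection_id:
        headers['Maton-Connection'] = connection_id
    return headers
```

Pass the returned dict wherever the examples above add headers one by one.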

API Reference

Projects

List Projects

List all projects accessible to the authenticated user.

GET /google-bigquery/bigquery/v2/projects

Response:

{
  "kind": "bigquery#projectList",
  "projects": [
    {
      "id": "my-project-123",
      "numericId": "822245862053",
      "projectReference": {
        "projectId": "my-project-123"
      },
      "friendlyName": "My Project"
    }
  ],
  "totalItems": 1
}

Datasets

List Datasets

GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets

Query Parameters:

  • maxResults - Maximum number of results to return
  • pageToken - Token for pagination
  • all - Include hidden datasets if true
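These parameters can be appended with `urllib.parse.urlencode` rather than hand-built query strings; a sketch using a placeholder project ID:

```python
from urllib.parse import urlencode

# Any of the list parameters above can go in params.
params = {'maxResults': 50, 'all': 'true'}
url = ('https://gateway.maton.ai/google-bigquery/bigquery/v2'
       '/projects/my-project/datasets?' + urlencode(params))
print(url)
```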

Get Dataset

GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}

Create Dataset

POST /google-bigquery/bigquery/v2/projects/{projectId}/datasets
Content-Type: application/json

{
  "datasetReference": {
    "datasetId": "my_dataset",
    "projectId": "{projectId}"
  },
  "description": "My dataset description",
  "location": "US"
}

Response:

{
  "kind": "bigquery#dataset",
  "id": "my-project:my_dataset",
  "datasetReference": {
    "datasetId": "my_dataset",
    "projectId": "my-project"
  },
  "location": "US",
  "creationTime": "1771059780773"
}

Update Dataset (PATCH)

PATCH /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}
Content-Type: application/json

{
  "description": "Updated description"
}

Delete Dataset

DELETE /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}

Query Parameters:

  • deleteContents - If true, delete all tables in the dataset (default: false)

Tables

List Tables

GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables

Query Parameters:

  • maxResults - Maximum number of results to return
  • pageToken - Token for pagination

Get Table

GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables/{tableId}

Create Table

POST /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables
Content-Type: application/json

{
  "tableReference": {
    "projectId": "{projectId}",
    "datasetId": "{datasetId}",
    "tableId": "my_table"
  },
  "schema": {
    "fields": [
      {"name": "id", "type": "INTEGER", "mode": "REQUIRED"},
      {"name": "name", "type": "STRING", "mode": "NULLABLE"},
      {"name": "created_at", "type": "TIMESTAMP", "mode": "NULLABLE"}
    ]
  }
}

Response:

{
  "kind": "bigquery#table",
  "id": "my-project:my_dataset.my_table",
  "tableReference": {
    "projectId": "my-project",
    "datasetId": "my_dataset",
    "tableId": "my_table"
  },
  "schema": {
    "fields": [
      {"name": "id", "type": "INTEGER", "mode": "REQUIRED"},
      {"name": "name", "type": "STRING", "mode": "NULLABLE"},
      {"name": "created_at", "type": "TIMESTAMP", "mode": "NULLABLE"}
    ]
  },
  "numRows": "0",
  "type": "TABLE"
}

Update Table (PATCH)

PATCH /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables/{tableId}
Content-Type: application/json

{
  "description": "Updated table description"
}

Delete Table

DELETE /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables/{tableId}

Table Data

List Table Data

Retrieve rows from a table.

GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables/{tableId}/data

Query Parameters:

  • maxResults - Maximum number of results to return
  • pageToken - Token for pagination
  • startIndex - Zero-based index of the starting row

Response:

{
  "kind": "bigquery#tableDataList",
  "totalRows": "100",
  "rows": [
    {
      "f": [
        {"v": "1"},
        {"v": "Alice"},
        {"v": "1.7710597807E9"}
      ]
    }
  ],
  "pageToken": "..."
}
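The `f`/`v` row wrapper is easier to work with once zipped against the schema's field names; a small helper (our own, not part of the skill):

```python
def rows_to_dicts(schema_fields, rows):
    # Pair each row's {'v': ...} cells with the schema's field names,
    # turning BigQuery's f/v structure into plain dicts.
    names = [f['name'] for f in schema_fields]
    return [dict(zip(names, (cell['v'] for cell in row['f']))) for row in rows]

schema = [{'name': 'id'}, {'name': 'name'}]
rows = [{'f': [{'v': '1'}, {'v': 'Alice'}]}]
print(rows_to_dicts(schema, rows))  # [{'id': '1', 'name': 'Alice'}]
```

Note that values arrive as strings; cast them according to the schema types if needed.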

Insert Table Data (Streaming)

Insert rows into a table using streaming insert. Note: Requires BigQuery paid tier.

POST /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables/{tableId}/insertAll
Content-Type: application/json

{
  "rows": [
    {"json": {"id": 1, "name": "Alice"}},
    {"json": {"id": 2, "name": "Bob"}}
  ]
}
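The `rows` envelope can be generated from plain dicts; a sketch (the helper name is ours; `insertId` is BigQuery's optional best-effort de-duplication key):

```python
def insert_all_payload(records, insert_ids=None):
    # Wrap each record in the {'json': ...} envelope insertAll expects;
    # optional insertId values enable best-effort de-duplication.
    rows = []
    for i, record in enumerate(records):
        row = {'json': record}
        if insert_ids:
            row['insertId'] = insert_ids[i]
        rows.append(row)
    return {'rows': rows}

print(insert_all_payload([{'id': 1, 'name': 'Alice'}, {'id': 2, 'name': 'Bob'}]))
```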

Jobs and Queries

Run Query (Synchronous)

Execute a SQL query and return results directly.

POST /google-bigquery/bigquery/v2/projects/{projectId}/queries
Content-Type: application/json

{
  "query": "SELECT * FROM `my_dataset.my_table` LIMIT 10",
  "useLegacySql": false,
  "maxResults": 100
}

Response:

{
  "kind": "bigquery#queryResponse",
  "schema": {
    "fields": [
      {"name": "id", "type": "INTEGER"},
      {"name": "name", "type": "STRING"}
    ]
  },
  "jobReference": {
    "projectId": "my-project",
    "jobId": "job_abc123",
    "location": "US"
  },
  "totalRows": "2",
  "rows": [
    {"f": [{"v": "1"}, {"v": "Alice"}]},
    {"f": [{"v": "2"}, {"v": "Bob"}]}
  ],
  "jobComplete": true,
  "totalBytesProcessed": "1024"
}

Query Parameters:

  • useLegacySql - Use legacy SQL syntax; set to false for GoogleSQL (standard SQL)
  • maxResults - Maximum results per page
  • timeoutMs - Query timeout in milliseconds

Create Job (Asynchronous)

Submit a job for asynchronous execution.

POST /google-bigquery/bigquery/v2/projects/{projectId}/jobs
Content-Type: application/json

{
  "configuration": {
    "query": {
      "query": "SELECT * FROM `my_dataset.my_table`",
      "useLegacySql": false,
      "destinationTable": {
        "projectId": "{projectId}",
        "datasetId": "{datasetId}",
        "tableId": "results_table"
      },
      "writeDisposition": "WRITE_TRUNCATE"
    }
  }
}
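After submitting a job, poll the Get Job endpoint until `status.state` reaches `DONE`; a sketch with the HTTP call injected so the loop can be shown without a live connection (`fetch_job` stands in for an authenticated GET to the jobs endpoint):

```python
import time

def wait_for_job(fetch_job, job_id, poll_seconds=2, max_polls=30):
    # fetch_job(job_id) should GET .../jobs/{jobId} and return the parsed JSON.
    for _ in range(max_polls):
        job = fetch_job(job_id)
        if job.get('status', {}).get('state') == 'DONE':
            # A DONE job may still have failed; errorResult carries the reason.
            if job['status'].get('errorResult'):
                raise RuntimeError(job['status']['errorResult'])
            return job
        time.sleep(poll_seconds)
    raise TimeoutError(f'job {job_id} did not finish in time')
```

Once the job is done, fetch its rows via Get Query Results below.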

List Jobs

GET /google-bigquery/bigquery/v2/projects/{projectId}/jobs

Query Parameters:

  • maxResults - Maximum number of results to return
  • pageToken - Token for pagination
  • stateFilter - Filter by job state: done, pending, running
  • projection - full or minimal

Response:

{
  "kind": "bigquery#jobList",
  "jobs": [
    {
      "id": "my-project:US.job_abc123",
      "jobReference": {
        "projectId": "my-project",
        "jobId": "job_abc123",
        "location": "US"
      },
      "state": "DONE",
      "statistics": {
        "creationTime": "1771059781456",
        "startTime": "1771059782203",
        "endTime": "1771059782324"
      }
    }
  ]
}

Get Job

GET /google-bigquery/bigquery/v2/projects/{projectId}/jobs/{jobId}

Query Parameters:

  • location - Job location (e.g., "US", "EU")

Get Query Results

Retrieve results from a completed query job.

GET /google-bigquery/bigquery/v2/projects/{projectId}/queries/{jobId}

Query Parameters:

  • location - Job location
  • maxResults - Maximum results per page
  • pageToken - Token for pagination
  • startIndex - Zero-based starting row

Cancel Job

POST /google-bigquery/bigquery/v2/projects/{projectId}/jobs/{jobId}/cancel

Query Parameters:

  • location - Job location

Pagination

BigQuery uses token-based pagination. List responses include a pageToken when more results exist:

GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets?maxResults=10&pageToken={token}

Response:

{
  "datasets": [...],
  "nextPageToken": "eyJvZmZzZXQiOjEwfQ=="
}

Use the nextPageToken value as pageToken in subsequent requests.
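The same token loop works for any of the list endpoints above; a sketch with the page fetch injected (`fetch_page` stands in for an authenticated GET with a `pageToken` parameter):

```python
def list_all(fetch_page, item_key):
    # fetch_page(page_token) returns one parsed list response;
    # follow nextPageToken until the API stops returning one.
    items, token = [], None
    while True:
        page = fetch_page(token)
        items.extend(page.get(item_key, []))
        token = page.get('nextPageToken')
        if not token:
            return items
```

For example, `list_all(fetch_page, 'datasets')` collects every dataset across pages.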

Code Examples

JavaScript

// Run a query
const response = await fetch(
  'https://gateway.maton.ai/google-bigquery/bigquery/v2/projects/my-project/queries',
  {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.MATON_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      query: 'SELECT * FROM `my_dataset.my_table` LIMIT 10',
      useLegacySql: false
    })
  }
);
const data = await response.json();
console.log(data.rows);

Python

import os
import requests

# Run a query
response = requests.post(
    'https://gateway.maton.ai/google-bigquery/bigquery/v2/projects/my-project/queries',
    headers={'Authorization': f'Bearer {os.environ["MATON_API_KEY"]}'},
    json={
        'query': 'SELECT * FROM `my_dataset.my_table` LIMIT 10',
        'useLegacySql': False
    }
)
data = response.json()
for row in data.get('rows', []):
    print([field['v'] for field in row['f']])

Schema Field Types

Common BigQuery data types for table schemas:

Type        Description
STRING      Variable-length character data
INTEGER     64-bit signed integer
FLOAT       64-bit IEEE floating point
BOOLEAN     True or false
TIMESTAMP   Absolute point in time
DATE        Calendar date
TIME        Time of day
DATETIME    Date and time
BYTES       Variable-length binary data
NUMERIC     Exact numeric value with 38 digits of precision
BIGNUMERIC  Exact numeric value with 76+ digits of precision
GEOGRAPHY   Geographic data
JSON        JSON data
RECORD      Nested fields (also called STRUCT)

Field Modes:

  • NULLABLE - Field can be null (default)
  • REQUIRED - Field cannot be null
  • REPEATED - Field is an array
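REPEATED and RECORD combine to model arrays of structs; an illustrative schema fragment (the field names are hypothetical, not from the skill):

```python
import json

# A schema with a repeated nested record: each row carries an array of
# {key, value} structs under 'tags'.
schema = {
    'fields': [
        {'name': 'id', 'type': 'INTEGER', 'mode': 'REQUIRED'},
        {'name': 'tags', 'type': 'RECORD', 'mode': 'REPEATED',
         'fields': [
             {'name': 'key', 'type': 'STRING', 'mode': 'NULLABLE'},
             {'name': 'value', 'type': 'STRING', 'mode': 'NULLABLE'},
         ]},
    ]
}
print(json.dumps(schema, indent=2))
```

This dict drops straight into the `schema` field of a Create Table request body.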

Notes

  • Project IDs are typically in the format project-name or project-name-12345
  • Dataset IDs follow naming rules: letters, numbers, underscores (max 1024 characters)
  • Table IDs follow same naming rules as datasets
  • Job IDs are generated by BigQuery and include location prefix
  • Query results use f (fields) and v (value) structure
  • Streaming inserts require BigQuery paid tier (not available in free tier)
  • Use useLegacySql: false for GoogleSQL (standard SQL) syntax
  • IMPORTANT: When using curl commands, use curl -g when URLs contain brackets to disable glob parsing
  • IMPORTANT: When piping curl output to jq or other commands, environment variables like $MATON_API_KEY may not expand correctly in some shell environments

Error Handling

Status   Meaning
400      Missing Google BigQuery connection or invalid request
401      Invalid or missing Maton API key
403      Access denied (insufficient permissions or quota exceeded)
404      Resource not found (project, dataset, table, or job)
409      Resource already exists
429      Rate limited
4xx/5xx  Passthrough error from BigQuery API
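A caller using urllib can map these statuses to actionable hints; a sketch (the hint texts paraphrase the table above and are our own wording):

```python
import urllib.error

# Short remediation hints keyed by HTTP status code.
HINTS = {
    400: 'check the request body and that a google-bigquery connection exists',
    401: 'check MATON_API_KEY',
    403: 'check IAM permissions and quota',
    404: 'check project/dataset/table/job IDs',
    409: 'resource already exists',
    429: 'back off and retry',
}

def describe_http_error(err):
    # err is a urllib.error.HTTPError raised by urlopen on a non-2xx response.
    return f"{err.code}: {HINTS.get(err.code, 'see the BigQuery error payload')}"
```

Wrap `urlopen` calls in `try/except urllib.error.HTTPError` and log the result of `describe_http_error`.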

Troubleshooting: API Key Issues

  1. Check that the MATON_API_KEY environment variable is set:
echo $MATON_API_KEY
  2. Verify the API key is valid by listing connections:
python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://ctrl.maton.ai/connections')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF

Troubleshooting: Invalid App Name

  1. Ensure your URL path starts with google-bigquery. For example:
  • Correct: https://gateway.maton.ai/google-bigquery/bigquery/v2/projects
  • Incorrect: https://gateway.maton.ai/bigquery/v2/projects
