Install

openclaw skills install google-bigquery

Google BigQuery API integration with managed OAuth. Run SQL queries, manage datasets and tables, and analyze data at scale. Use this skill when users want to query BigQuery data, create or manage datasets/tables, run analytics jobs, or work with BigQuery resources. For other third-party apps, use the api-gateway skill (https://clawhub.ai/byungkyu/api-gateway).
# Run a simple query
python <<'EOF'
import urllib.request, os, json
data = json.dumps({'query': 'SELECT 1 as test_value', 'useLegacySql': False}).encode()
# substitute your project ID for {projectId} before running
req = urllib.request.Request('https://api.maton.ai/google-bigquery/bigquery/v2/projects/{projectId}/queries', data=data, method='POST')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Content-Type', 'application/json')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
https://api.maton.ai/google-bigquery/bigquery/v2/{resource-path}
Maton proxies requests to bigquery.googleapis.com and automatically injects your OAuth token.
All requests require the Maton API key in the Authorization header:
Authorization: Bearer $MATON_API_KEY
Environment Variable: Set your API key as MATON_API_KEY:
export MATON_API_KEY="YOUR_API_KEY"
Manage your Google BigQuery OAuth connections at https://api.maton.ai.
python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://api.maton.ai/connections?app=google-bigquery&status=ACTIVE')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
python <<'EOF'
import urllib.request, os, json
data = json.dumps({'app': 'google-bigquery'}).encode()
req = urllib.request.Request('https://api.maton.ai/connections', data=data, method='POST')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Content-Type', 'application/json')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://api.maton.ai/connections/{connection_id}')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Response:
{
"connection": {
"connection_id": "{connection_id}",
"status": "ACTIVE",
"creation_time": "2026-02-14T09:02:02.780520Z",
"last_updated_time": "2026-02-14T09:02:19.977436Z",
"url": "https://connect.maton.ai/?session_token=...",
"app": "google-bigquery",
"metadata": {}
}
}
Open the returned url in a browser to complete OAuth authorization.
python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://api.maton.ai/connections/{connection_id}', method='DELETE')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
If you have multiple Google BigQuery connections, specify which one to use with the Maton-Connection header:
python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://api.maton.ai/google-bigquery/bigquery/v2/projects')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Maton-Connection', '{connection_id}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
If you have multiple connections, always include this header to ensure requests go to the intended account.
List all projects accessible to the authenticated user.
GET /google-bigquery/bigquery/v2/projects
Response:
{
"kind": "bigquery#projectList",
"projects": [
{
"id": "my-project-123",
"numericId": "822245862053",
"projectReference": {
"projectId": "my-project-123"
},
"friendlyName": "My Project"
}
],
"totalItems": 1
}
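As a sketch of working with the response above, the project list can be reduced to plain project IDs. The helper names (extract_project_ids, list_projects) are illustrative, not part of the API:

```python
import json
import urllib.request

def extract_project_ids(project_list):
    """Pull projectId values out of a bigquery#projectList response."""
    return [p['projectReference']['projectId']
            for p in project_list.get('projects', [])]

def list_projects(api_key):
    """Fetch the project list through the Maton proxy."""
    req = urllib.request.Request(
        'https://api.maton.ai/google-bigquery/bigquery/v2/projects')
    req.add_header('Authorization', f'Bearer {api_key}')
    return json.load(urllib.request.urlopen(req))

# Usage (requires a valid key):
# ids = extract_project_ids(list_projects(os.environ['MATON_API_KEY']))
```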
GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets
Query Parameters:
maxResults - Maximum number of results to return
pageToken - Token for pagination
all - Include hidden datasets if true

GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}
POST /google-bigquery/bigquery/v2/projects/{projectId}/datasets
Content-Type: application/json
{
"datasetReference": {
"datasetId": "my_dataset",
"projectId": "{projectId}"
},
"description": "My dataset description",
"location": "US"
}
Response:
{
"kind": "bigquery#dataset",
"id": "my-project:my_dataset",
"datasetReference": {
"datasetId": "my_dataset",
"projectId": "my-project"
},
"location": "US",
"creationTime": "1771059780773"
}
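One possible way to wrap the create-dataset call above (dataset_body and create_dataset are illustrative helper names, not API names):

```python
import json
import urllib.request

def dataset_body(project_id, dataset_id, location='US', description=None):
    """Build the request body for the datasets insert endpoint."""
    body = {'datasetReference': {'projectId': project_id,
                                 'datasetId': dataset_id},
            'location': location}
    if description:
        body['description'] = description
    return body

def create_dataset(api_key, project_id, dataset_id, **kwargs):
    """POST the dataset through the Maton proxy."""
    data = json.dumps(dataset_body(project_id, dataset_id, **kwargs)).encode()
    url = (f'https://api.maton.ai/google-bigquery/bigquery/v2/'
           f'projects/{project_id}/datasets')
    req = urllib.request.Request(url, data=data, method='POST')
    req.add_header('Authorization', f'Bearer {api_key}')
    req.add_header('Content-Type', 'application/json')
    return json.load(urllib.request.urlopen(req))
```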
PATCH /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}
Content-Type: application/json
{
"description": "Updated description"
}
DELETE /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}
Query Parameters:
deleteContents - If true, delete all tables in the dataset (default: false)

GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables
Query Parameters:
maxResults - Maximum number of results to return
pageToken - Token for pagination

GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables/{tableId}
POST /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables
Content-Type: application/json
{
"tableReference": {
"projectId": "{projectId}",
"datasetId": "{datasetId}",
"tableId": "my_table"
},
"schema": {
"fields": [
{"name": "id", "type": "INTEGER", "mode": "REQUIRED"},
{"name": "name", "type": "STRING", "mode": "NULLABLE"},
{"name": "created_at", "type": "TIMESTAMP", "mode": "NULLABLE"}
]
}
}
Response:
{
"kind": "bigquery#table",
"id": "my-project:my_dataset.my_table",
"tableReference": {
"projectId": "my-project",
"datasetId": "my_dataset",
"tableId": "my_table"
},
"schema": {
"fields": [
{"name": "id", "type": "INTEGER", "mode": "REQUIRED"},
{"name": "name", "type": "STRING", "mode": "NULLABLE"},
{"name": "created_at", "type": "TIMESTAMP", "mode": "NULLABLE"}
]
},
"numRows": "0",
"type": "TABLE"
}
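The create-table request above can be sketched the same way, building the schema from (name, type, mode) tuples. table_body and create_table are illustrative helper names:

```python
import json
import urllib.request

def table_body(project_id, dataset_id, table_id, fields):
    """Build the tables insert body from (name, type, mode) tuples."""
    return {
        'tableReference': {'projectId': project_id,
                           'datasetId': dataset_id,
                           'tableId': table_id},
        'schema': {'fields': [{'name': n, 'type': t, 'mode': m}
                              for n, t, m in fields]},
    }

def create_table(api_key, project_id, dataset_id, table_id, fields):
    """POST the table definition through the Maton proxy."""
    data = json.dumps(table_body(project_id, dataset_id,
                                 table_id, fields)).encode()
    url = (f'https://api.maton.ai/google-bigquery/bigquery/v2/projects/'
           f'{project_id}/datasets/{dataset_id}/tables')
    req = urllib.request.Request(url, data=data, method='POST')
    req.add_header('Authorization', f'Bearer {api_key}')
    req.add_header('Content-Type', 'application/json')
    return json.load(urllib.request.urlopen(req))
```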
PATCH /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables/{tableId}
Content-Type: application/json
{
"description": "Updated table description"
}
DELETE /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables/{tableId}
Retrieve rows from a table.
GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables/{tableId}/data
Query Parameters:
maxResults - Maximum number of results to return
pageToken - Token for pagination
startIndex - Zero-based index of the starting row

Response:
{
"kind": "bigquery#tableDataList",
"totalRows": "100",
"rows": [
{
"f": [
{"v": "1"},
{"v": "Alice"},
{"v": "1.7710597807E9"}
]
}
],
"pageToken": "..."
}
Insert rows into a table using streaming insert. Note: Requires BigQuery paid tier.
POST /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables/{tableId}/insertAll
Content-Type: application/json
{
"rows": [
{"json": {"id": 1, "name": "Alice"}},
{"json": {"id": 2, "name": "Bob"}}
]
}
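A sketch of the envelope this endpoint expects: plain dicts are wrapped in {'json': ...} entries. Note that a 200 response can still carry per-row failures in insertErrors. insert_all_body is an illustrative helper name:

```python
def insert_all_body(rows):
    """Wrap plain row dicts in the {'json': ...} envelope insertAll expects."""
    return {'rows': [{'json': r} for r in rows]}

def insert_errors(response):
    """Return per-row failures, if any, from an insertAll response."""
    return response.get('insertErrors', [])

# Usage: POST json.dumps(insert_all_body(my_rows)) to the insertAll URL,
# then check insert_errors() on the parsed response even on HTTP 200.
```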
Execute a SQL query and return results directly.
POST /google-bigquery/bigquery/v2/projects/{projectId}/queries
Content-Type: application/json
{
"query": "SELECT * FROM `my_dataset.my_table` LIMIT 10",
"useLegacySql": false,
"maxResults": 100
}
Response:
{
"kind": "bigquery#queryResponse",
"schema": {
"fields": [
{"name": "id", "type": "INTEGER"},
{"name": "name", "type": "STRING"}
]
},
"jobReference": {
"projectId": "my-project",
"jobId": "job_abc123",
"location": "US"
},
"totalRows": "2",
"rows": [
{"f": [{"v": "1"}, {"v": "Alice"}]},
{"f": [{"v": "2"}, {"v": "Bob"}]}
],
"jobComplete": true,
"totalBytesProcessed": "1024"
}
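Query results arrive in the f (fields) / v (value) structure shown above, with one cell per schema field; tabledata.list responses use the same shape. A small decoder (rows_to_dicts is an illustrative name; note that values arrive as strings) can turn them into plain dicts:

```python
def rows_to_dicts(schema, rows):
    """Convert BigQuery f/v row structure into plain dicts keyed by field name."""
    names = [f['name'] for f in schema['fields']]
    return [dict(zip(names, (cell['v'] for cell in row['f'])))
            for row in rows]

# Usage with a parsed queries response:
# result = rows_to_dicts(data['schema'], data.get('rows', []))
```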
Query Parameters:
useLegacySql - Use legacy SQL syntax (default: false for GoogleSQL)
maxResults - Maximum results per page
timeoutMs - Query timeout in milliseconds

Submit a job for asynchronous execution.
POST /google-bigquery/bigquery/v2/projects/{projectId}/jobs
Content-Type: application/json
{
"configuration": {
"query": {
"query": "SELECT * FROM `my_dataset.my_table`",
"useLegacySql": false,
"destinationTable": {
"projectId": "{projectId}",
"datasetId": "{datasetId}",
"tableId": "results_table"
},
"writeDisposition": "WRITE_TRUNCATE"
}
}
}
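After submitting a job as above, its state can be polled via the jobs get endpoint until it reaches DONE. This is a sketch; job_state and wait_for_job are illustrative helper names:

```python
import json
import time
import urllib.request

def job_state(job):
    """Return the job state string, raising if the job finished with an error."""
    status = job.get('status', {})
    if status.get('state') == 'DONE' and status.get('errorResult'):
        raise RuntimeError(status['errorResult'].get('message', 'job failed'))
    return status.get('state')

def wait_for_job(api_key, project_id, job_id, location='US', poll_secs=2):
    """Poll jobs.get through the Maton proxy until the job is DONE."""
    url = (f'https://api.maton.ai/google-bigquery/bigquery/v2/projects/'
           f'{project_id}/jobs/{job_id}?location={location}')
    while True:
        req = urllib.request.Request(url)
        req.add_header('Authorization', f'Bearer {api_key}')
        job = json.load(urllib.request.urlopen(req))
        if job_state(job) == 'DONE':
            return job
        time.sleep(poll_secs)
```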
GET /google-bigquery/bigquery/v2/projects/{projectId}/jobs
Query Parameters:
maxResults - Maximum number of results to return
pageToken - Token for pagination
stateFilter - Filter by job state: done, pending, running
projection - full or minimal

Response:
{
"kind": "bigquery#jobList",
"jobs": [
{
"id": "my-project:US.job_abc123",
"jobReference": {
"projectId": "my-project",
"jobId": "job_abc123",
"location": "US"
},
"state": "DONE",
"statistics": {
"creationTime": "1771059781456",
"startTime": "1771059782203",
"endTime": "1771059782324"
}
}
]
}
GET /google-bigquery/bigquery/v2/projects/{projectId}/jobs/{jobId}
Query Parameters:
location - Job location (e.g., "US", "EU")

Retrieve results from a completed query job.
GET /google-bigquery/bigquery/v2/projects/{projectId}/queries/{jobId}
Query Parameters:
location - Job location
maxResults - Maximum results per page
pageToken - Token for pagination
startIndex - Zero-based starting row

POST /google-bigquery/bigquery/v2/projects/{projectId}/jobs/{jobId}/cancel
Query Parameters:
location - Job location

BigQuery uses token-based pagination. List responses include a nextPageToken when more results exist:
GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets?maxResults=10&pageToken={token}
Response:
{
"datasets": [...],
"nextPageToken": "eyJvZmZzZXQiOjEwfQ=="
}
Use the nextPageToken value as pageToken in subsequent requests.
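The token loop can be sketched generically. paginate and fetch_page are illustrative names; fetch_page is expected to perform one list request (e.g., a GET on .../datasets with the given pageToken) and return the parsed response:

```python
def paginate(fetch_page, key='datasets'):
    """Yield items from every page, following nextPageToken until exhausted.

    fetch_page(page_token) must return one parsed list response;
    pass page_token=None for the first page.
    """
    token = None
    while True:
        page = fetch_page(token)
        yield from page.get(key, [])
        token = page.get('nextPageToken')
        if not token:
            return
```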
// Run a query
const response = await fetch(
'https://api.maton.ai/google-bigquery/bigquery/v2/projects/my-project/queries',
{
method: 'POST',
headers: {
'Authorization': `Bearer ${process.env.MATON_API_KEY}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
query: 'SELECT * FROM `my_dataset.my_table` LIMIT 10',
useLegacySql: false
})
}
);
const data = await response.json();
console.log(data.rows);
import os
import requests
# Run a query
response = requests.post(
'https://api.maton.ai/google-bigquery/bigquery/v2/projects/my-project/queries',
headers={'Authorization': f'Bearer {os.environ["MATON_API_KEY"]}'},
json={
'query': 'SELECT * FROM `my_dataset.my_table` LIMIT 10',
'useLegacySql': False
}
)
data = response.json()
for row in data.get('rows', []):
print([field['v'] for field in row['f']])
Common BigQuery data types for table schemas:
| Type | Description |
|---|---|
| STRING | Variable-length character data |
| INTEGER | 64-bit signed integer |
| FLOAT | 64-bit IEEE floating point |
| BOOLEAN | True or false |
| TIMESTAMP | Absolute point in time |
| DATE | Calendar date |
| TIME | Time of day |
| DATETIME | Date and time |
| BYTES | Variable-length binary data |
| NUMERIC | Exact numeric value with 38 digits of precision |
| BIGNUMERIC | Exact numeric value with 76+ digits of precision |
| GEOGRAPHY | Geographic data |
| JSON | JSON data |
| RECORD | Nested fields (also called STRUCT) |
Field Modes:
NULLABLE - Field can be null (default)
REQUIRED - Field cannot be null
REPEATED - Field is an array

Notes:
Project IDs look like project-name or project-name-12345.
Query results return rows in the f (fields) and v (value) structure.
Set useLegacySql: false for GoogleSQL (standard SQL) syntax.
Use curl -g when URLs contain brackets to disable glob parsing.
When piping to jq or other commands, environment variables like $MATON_API_KEY may not expand correctly in some shell environments.

Error status codes:

| Status | Meaning |
|---|---|
| 400 | Missing Google BigQuery connection or invalid request |
| 401 | Invalid or missing Maton API key |
| 403 | Access denied (insufficient permissions or quota exceeded) |
| 404 | Resource not found (project, dataset, table, or job) |
| 409 | Resource already exists |
| 429 | Rate limited |
| 4xx/5xx | Passthrough error from BigQuery API |
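When a call fails with one of the statuses above, the response body usually carries a readable error message. A small wrapper (request_json is an illustrative name) can surface it instead of a bare HTTPError:

```python
import json
import urllib.error
import urllib.request

def request_json(req):
    """Perform a prepared urllib Request, raising with the error body on failure."""
    try:
        return json.load(urllib.request.urlopen(req))
    except urllib.error.HTTPError as e:
        # Error bodies are typically JSON containing a human-readable message.
        detail = e.read().decode('utf-8', errors='replace')
        raise RuntimeError(f'HTTP {e.code}: {detail}') from e
```

Any of the urllib examples above can route through request_json unchanged.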
Verify the MATON_API_KEY environment variable is set:

echo $MATON_API_KEY
python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://api.maton.ai/connections')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
Make sure the request path includes the app prefix google-bigquery. For example:

Correct: https://api.maton.ai/google-bigquery/bigquery/v2/projects
Incorrect: https://api.maton.ai/bigquery/v2/projects