Make and Model Recognition

v1.0.1

Detect the largest vehicle from an image using TrafficEye car-box detection, run make and model recognition for that vehicle, and return all license plates attached to that road user.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for eyedea-ai/make-and-model-recognition.

Prompt preview: Install & Setup
Install the skill "Make and Model Recognition" (eyedea-ai/make-and-model-recognition) from ClawHub.
Skill page: https://clawhub.ai/eyedea-ai/make-and-model-recognition
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: TRAFFICEYE_API_KEY
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install make-and-model-recognition

ClawHub CLI


npx clawhub@latest install make-and-model-recognition
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (make & model + plates from an image) match the code and SKILL.md. The only required runtime items are a Python interpreter and a TrafficEye API key, which are proportionate for a client that uploads images to an external recognition API.
Instruction Scope
Runtime instructions and the script focus on validating a local image path, reading that image, uploading it to the configured TrafficEye endpoint, and parsing the JSON response to pick the largest boxed road user. The instructions do not request access to unrelated files, credentials, or external endpoints beyond the API host (api URL is configurable).
Install Mechanism
No install spec; this is essentially an instruction+script skill that runs with the system Python and standard library. Nothing is downloaded or written to disk by an installer.
Credentials
Only TRAFFICEYE_API_KEY is required; other TRAFFICEYE_* variables are optional overrides (API URL, auth mode, field names, timeout). No unrelated credentials or system config paths are requested.
Persistence & Privilege
Skill is not always-enabled, does not modify other skills or global agent configuration, and does not request elevated or persistent system privileges.
Assessment
This skill will upload any image you give it to the TrafficEye service and requires your TRAFFICEYE_API_KEY. Before installing, confirm you trust trafficeye.ai for processing and storage of images (including license plates), check their data-retention/privacy policy, and avoid supplying sensitive images unless you are comfortable with that external processing. Keep your API key secret (do not paste it into chat). You can test behavior offline using the provided sample_response.json to validate selection logic without contacting the API. Autonomous invocation is permitted by default (normal for skills), so only enable the skill for agents you trust to run it.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

OS: Linux · macOS · Windows
Any bin: python3, python
Env: TRAFFICEYE_API_KEY
Primary env: TRAFFICEYE_API_KEY
Latest: vk976cdq6hza42y82ntnm8g5ssh83ewqy
Downloads: 153
Stars: 0
Versions: 2
Updated: 1 month ago
Version: v1.0.1
License: MIT-0
Platforms: Linux, macOS, Windows

TrafficEye Largest Road User Reader

Use this skill when the user wants the largest detected vehicle from an image, along with its make and model classification and every detected license plate belonging to that same road user.

What This Skill Does

  1. Accepts a local image path.
  2. Uploads the image to the TrafficEye recognition API.
  3. Sends a recognition request that asks for detection, OCR, and MMR with box preference by default.
  4. Parses the API response, including responses wrapped as { "status": ..., "data": ... }.
  5. Picks the largest detected road user by box.position area.
  6. Returns a wrapper object containing roadUser, box, plates, area, and source, preserving the full selected road-user payload.
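The area-based selection in steps 5 and 6 can be sketched roughly as follows. This is a minimal illustration, not the helper's actual code; the `width`/`height` fields inside `box.position` are an assumption and may differ from the real TrafficEye schema:

```python
def box_area(position):
    # Assumed position schema: {"width": ..., "height": ...}.
    # Adjust the field names if your TrafficEye responses differ.
    return position.get("width", 0) * position.get("height", 0)

def pick_largest_road_user(road_users):
    """Return (road_user, area) for the road user with the largest box area."""
    best, best_area = None, -1
    for ru in road_users:
        pos = ru.get("box", {}).get("position")
        if not pos:
            continue  # skip road users without a detection box
        area = box_area(pos)
        if area > best_area:
            best, best_area = ru, area
    return best, best_area
```

Note that ties go to the first road user encountered, and road users without a box are skipped entirely.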

Expected Input

  • A local image file path.
  • If the user supplied an attachment instead of a path, first resolve it to a local file path and then run the helper.

Default Runtime Assumptions

  • The API endpoint defaults to https://trafficeye.ai/recognition.
  • The default request payload is {"tasks":["DETECTION","OCR","MMR"],"requestedDetectionTypes":["BOX","PLATE"],"mmrPreference":"BOX"}.
  • The default API-key transport matches the TrafficEye public API example: header mode with header name apikey.
  • Auth and request fields remain configurable in case your deployment differs.

Environment Variables

  • TRAFFICEYE_API_KEY: required unless passed explicitly to the helper.
  • TRAFFICEYE_API_URL: optional, defaults to https://trafficeye.ai/recognition.
  • TRAFFICEYE_API_KEY_MODE: one of header, bearer, form, query. Default: header.
  • TRAFFICEYE_API_KEY_NAME: key name for header, form, or query mode. Default: apikey.
  • TRAFFICEYE_FILE_FIELD: multipart field for the image. Default: file.
  • TRAFFICEYE_REQUEST_FIELD: multipart field for the JSON request. Default: request.
  • TRAFFICEYE_REQUEST_JSON: JSON string to include as the request field. By default this is {"tasks":["DETECTION","OCR","MMR"],"requestedDetectionTypes":["BOX","PLATE"],"mmrPreference":"BOX"}.
  • TRAFFICEYE_TIMEOUT_S: optional timeout in seconds. Default: 30.

Only TRAFFICEYE_API_KEY is required for the default live API flow. The other variables are optional overrides.
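Resolving these variables with the documented defaults might look like the sketch below (the variable names and defaults come from the table above; the real helper's internals may differ):

```python
import json
import os

DEFAULT_REQUEST = {
    "tasks": ["DETECTION", "OCR", "MMR"],
    "requestedDetectionTypes": ["BOX", "PLATE"],
    "mmrPreference": "BOX",
}

def load_config():
    """Read TrafficEye settings from the environment, applying documented defaults."""
    api_key = os.environ.get("TRAFFICEYE_API_KEY")
    if not api_key:
        raise SystemExit("TRAFFICEYE_API_KEY is required for the live API flow")
    return {
        "api_key": api_key,
        "api_url": os.environ.get("TRAFFICEYE_API_URL", "https://trafficeye.ai/recognition"),
        "key_mode": os.environ.get("TRAFFICEYE_API_KEY_MODE", "header"),
        "key_name": os.environ.get("TRAFFICEYE_API_KEY_NAME", "apikey"),
        "file_field": os.environ.get("TRAFFICEYE_FILE_FIELD", "file"),
        "request_field": os.environ.get("TRAFFICEYE_REQUEST_FIELD", "request"),
        "request_json": os.environ.get("TRAFFICEYE_REQUEST_JSON", json.dumps(DEFAULT_REQUEST)),
        "timeout_s": float(os.environ.get("TRAFFICEYE_TIMEOUT_S", "30")),
    }
```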

How To Run

Set up your API key:

export TRAFFICEYE_API_KEY='YOUR_REAL_KEY'

Use the road-user helper:

python3 recognize_road_user.py /absolute/path/to/image.jpg

For structured output:

python3 recognize_road_user.py /absolute/path/to/image.jpg --format json

If the deployment expects Bearer auth:

TRAFFICEYE_API_KEY_MODE=bearer python3 recognize_road_user.py /absolute/path/to/image.jpg

If the deployment needs an explicit request payload:

TRAFFICEYE_REQUEST_JSON='{"tasks":["DETECTION","OCR","MMR"],"requestedDetectionTypes":["BOX","PLATE"],"mmrPreference":"BOX"}' python3 recognize_road_user.py /absolute/path/to/image.jpg --format json

Equivalent to the documented public API example:

curl -X POST \
  -H "Content-Type: multipart/form-data" \
  -H "apikey: YOUR_API_KEY_HERE" \
  -F "file=@image.jpg" \
  -F 'request={"tasks":["DETECTION","OCR","MMR"],"requestedDetectionTypes":["BOX","PLATE"],"mmrPreference":"BOX"}' \
  https://trafficeye.ai/recognition
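For reference, the same request can be made from the Python standard library. This is a hedged sketch mirroring the curl example rather than the helper's own implementation; since the stdlib has no multipart helper, the body is built by hand:

```python
import json
import urllib.request
import uuid

def build_multipart(fields, file_field, filename, file_bytes):
    """Build a multipart/form-data body by hand (the stdlib has no helper for this)."""
    boundary = uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        parts.append(
            f'--{boundary}\r\nContent-Disposition: form-data; name="{name}"\r\n\r\n{value}\r\n'.encode()
        )
    parts.append(
        f'--{boundary}\r\nContent-Disposition: form-data; name="{file_field}"; '
        f'filename="{filename}"\r\nContent-Type: application/octet-stream\r\n\r\n'.encode()
        + file_bytes + b"\r\n"
    )
    parts.append(f"--{boundary}--\r\n".encode())
    return boundary, b"".join(parts)

def recognize(image_path, api_key, url="https://trafficeye.ai/recognition", timeout=30):
    """POST an image to TrafficEye, mirroring the documented curl example."""
    request_json = json.dumps({
        "tasks": ["DETECTION", "OCR", "MMR"],
        "requestedDetectionTypes": ["BOX", "PLATE"],
        "mmrPreference": "BOX",
    })
    with open(image_path, "rb") as fh:
        image_bytes = fh.read()
    boundary, body = build_multipart({"request": request_json}, "file", "image.jpg", image_bytes)
    req = urllib.request.Request(
        url,
        data=body,
        headers={
            "apikey": api_key,
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())
```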

Agent Workflow

  1. Verify that the image path exists.
  2. Run python3 recognize_road_user.py <image-path> --format json.
  3. Present the full selected road-user payload to the user, especially box, mmr, and the complete plates array.
  4. If the selected road user has no plates, explain that the largest vehicle was found but no plates were attached to that road user.
  5. If authentication fails, ask the user which auth mode their deployment expects and retry with the matching environment variables.
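Steps 1 through 4 of the workflow above could be wrapped like this (a sketch only; it assumes `recognize_road_user.py` is in the current directory and prints JSON to stdout with `--format json`):

```python
import json
import os
import subprocess
import sys

def run_helper(image_path):
    """Verify the path, run the helper, and parse its JSON output (workflow steps 1-4)."""
    if not os.path.isfile(image_path):
        raise FileNotFoundError(f"image path does not exist: {image_path}")
    proc = subprocess.run(
        [sys.executable, "recognize_road_user.py", image_path, "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    result = json.loads(proc.stdout)
    if not result.get("plates"):
        print("Largest vehicle found, but no plates were attached to that road user.")
    return result
```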

Offline Validation

You can validate the selection logic without calling the API:

python3 recognize_road_user.py --response-json-file examples/sample_response.json --format json

Output Shape

The helper prints JSON with this top-level structure:

{
  "roadUser": {"box": {}, "plates": [], "mmr": {}},
  "box": {},
  "plates": [],
  "area": 0,
  "source": {
    "combinationIndex": 0,
    "roadUserIndex": 0,
    "path": "combinations[0].roadUsers[0]"
  }
}
  • roadUser is the original selected road-user payload from TrafficEye.
  • box repeats roadUser.box for convenience.
  • plates repeats roadUser.plates for convenience and may be empty.
  • area is the computed rectangle area used for winner selection.
  • source identifies where the selected road user came from in the API response.
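Given that shape, a consumer might pull out the headline fields as sketched below. The top-level keys follow the documented structure, but the `text` field on each plate entry is an assumption about the TrafficEye payload:

```python
def summarize(result):
    """Extract the headline fields from the helper's top-level output."""
    plates = result.get("plates") or []
    return {
        "area": result.get("area"),
        "plate_texts": [p.get("text") for p in plates],  # "text" is an assumed field name
        "found_at": result.get("source", {}).get("path"),
    }
```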

Notes

  • The helper intentionally chooses the largest boxed vehicle by geometric area, not by detection confidence.
  • The response parser first checks data.combinations[].roadUsers[], then combinations[].roadUsers[], then roadUsers[], and finally nested road-user payloads discovered recursively.
  • The default request and auth header mirror the public example at https://www.trafficeye.ai/api.
  • The selected result now includes the original road-user payload from the API so mmr, box, all plates, and their scores are preserved.
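The parser's fallback order from the second note can be sketched as follows. This simplified version covers the first three locations only and omits the recursive discovery step the real parser performs last:

```python
def extract_road_users(response):
    """Return road users using the documented fallback order (recursion step omitted)."""
    # 1. data.combinations[].roadUsers[] -- handles {"status": ..., "data": ...} wrappers.
    # 2. combinations[].roadUsers[] at the top level.
    data = response.get("data", {})
    for combos in (data.get("combinations"), response.get("combinations")):
        if combos:
            users = [ru for combo in combos for ru in combo.get("roadUsers", [])]
            if users:
                return users
    # 3. top-level roadUsers[].
    return response.get("roadUsers", [])
```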
