AI Product Description Generator Free
v2.0.1
Generate product descriptions using free AI backends: Ollama (local, offline) or HuggingFace Inference API (free online). Use when creating e-commerce copy w...
Security Scan
Capability signals
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
OpenClaw verdict: Benign (high confidence)

Purpose & Capability
The name and description match the actual behavior: script.sh generates product descriptions using either a local Ollama server or the HuggingFace Inference API. No unrelated services or secrets are requested.
Instruction Scope
Runtime instructions and the script stay within scope (take a product name plus features, build a prompt, call Ollama or HuggingFace). One minor doc inconsistency: the top env block lists only HF_TOKEN, but the docs later mention OLLAMA_HOST and OLLAMA_MODEL, which the script does use.
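The in-scope flow described above (build a prompt from product name and features, POST it to one backend) can be sketched as follows. This is an illustrative reconstruction, not the bundled script: it uses only the standard library (the actual script calls requests), and the model name and prompt wording are assumptions.

```python
import json
import urllib.request


def build_prompt(product: str, features: list[str]) -> str:
    """Combine a product name and feature list into a single prompt string."""
    bullets = "\n".join(f"- {f}" for f in features)
    return (
        f"Write a short e-commerce product description for: {product}\n"
        f"Key features:\n{bullets}"
    )


def generate_ollama(prompt: str, host: str = "http://localhost:11434",
                    model: str = "llama3") -> str:
    """POST to Ollama's /api/generate endpoint and return the completion text."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read())["response"]
```

Calling generate_ollama requires a running Ollama server with the chosen model pulled; the HuggingFace path is analogous but POSTs {"inputs": prompt} to the model's Inference API endpoint with an optional bearer token.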
Install Mechanism
There is no install spec and no downloads; the skill is instruction-only with a bundled shell script. It requires Python and the requests library: the README's Requirements text inconsistently claims 'standard library only', yet the script calls requests and the docs themselves say 'pip install requests'.
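The 'standard library only' claim is easy to verify before running anything. A generic dependency probe (not part of the skill) that checks whether a module is importable without importing it:

```python
import importlib.util


def has_module(name: str) -> bool:
    """Return True if a top-level module is importable on this interpreter."""
    return importlib.util.find_spec(name) is not None


# The script needs 'requests' despite the README's claim; 'json' is stdlib.
missing = [m for m in ("requests", "json") if not has_module(m)]
```

If 'requests' lands in the missing list, `pip install requests` resolves it, matching the skill's own docs.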
Credentials
Only optional credentials are used: HF_TOKEN (optional) for HuggingFace and OLLAMA_HOST / OLLAMA_MODEL for local Ollama. This is proportionate to the stated backends. The SKILL.md's top env block omits OLLAMA_* entries even though the script reads them — a documentation mismatch.
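A configuration loader consistent with the credentials described above would read everything optionally, with safe defaults. This is a sketch of the likely pattern, not the script's actual code; the default Ollama host matches the documented default, while the default model name is a placeholder assumption.

```python
import os


def load_config(env=None):
    """Read optional backend settings from the environment; nothing is required."""
    env = os.environ if env is None else env
    return {
        "hf_token": env.get("HF_TOKEN"),  # optional: higher HF rate limits
        "ollama_host": env.get("OLLAMA_HOST", "http://localhost:11434"),
        "ollama_model": env.get("OLLAMA_MODEL", "llama3"),  # model name is a guess
    }
```

Because every lookup falls back to a default or None, the skill can start with an empty environment, which matches the review's finding that no credential is mandatory.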
Persistence & Privilege
The skill does not request persistent/always-on privileges, does not modify other skills or system-wide configs, and does not store tokens itself.
Assessment
This skill appears to do what it says: generate product descriptions using local Ollama or HuggingFace. Before installing or running it, note two small issues: the script requires the Python 'requests' package even though the Requirements section says 'standard library only' (run: pip install requests), and the docs omit OLLAMA_HOST/OLLAMA_MODEL from the top env block even though the script uses them. If you plan to use the Ollama backend, ensure an Ollama server is running locally (default http://localhost:11434) and understand that a local inference server will accept prompts on that port. HF_TOKEN is optional (for higher rate limits); the skill does not request any other credentials. As always, inspect scripts you run on your machine and only run them from sources you trust.

Like a lobster shell, security has layers: review code before you run it.
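Before pointing the skill at the Ollama backend, it is worth confirming that something is actually listening on the expected port. A small stdlib-only reachability probe, assuming Ollama's standard /api/tags model-listing route:

```python
import urllib.error
import urllib.request


def ollama_reachable(host="http://localhost:11434", timeout=2.0):
    """Return True if an HTTP server answers on the Ollama host/port."""
    try:
        with urllib.request.urlopen(f"{host}/api/tags", timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False
```

A False result means no server (or a refused connection) at that address; start Ollama, or fall back to the HuggingFace backend.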
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
Runtime requirements
Environment variables
HF_TOKEN (optional): HuggingFace API token (free at huggingface.co)