Aliyun Qwen Deep Research
v1.0.0
Use when a task needs Alibaba Cloud Model Studio Qwen Deep Research models to plan multi-step investigation, run iterative web research, and produce structur...
MIT-0
Security Scan
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's name, description, script, and docs all align with running Alibaba Cloud Qwen Deep Research workflows. However, the registry metadata lists no required environment variables or primary credential while the SKILL.md explicitly requires DASHSCOPE_API_KEY (or adding dashscope_api_key to ~/.alibabacloud/credentials) — an omission in the declared requirements.
Instruction Scope
The SKILL.md and included script are narrowly scoped: they prepare a JSON request payload, save outputs under the skill's output directory, and document expected model strings and streaming behavior. There are no instructions to read unrelated system files, exfiltrate data, or contact endpoints outside of the expected Alibaba SDK usage. The only notable runtime actions are installing/using the dashscope SDK and requiring an API key.
Install Mechanism
There is no formal install spec in the registry (skill is instruction-only). The README instructs creating a venv and running pip install dashscope. Installing a package from PyPI (or another pip source) is a moderate-risk operation because it pulls remote code — confirm the dashscope package's provenance and trustworthiness before running.
Credentials
The skill needs an Alibaba/DASHSCOPE API key (DASHSCOPE_API_KEY or dashscope_api_key in ~/.alibabacloud/credentials) according to SKILL.md, but the published metadata lists no required env vars or primary credential. This mismatch is concerning: a credential is necessary for the skill to do its work, but it was not declared. Ensure you only provide a least-privilege API key and understand where credentials will be read/stored.
Persistence & Privilege
The skill does not request persistent, always-on privileges (always:false) and does not attempt to modify other skills or system-wide agent settings. It writes outputs under its own output directory, which is expected behavior.
What to consider before installing
Before installing or running this skill:
- Be aware the SKILL.md requires an Alibaba/DASHSCOPE API key, but the registry metadata did not declare any required credentials — treat that as an omission and do not supply broad credentials blindly.
- Review and verify the dashscope Python package (source, maintainers, download URL) before pip installing; if possible, install it in an isolated virtualenv and inspect its code or use a vetted mirror.
- Provide a least-privilege API key (scoped to only the model access needed), and prefer adding it to a dedicated credentials file rather than exposing it system-wide.
- Test the skill in a sandboxed environment first and inspect the generated request.json in output/aliyun-qwen-deep-research/requests/ to confirm it contains only expected data.
- If you need higher assurance, ask the publisher for the authoritative homepage/source, or request that required env vars be declared in the registry metadata so the credential requirement is explicit.
Like a lobster shell, security has layers — review code before you run it.
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
SKILL.md
Category: provider
Model Studio Qwen Deep Research
Validation
mkdir -p output/aliyun-qwen-deep-research
python -m py_compile skills/ai/research/aliyun-qwen-deep-research/scripts/prepare_deep_research_request.py && echo "py_compile_ok" > output/aliyun-qwen-deep-research/validate.txt
Pass criteria: command exits 0 and output/aliyun-qwen-deep-research/validate.txt is generated.
Output And Evidence
- Save research goals, confirmation answers, normalized request payloads, and final report snapshots under output/aliyun-qwen-deep-research/.
- Keep the exact model, region, and enable_feedback setting with each saved run.
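The per-run evidence above can be captured in a small sidecar file. A minimal sketch, where `save_run_metadata`, the `run_metadata.json` filename, and the run directory layout are illustrative conventions rather than part of the skill's documented interface:

```python
import json
import os

def save_run_metadata(run_dir, model, region, enable_feedback):
    """Write the model, region, and enable_feedback setting next to a saved run."""
    os.makedirs(run_dir, exist_ok=True)
    path = os.path.join(run_dir, "run_metadata.json")
    with open(path, "w", encoding="utf-8") as fh:
        json.dump(
            {"model": model, "region": region, "enable_feedback": enable_feedback},
            fh,
            indent=2,
        )
    return path

path = save_run_metadata(
    "output/aliyun-qwen-deep-research/run-001",
    model="qwen-deep-research",
    region="cn-beijing",
    enable_feedback=True,
)
print(path)
```

Keeping this file beside each run makes later comparisons between mainline and snapshot model outputs straightforward.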
Use this skill when the user wants a deep, multi-stage research workflow rather than a single chat completion.
Critical model names
Use one of these exact model strings:
- qwen-deep-research
- qwen-deep-research-2025-12-15
Selection guidance:
- Use qwen-deep-research for the current mainline model.
- Use qwen-deep-research-2025-12-15 when you need the snapshot with MCP tool-calling support and stronger reproducibility.
Prerequisites
- Install SDK in a virtual environment:
python3 -m venv .venv
. .venv/bin/activate
python -m pip install dashscope
- Set DASHSCOPE_API_KEY in your environment, or add dashscope_api_key to ~/.alibabacloud/credentials.
- This model currently applies to the China mainland (Beijing) region and uses its own API shape rather than OpenAI-compatible mode.
Normalized interface (research.run)
Request
- topic (string, required)
- model (string, optional): default qwen-deep-research
- messages (array&lt;object&gt;, optional)
- enable_feedback (bool, optional): default true
- stream (bool, optional): must be true
- attachments (array&lt;object&gt;, optional): image URLs and related context
Response
- status (string): stage status such as thinking, researching, or finished
- text (string, optional): streamed content chunk
- report (string, optional): final structured research report
- raw (object, optional)
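Putting the request schema into practice, a payload matching the fields above could be assembled as follows. This is a sketch: the `build_research_request` helper and its validation are our own, not taken from the skill's script.

```python
def build_research_request(topic, model="qwen-deep-research",
                           messages=None, enable_feedback=True,
                           attachments=None):
    """Assemble a normalized research.run request payload.

    Mirrors the documented request fields; stream is forced to True
    because the model only supports streaming output.
    """
    if not topic:
        raise ValueError("topic is required")
    payload = {
        "topic": topic,
        "model": model,
        "enable_feedback": enable_feedback,
        "stream": True,  # must be true per the interface
    }
    if messages:
        payload["messages"] = messages
    if attachments:
        payload["attachments"] = attachments
    return payload

req = build_research_request(
    "Compare cloud video generation model trade-offs for marketing automation."
)
print(req["model"], req["stream"])
```

Omitting the optional keys when they are empty keeps the saved request.json minimal and easy to diff between runs.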
Quick start
python skills/ai/research/aliyun-qwen-deep-research/scripts/prepare_deep_research_request.py \
--topic "Compare cloud video generation model trade-offs for marketing automation." \
--disable-feedback
Operational guidance
- Expect streaming output only.
- Keep the initial topic concrete and bounded; broad topics can trigger long iterative search plans.
- If the model asks follow-up questions and you already know the constraints, answer them explicitly to avoid wasted rounds.
- Use the snapshot model when you need stable evaluation runs or MCP tool-calling support.
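Because output is streaming-only, a consumer typically accumulates text chunks until a finished status arrives. A sketch of that loop over the normalized response objects, using a hand-built stand-in list of chunks (the real stream would come from the dashscope SDK):

```python
def consume_stream(chunks):
    """Accumulate streamed text; return (report, transcript) when finished."""
    transcript = []
    for chunk in chunks:
        if chunk.get("text"):
            transcript.append(chunk["text"])
        if chunk.get("status") == "finished":
            return chunk.get("report"), "".join(transcript)
    return None, "".join(transcript)

# Stand-in chunks shaped like the normalized response schema.
fake_stream = [
    {"status": "thinking", "text": "Planning..."},
    {"status": "researching", "text": "Searching sources..."},
    {"status": "finished", "report": "# Final report"},
]
report, text = consume_stream(fake_stream)
print(report)
```

Tracking the status field this way also lets you log how long each stage (thinking, researching) ran, which is useful when tuning topic scope.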
Output location
- Default output: output/aliyun-qwen-deep-research/requests/
- Override base dir with OUTPUT_DIR.
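The OUTPUT_DIR override can be resolved in the usual environment-variable fashion; a minimal sketch (the `resolve_output_dir` helper is illustrative):

```python
import os

def resolve_output_dir():
    """Return the requests directory, honoring an OUTPUT_DIR override."""
    base = os.environ.get("OUTPUT_DIR", "output")
    return os.path.join(base, "aliyun-qwen-deep-research", "requests")

os.environ["OUTPUT_DIR"] = "/tmp/research-out"
print(resolve_output_dir())  # /tmp/research-out/aliyun-qwen-deep-research/requests
```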
References
references/sources.md
Files
