net-deep-research

v1.0.0

Perform deep multi-source internet research before answering. Use when the user prefixes a request with /net, asks for the latest information, wants real-tim...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for h4444433333/net-deep-research.

Prompt Preview: Install & Setup
Install the skill "net-deep-research" (h4444433333/net-deep-research) from ClawHub.
Skill page: https://clawhub.ai/h4444433333/net-deep-research
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install net-deep-research

ClawHub CLI


npx clawhub@latest install net-deep-research
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name and description ask for live, multi-source research and the skill declares no binaries, installs, or extra credentials — which matches expectations for an instruction-only web-research helper.
Instruction Scope
SKILL.md gives a detailed research workflow (query planning, source prioritization, evidence extraction). This necessarily grants the agent broad discretion to fetch and synthesize web content, but the instructions do not request unrelated system files or credentials. The guidance is prescriptive about source types and fallback behavior but does not enumerate allowed domains or fetching mechanisms, which may lead to variable runtime behavior depending on the agent's browsing capability.
Install Mechanism
No install spec or code files are present; instruction-only skills have minimal on-disk risk.
Credentials
The skill requests no environment variables, credentials, or config paths — proportionate for a web-research instruction set.
Persistence & Privilege
The skill is not marked always:true and uses default invocation controls. It does not request permanent presence or modifications to other skills or system-wide settings.
Assessment
This skill appears coherent and lightweight: it contains only runtime instructions for multi-source web research and requests no credentials or installs. Before enabling it, consider:

  1. the agent will be allowed to fetch and summarize live web content, so avoid sending sensitive secrets in /net queries;
  2. SKILL.md does not restrict which domains may be used, so review logs or run a few test queries to confirm which sources the agent prefers;
  3. keep it user-invocable (not always:on) so it only runs when you explicitly request research.

If you want tighter control, require a whitelist of allowed sites or review the evidence links produced by the agent.


latest: vk976j360qt29t5rpr1wasc9bkd85pbt4
45 downloads
0 stars
1 version
Updated 12h ago
v1.0.0
MIT-0

Net Deep Research

When this skill is triggered, do not answer immediately.

Your job is to turn the user's request into a controlled research workflow:

  1. classify the question,
  2. generate complementary search queries,
  3. prefer stable public sources,
  4. extract evidence for concrete claims,
  5. resolve or expose conflicts,
  6. answer from an internal evidence map.

Trigger Handling

If the user message starts with /net:

  • remove the /net prefix
  • trim whitespace
  • treat the remainder as the actual research question

Then restate the question in one sentence before researching.
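
The prefix-stripping step above can be sketched in a few lines of Python (the function name is illustrative, not part of the skill):

```python
def extract_question(message: str) -> str:
    """Strip the /net prefix and surrounding whitespace to recover the
    actual research question from the user's message."""
    text = message.strip()
    if text.startswith("/net"):
        text = text[len("/net"):]
    return text.strip()
```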

Goal

Produce answers that are:

  • current
  • evidence-based
  • multi-source
  • explicit about uncertainty
  • grounded in broadly stable public sources

Do not rely on one weak page for an important claim.

Hard Rules

Apply these rules strictly:

  1. For predictive, forward-looking, market, macro, or scenario questions, separate the answer into two layers:
    • Verified Facts
    • Inference
  2. Every core conclusion must be tied to at least one primary source whenever possible.
  3. Secondary media, commentary, or community sources must not be the only support for a key conclusion.
  4. If direct official fetching fails, use a fixed fallback order instead of ad hoc substitution.

Mode Selection

Choose one primary_mode. Add one secondary_mode only if it clearly helps.

Mode A: Current Fact Check

Use for questions about:

  • latest status
  • current availability
  • recent releases
  • whether something is already live

Typical cues:

  • latest
  • now
  • currently
  • as of today
  • recently
  • launched
  • released

Mode B: Capability Or Compatibility Verification

Use for questions about:

  • whether something supports a feature
  • whether two things are compatible
  • supported versions, models, platforms, or plans

Typical cues:

  • support
  • compatible
  • can it
  • does it work with
  • available on

Mode C: Implementation Or How-To Research

Use for questions about:

  • how to build something
  • how to integrate or deploy something
  • best practices
  • architecture or implementation paths

Typical cues:

  • how to
  • implement
  • build
  • integrate
  • deploy
  • best practice

Mode D: Comparison, Selection, Or Policy Confirmation

Use for questions about:

  • which option is better
  • framework or tool selection
  • differences between alternatives
  • policy, institution, or official rules

Typical cues:

  • best
  • compare
  • vs
  • difference
  • choose
  • policy
  • official rule

Classification Rules

Apply these rules in order:

  1. If the question is about how to implement, integrate, deploy, or build, choose Mode C.
  2. If the question is about comparing options, choosing the best option, or checking policy or official rules, choose Mode D.
  3. If the question is about support, compatibility, or whether a feature exists, choose Mode B.
  4. If the question is about the latest or current status of a fact, choose Mode A.

Use a secondary mode only when both are necessary:

  • Mode A + Mode B: current support status
  • Mode B + Mode C: whether possible, then how to implement
  • Mode D + Mode C: choose a solution, then outline implementation
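
A minimal sketch of the ordered classification rules, assuming simple substring matching on the cue words listed for each mode (the cue lists here are illustrative, not exhaustive):

```python
# Ordered (cues, mode) rules; the first match wins, mirroring the
# classification order above: C, then D, then B, then A.
RULES = [
    (("how to", "implement", "build", "integrate", "deploy", "best practice"), "C"),
    (("best", "compare", " vs ", "difference", "choose", "policy", "official rule"), "D"),
    (("support", "compatible", "can it", "does it work with", "available on"), "B"),
    (("latest", "now", "currently", "as of today", "recently", "launched", "released"), "A"),
]

def classify(question: str) -> str:
    """Return the primary mode for a normalized question."""
    q = question.lower()
    for cues, mode in RULES:
        if any(cue in q for cue in cues):
            return mode
    return "A"  # fall back to a current-fact check when no cue matches
```

A real agent would weigh context rather than bare keywords; this only shows the rule ordering.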

Question Normalization

Before searching, extract:

  • subject
  • target_capability if any
  • time_scope if provided
  • region_scope if provided
  • version_scope if provided

Do not invent missing scopes.

Then rewrite the request as one normalized question.

Claim Extraction

Break the request into at most 3 critical claims.

Examples:

  • whether the capability exists
  • when the capability became available
  • what scope or limitations apply
  • which option is the best fit for the user's goal

Every important conclusion in the final answer should map back to one of these claims.

Query Planning

For each important claim, generate these core query slots:

  • direct_query
  • official_query
  • release_query
  • contradiction_query

Add one mode-specific slot:

  • Mode A -> recent_query
  • Mode B -> compatibility_query
  • Mode C -> implementation_query
  • Mode D -> comparison_query or policy_query

Keep the total query count between 4 and 8 for a normal request.
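
The slot assignment and the 4-8 budget can be sketched as follows (slot names come from the lists above; the budgeting logic is an illustrative assumption):

```python
CORE_SLOTS = ["direct_query", "official_query", "release_query", "contradiction_query"]
MODE_SLOTS = {"A": "recent_query", "B": "compatibility_query",
              "C": "implementation_query", "D": "comparison_query"}

def plan_queries(claims: list[str], mode: str, cap: int = 8) -> dict[str, list[str]]:
    """Assign the four core slots plus one mode-specific slot per claim,
    truncating so the total query count stays within the budget."""
    plan: dict[str, list[str]] = {}
    budget = cap
    for claim in claims:
        slots = CORE_SLOTS + [MODE_SLOTS[mode]]
        take = slots[:budget]
        if not take:
            break
        plan[claim] = take
        budget -= len(take)
    return plan
```

With two claims in Mode A this yields five slots for the first claim and three for the second, keeping the total at eight.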

Source Routing

Use source families, not fixed websites, as the primary routing method.

For predictive, market, macro, or outlook questions:

  • treat official, primary, and directly published data as the evidence base
  • treat secondary reports only as interpretation layers
  • do not let commentary outrank direct data

Mode A Priority

  1. official announcement, changelog, release notes
  2. official docs
  3. official repository releases
  4. high-quality secondary reporting

Mode B Priority

  1. official docs
  2. API reference or SDK docs
  3. official repository, release, or issue
  4. package registry pages

Mode C Priority

  1. official docs
  2. official repository README, examples, guides
  3. package registry pages
  4. stable technical references

Mode D Priority

  1. official docs or official sites
  2. government, institutional, or standards sources when relevant
  3. official repository, pricing, feature, or explanation pages
  4. high-quality secondary analysis

Preferred Source Families

Prefer these source families when relevant:

  • official documentation sites
  • official company or organization sites
  • official changelogs and release notes
  • GitHub repositories and releases
  • package registries such as PyPI and npm
  • standards sites such as RFC, IETF, and W3C
  • government and institutional sites
  • stable technical references such as MDN

Accessibility And Stability Rules

Prefer sources that are:

  • public
  • readable without login
  • likely to remain available
  • broadly reachable for both international and China-based users when possible

Avoid depending on:

  • login-gated content
  • short-form social posts
  • low-signal community threads as the only evidence
  • content farms or SEO spam pages
  • unattributed reposts

If direct official fetching fails, use this fixed fallback order and do not skip steps:

  • official page -> official mirror or official alternate page -> official changelog or release note -> official GitHub or official repository page -> package registry or standards page -> stable technical reference
  • government or institution page -> official FAQ -> official press release -> official transcript or bulletin -> high-quality institutional analysis

Do not jump straight from an unavailable official source to media commentary if stronger fallback layers still exist.

Source Filtering

Reject a source as key evidence if it:

  • requires login for the core content
  • does not clearly support any claim
  • is only a repost without the original source
  • is obviously low quality or SEO-generated

Source Scoring

Score each candidate source across 5 dimensions, each from 0 to 2:

  • authority
  • stability
  • accessibility
  • freshness
  • relevance

Total score range: 0-10

Minimum rules:

  • do not use a source with total score below 4 as key evidence
  • every important claim should have at least one source with both:
    • authority >= 1
    • relevance >= 1
  • every core conclusion should be anchored to at least one primary source whenever possible
  • do not let secondary media be the only support for a key conclusion when a stronger source family is available
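
The scoring arithmetic and minimum gates can be sketched directly from the rules above (dimension names and thresholds as stated; function names are illustrative):

```python
DIMENSIONS = ("authority", "stability", "accessibility", "freshness", "relevance")

def total_score(scores: dict[str, int]) -> int:
    """Sum the five 0-2 dimension scores into a 0-10 total."""
    return sum(scores[d] for d in DIMENSIONS)

def meets_minimum(scores: dict[str, int]) -> bool:
    """A source below a total of 4 must not be used as key evidence."""
    return total_score(scores) >= 4

def can_anchor_claim(scores: dict[str, int]) -> bool:
    """Each important claim needs at least one source with both
    authority >= 1 and relevance >= 1 (in addition to the total gate)."""
    return (meets_minimum(scores)
            and scores["authority"] >= 1
            and scores["relevance"] >= 1)
```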

Evidence Extraction

For each claim, extract evidence items with:

  • claim id
  • source title
  • source URL
  • source date hint if available
  • evidence snippet
  • source score
  • stance: support, oppose, or partial

Do not over-quote. Extract only the part needed to support the claim.
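
The evidence-item fields above map naturally onto a small record type; a sketch (the class name is an assumption, the fields are taken from the list):

```python
from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class EvidenceItem:
    claim_id: str
    source_title: str
    source_url: str
    snippet: str            # only the part needed to support the claim
    source_score: int       # 0-10 total from the scoring step
    stance: Literal["support", "oppose", "partial"]
    source_date_hint: Optional[str] = None  # only if available
```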

Conflict Handling

If a claim has both supporting and opposing evidence, explicitly mark it as conflicted.

Only use these conflict causes:

  • version difference
  • timing difference
  • region difference
  • plan tier difference
  • wording ambiguity
  • evidence insufficiency

Do not invent a conflict explanation without support.
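
Restricting conflicts to the fixed cause list can be enforced mechanically; a sketch (the helper and claim shape are assumptions, the cause strings come from the list above):

```python
# Only these causes may be attached to a conflicted claim.
CONFLICT_CAUSES = frozenset({
    "version difference", "timing difference", "region difference",
    "plan tier difference", "wording ambiguity", "evidence insufficiency",
})

def mark_conflict(claim: dict, cause: str) -> None:
    """Flag a claim as conflicted, rejecting any invented explanation."""
    if cause not in CONFLICT_CAUSES:
        raise ValueError(f"unsupported conflict cause: {cause}")
    claim["conflicted"] = True
    claim["conflict_cause"] = cause
```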

Confidence Rules

Assign confidence per key claim:

High

  • at least 2 supporting sources
  • at least 1 strong primary source
  • no major unresolved conflict

Medium

  • at least 1 reasonably strong source
  • some scope limitation or minor conflict

Low

  • only weak support
  • or unresolved conflict
  • or no clear primary source
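
A simplified mapping of these rules onto a function (the inputs are illustrative counts; the "minor conflict" nuance for Medium is collapsed into a single unresolved-conflict flag here):

```python
def assign_confidence(n_support: int, n_primary: int,
                      unresolved_conflict: bool) -> str:
    """Map a claim's evidence profile onto High / Medium / Low."""
    if n_support >= 2 and n_primary >= 1 and not unresolved_conflict:
        return "High"
    if n_support >= 1 and not unresolved_conflict:
        return "Medium"
    return "Low"
```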

Evidence Map

Before writing the answer, build this internal structure:

  • question_restatement
  • primary_mode
  • secondary_mode if any
  • claims
  • supporting_sources
  • conflicts
  • uncertainties
  • answer_outline

For predictive, market, macro, or outlook questions, the evidence map must also separate:

  • verified_facts
  • inference

Do not skip this step.

Final Answer Format

Default section order:

  1. Question Restatement
  2. Short Answer
  3. Key Findings
  4. Cross-Source Notes
  5. Uncertainties or Limits
  6. Sources

For predictive, market, macro, or outlook questions, use this stricter order:

  1. Question Restatement
  2. Short Answer
  3. Verified Facts
  4. Inference
  5. Cross-Source Notes
  6. Uncertainties or Limits
  7. Sources

Writing Rules

In Short Answer:

  • answer directly
  • keep it concise

In Key Findings:

  • separate confirmed facts from implications
  • prioritize evidence from official or primary sources

In Cross-Source Notes:

  • explain where sources agree
  • explain where they differ
  • mention version, timing, regional, or plan differences when relevant

In Verified Facts for predictive or outlook questions:

  • include only directly supported facts
  • keep interpretation minimal
  • attach stronger sources first

In Inference for predictive or outlook questions:

  • derive each inference from the verified facts above
  • do not present inference as confirmed fact
  • explicitly signal when the inference depends on policy, timing, or earnings assumptions

In Uncertainties or Limits:

  • clearly state what could not be verified
  • do not hide missing evidence

In Sources:

  • list the most useful sources, not every weak result

Fast Path

Use a fast path only when:

  • the question is simple
  • there is a clear primary source
  • there is little risk of ambiguity

Even then:

  • check the primary source
  • add one independent supporting source if practical

Example Handling Pattern

If the user asks:

  • /net What is the best agent framework right now, and use it to help me design a game?

Then:

  • classify as Mode D with Mode C secondary
  • compare current agent framework candidates using official docs, GitHub, releases, and stable public references
  • decide which framework best fits the requested goal
  • then outline a game-building workflow using that framework
  • clearly separate:
    • evidence for framework selection
    • implementation guidance for the game workflow

Final Reminder

Research first. Structure the evidence second. Answer last.
