Net Deep Research
When this skill is triggered, do not answer immediately.
Your job is to turn the user's request into a controlled research workflow:
- classify the question,
- generate complementary search queries,
- prefer stable public sources,
- extract evidence for concrete claims,
- resolve or expose conflicts,
- answer from an internal evidence map.
Trigger Handling
If the user message starts with /net:
- remove the /net prefix
- trim whitespace
- treat the remainder as the actual research question
Then restate the question in one sentence before researching.
Goal
Produce answers that are:
- current
- evidence-based
- multi-source
- explicit about uncertainty
- grounded in broadly stable public sources
Do not rely on one weak page for an important claim.
Hard Rules
Apply these rules strictly:
- For predictive, forward-looking, market, macro, or scenario questions, separate the answer into two layers: verified facts and inference.
- Every core conclusion must be tied to at least one primary source whenever possible.
- Secondary media, commentary, or community sources must not be the only support for a key conclusion.
- If direct official fetching fails, use a fixed fallback order instead of ad hoc substitution.
Mode Selection
Choose one primary_mode. Add one secondary_mode only if it clearly helps.
Mode A: Current Fact Check
Use for questions about:
- latest status
- current availability
- recent releases
- whether something is already live
Typical cues:
- latest
- now
- currently
- as of today
- recently
- launched
- released
Mode B: Capability Or Compatibility Verification
Use for questions about:
- whether something supports a feature
- whether two things are compatible
- supported versions, models, platforms, or plans
Typical cues:
- support
- compatible
- can it
- does it work with
- available on
Mode C: Implementation Or How-To Research
Use for questions about:
- how to build something
- how to integrate or deploy something
- best practices
- architecture or implementation paths
Typical cues:
- how to
- implement
- build
- integrate
- deploy
- best practice
Mode D: Comparison, Selection, Or Policy Confirmation
Use for questions about:
- which option is better
- framework or tool selection
- differences between alternatives
- policy, institution, or official rules
Typical cues:
- best
- compare
- vs
- difference
- choose
- policy
- official rule
Classification Rules
Apply these rules in order:
- If the question is about how to implement, integrate, deploy, or build, choose Mode C.
- If the question is about comparing options, choosing the best option, or checking policy or official rules, choose Mode D.
- If the question is about support, compatibility, or whether a feature exists, choose Mode B.
- If the question is about the latest or current status of a fact, choose Mode A.
Use a secondary mode only when both are necessary:
Mode A + Mode B: current support status
Mode B + Mode C: whether possible, then how to implement
Mode D + Mode C: choose a solution, then outline implementation
Question Normalization
Before searching, extract:
subject
target_capability if any
time_scope if provided
region_scope if provided
version_scope if provided
Do not invent missing scopes.
Then rewrite the request as one normalized question.
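The extracted fields can be modeled as a simple record where missing scopes stay unset rather than being invented. The class name and the example request are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NormalizedQuestion:
    """Fields extracted before searching. Scope fields stay None
    when the user did not provide them; never invent missing scopes."""
    subject: str
    target_capability: Optional[str] = None
    time_scope: Optional[str] = None
    region_scope: Optional[str] = None
    version_scope: Optional[str] = None

# Hypothetical request: "Does library X support streaming in v2?"
q = NormalizedQuestion(subject="library X",
                       target_capability="streaming",
                       version_scope="v2")
```

Note that `time_scope` and `region_scope` remain `None` here because the example request never mentioned them.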
Claim Extraction
Break the request into at most 3 critical claims.
Examples:
- whether the capability exists
- when the capability became available
- what scope or limitations apply
- which option is the best fit for the user's goal
Every important conclusion in the final answer should map back to one of these claims.
Query Planning
For each important claim, generate these core query slots:
direct_query
official_query
release_query
contradiction_query
Add one mode-specific slot:
Mode A -> recent_query
Mode B -> compatibility_query
Mode C -> implementation_query
Mode D -> comparison_query or policy_query
Keep the total query count between 4 and 8 for a normal request.
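One way to sketch the slot plan, assuming one extra slot per primary mode (Mode D is shown with its comparison variant; `policy_query` would replace it for policy questions):

```python
# Four core slots generated for every important claim.
CORE_SLOTS = ["direct_query", "official_query",
              "release_query", "contradiction_query"]

# Mode-specific extra slot, keyed by primary mode.
MODE_SLOT = {"A": "recent_query",
             "B": "compatibility_query",
             "C": "implementation_query",
             "D": "comparison_query"}

def plan_query_slots(mode: str) -> list[str]:
    """Return the query slots for one claim: the four core slots
    plus the slot specific to the chosen mode."""
    return CORE_SLOTS + [MODE_SLOT[mode]]
```

With five slots per claim and at most three claims, dropping redundant queries to stay within the 4-8 total is part of planning, not an afterthought.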
Source Routing
Use source families, not fixed websites, as the primary routing method.
For predictive, market, macro, or outlook questions:
- treat official, primary, and directly published data as the evidence base
- treat secondary reports only as interpretation layers
- do not let commentary outrank direct data
Mode A Priority
- official announcement, changelog, release notes
- official docs
- official repository releases
- high-quality secondary reporting
Mode B Priority
- official docs
- API reference or SDK docs
- official repository, release, or issue
- package registry pages
Mode C Priority
- official docs
- official repository README, examples, guides
- package registry pages
- stable technical references
Mode D Priority
- official docs or official sites
- government, institutional, or standards sources when relevant
- official repository, pricing, feature, or explanation pages
- high-quality secondary analysis
Preferred Source Families
Prefer these source families when relevant:
- official documentation sites
- official company or organization sites
- official changelogs and release notes
- GitHub repositories and releases
- package registries such as PyPI and npm
- standards sites such as RFC, IETF, and W3C
- government and institutional sites
- stable technical references such as MDN
Accessibility And Stability Rules
Prefer sources that are:
- public
- readable without login
- likely to remain available
- broadly reachable for both international and China-based users when possible
Avoid depending on:
- login-gated content
- short-form social posts
- low-signal community threads as the only evidence
- content farms or SEO spam pages
- unattributed reposts
If direct official fetching fails, use this fixed fallback order and do not skip steps:
- official page -> official mirror or official alternate page -> official changelog or release note -> official GitHub or official repository page -> package registry or standards page -> stable technical reference
- government or institution page -> official FAQ -> official press release -> official transcript or bulletin -> high-quality institutional analysis
Do not jump straight from an unavailable official source to media commentary if stronger fallback layers still exist.
Source Filtering
Reject a source as key evidence if it:
- requires login for the core content
- does not clearly support any claim
- is only a repost without the original source
- is obviously low quality or SEO-generated
Source Scoring
Score each candidate source across 5 dimensions, each from 0 to 2:
authority
stability
accessibility
freshness
relevance
Total score range: 0-10
Minimum rules:
- do not use a source with total score below 4 as key evidence
- every important claim should have at least one source with both:
authority >= 1
relevance >= 1
- every core conclusion should be anchored to at least one primary source whenever possible
- do not let secondary media be the only support for a key conclusion when a stronger source family is available
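The scoring and the first two minimum rules can be expressed directly. The function names are illustrative; the second helper checks whether a single source can serve as the anchoring source for a claim:

```python
def score_source(authority: int, stability: int, accessibility: int,
                 freshness: int, relevance: int) -> int:
    """Total a source's five dimension scores. Each dimension is 0-2,
    so the total falls in the 0-10 range."""
    scores = (authority, stability, accessibility, freshness, relevance)
    assert all(0 <= s <= 2 for s in scores), "each dimension is 0-2"
    return sum(scores)

def usable_as_key_evidence(authority: int, stability: int,
                           accessibility: int, freshness: int,
                           relevance: int) -> bool:
    """A source below a total of 4 is rejected as key evidence, and a
    source can anchor a claim only with authority >= 1 and relevance >= 1."""
    total = score_source(authority, stability, accessibility,
                         freshness, relevance)
    return total >= 4 and authority >= 1 and relevance >= 1
```

The remaining rules (primary-source anchoring, not letting secondary media stand alone) operate per claim rather than per source, so they are not captured by this per-source check.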
Evidence Extraction
For each claim, extract evidence items with:
- claim id
- source title
- source URL
- source date hint if available
- evidence snippet
- source score
- stance: support, oppose, or partial
Do not over-quote. Extract only the part needed to support the claim.
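The evidence item fields above map naturally onto a record type. The class name is illustrative, and the stance check simply enforces the three allowed values:

```python
from dataclasses import dataclass
from typing import Optional

STANCES = {"support", "oppose", "partial"}

@dataclass
class EvidenceItem:
    claim_id: str
    source_title: str
    source_url: str
    evidence_snippet: str              # only the part needed for the claim
    source_score: int                  # 0-10 total from source scoring
    stance: str                        # support, oppose, or partial
    source_date_hint: Optional[str] = None

    def __post_init__(self):
        assert self.stance in STANCES, "stance must be support/oppose/partial"
```

Keeping the snippet field short is what enforces the "do not over-quote" rule in practice.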
Conflict Handling
If a claim has both supporting and opposing evidence, explicitly mark it as conflicted.
Only use these conflict causes:
- version difference
- timing difference
- region difference
- plan tier difference
- wording ambiguity
- evidence insufficiency
Do not invent a conflict explanation without support.
Confidence Rules
Assign confidence per key claim:
High
- at least 2 supporting sources
- at least 1 strong primary source
- no major unresolved conflict
Medium
- at least 1 reasonably strong source
- some scope limitation or minor conflict
Low
- only weak support
- or unresolved conflict
- or no clear primary source
Evidence Map
Before writing the answer, build this internal structure:
question_restatement
primary_mode
secondary_mode if any
claims
supporting_sources
conflicts
uncertainties
answer_outline
For predictive, market, macro, or outlook questions, the evidence map must also separate verified facts from inference.
Do not skip this step.
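The internal structure can be pictured as a plain mapping; field names follow the list above, and the last two fields apply only to predictive or outlook questions:

```python
# Illustrative shape of the internal evidence map. The placeholder
# values show types only; a real map carries the researched content.
evidence_map = {
    "question_restatement": "",
    "primary_mode": "A",
    "secondary_mode": None,      # set only when a second mode clearly helps
    "claims": [],                # at most 3 critical claims
    "supporting_sources": [],
    "conflicts": [],
    "uncertainties": [],
    "answer_outline": [],
    # predictive / market / macro / outlook questions only:
    "verified_facts": [],
    "inference": [],
}
```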
Final Answer Format
Default section order:
Question Restatement
Short Answer
Key Findings
Cross-Source Notes
Uncertainties or Limits
Sources
For predictive, market, macro, or outlook questions, use this stricter order:
Question Restatement
Short Answer
Verified Facts
Inference
Cross-Source Notes
Uncertainties or Limits
Sources
Writing Rules
In Short Answer:
- answer directly
- keep it concise
In Key Findings:
- separate confirmed facts from implications
- prioritize evidence from official or primary sources
In Cross-Source Notes:
- explain where sources agree
- explain where they differ
- mention version, timing, regional, or plan differences when relevant
In Verified Facts for predictive or outlook questions:
- include only directly supported facts
- keep interpretation minimal
- attach stronger sources first
In Inference for predictive or outlook questions:
- derive each inference from the verified facts above
- do not present inference as confirmed fact
- explicitly signal when the inference depends on policy, timing, or earnings assumptions
In Uncertainties or Limits:
- clearly state what could not be verified
- do not hide missing evidence
In Sources:
- list the most useful sources, not every weak result
Fast Path
Use a fast path only when:
- the question is simple
- there is a clear primary source
- there is little risk of ambiguity
Even then:
- check the primary source
- add one independent supporting source if practical
Example Handling Pattern
If the user asks:
/net What is the best agent framework right now, and use it to help me design a game?
Then:
- classify as Mode D with Mode C secondary
- compare current agent framework candidates using official docs, GitHub, releases, and stable public references
- decide which framework best fits the requested goal
- then outline a game-building workflow using that framework
- clearly separate:
- evidence for framework selection
- implementation guidance for the game workflow
Final Reminder
Research first.
Structure the evidence second.
Answer last.