Install
openclaw skills install advisor-finder

Find and analyze potential academic advisors, supervisors, or faculty for a target university, school, department, or research direction. Use when the user asks to find professors/mentors/PI/faculty, screen advisors for master's or PhD applications, compare faculty fit, identify recent papers, or rank who is most worth contacting based on research match, activity, and public profile evidence.

Find candidate advisors for a target university or department, verify what they actually work on, and rank who is most worth contacting.
Turn a vague request like “help me find professors doing LLMs at Zhejiang University Software College” into a defensible shortlist with evidence.
The skill must not stop at listing names. It must: verify what each candidate actually works on, attach evidence to every claim, and rank who is most worth contacting first.
Identify as many of these as the user already gave: target university, school or department, research direction, degree level (master's or PhD), and the user's own background.
If key inputs are missing, make the smallest reasonable assumption and continue. Ask follow-up only when the ambiguity would materially change the result.
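A minimal sketch of how the collected inputs might be held, assuming hypothetical field names (the skill itself prescribes no schema); this and the later sketches use Python purely for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AdvisorQuery:
    # Hypothetical field names; the skill does not prescribe a schema.
    university: str                   # e.g. "Zhejiang University"
    school: Optional[str] = None      # e.g. "Software College"
    direction: Optional[str] = None   # e.g. "LLMs"
    degree: Optional[str] = None      # "master's" or "PhD"
    background: Optional[str] = None  # the user's own profile, if given

def fill_defaults(q: AdvisorQuery) -> AdvisorQuery:
    # Smallest reasonable assumption: an unspecified degree level widens
    # the search instead of blocking on a follow-up question.
    if q.degree is None:
        q.degree = "any"
    return q
```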
Prefer evidence in this order: official faculty directory and profile pages, then official lab pages and faculty personal homepages, then recent publication records, then third-party datasets such as the CSrankings-style sources in references/github-datasets.md.
Read references/github-datasets.md when the official school site is weak and the target field is computer science or nearby areas.
Do not treat one old paper as proof of a current research direction. Look for repeated recent evidence.
Find the official university page and then the official school / department / faculty directory page. Record the exact URLs you used, so every later claim can be traced back to an official source.
If the school is a top university or the site is messy, read references/top-university-sites.md first for likely official entry points.
If the target is a major Chinese university and the user is looking for economics/management, computer science, software, or data science related schools, also read references/china-top-university-hints.md for common official entry patterns.
If the official directory is poor, supplement with official lab pages and faculty personal pages.
If the site blocks scraping or key fields are hidden, follow references/site-failure-playbook.md instead of guessing.
Collect candidate faculty members with at least: name, title or role, stated research directions, and a link to an official profile or homepage.
Before moving on, assign a pool completeness label, e.g. complete, partial, or minimal, based on how much of the official faculty list you could actually recover.
If the target field is computer science, AI, CV, NLP, systems, ML, HCI, security, or robotics, and the official faculty pool is weak, consult references/github-datasets.md for CSrankings-style recovery.
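A sketch of the per-candidate record and the completeness label, assuming the field names and the complete/partial/minimal labels above; the thresholds are illustrative, not part of the skill:

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    # Minimum fields per faculty member; names are illustrative.
    name: str
    title: str
    stated_directions: list[str]
    profile_url: str
    evidence: list[str] = field(default_factory=list)  # URLs backing each claim

def pool_completeness(found: int, expected: int | None) -> str:
    # Label the candidate pool; "expected" is the directory size, if known.
    if expected is None or expected == 0:
        return "unknown"
    ratio = found / expected
    if ratio >= 0.9:
        return "complete"
    if ratio >= 0.5:
        return "partial"
    return "minimal"
```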
Exclude obvious non-target roles unless the user asked broadly: e.g. administrative staff, emeritus or retired faculty, and adjunct or visiting appointments.
Use official profile text, homepage text, and lab descriptions to do a coarse filter.
Examples: for an LLM-focused query, a profile that only mentions software testing is a weak match, while one mentioning natural language processing or large models passes the coarse filter.
Keep borderline candidates for verification rather than dropping them too early.
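A coarse-filter sketch that keeps borderline candidates for later verification; the keyword lists are assumptions chosen for the LLM example above, not part of the skill:

```python
def coarse_filter(candidates, match_terms, borderline_terms):
    # Clear matches are kept, borderline cases are parked for verification,
    # and only clear non-matches are dropped at this stage.
    kept, borderline = [], []
    for c in candidates:
        text = " ".join(c.stated_directions).lower()
        if any(t in text for t in match_terms):
            kept.append(c)
        elif any(t in text for t in borderline_terms):
            borderline.append(c)
    return kept, borderline

# Usage for the LLM example (terms are illustrative):
# kept, borderline = coarse_filter(pool,
#     match_terms=["large language model", "llm", "natural language processing"],
#     borderline_terms=["machine learning", "data mining"])
```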
For each shortlisted candidate, inspect recent papers and profile evidence.
Minimum checks: at least one paper in the target direction within the last two to three years, repeated recent evidence rather than a single old paper, and a homepage or lab page that is still maintained.
Use paper-search or scholar-style skills if available. Use paper-parse only when a specific paper is worth reading deeply.
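One way to encode the repeated-recent-evidence rule from the verification step; the three-year window and two-paper minimum are assumed thresholds, not prescribed by the skill:

```python
from datetime import date

def looks_active(paper_years: list[int], window: int = 3, min_recent: int = 2) -> bool:
    # One old paper is not proof of a current direction: require several
    # papers in the target direction inside the recency window.
    cutoff = date.today().year - window
    return sum(1 for y in paper_years if y >= cutoff) >= min_recent
```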
For each candidate, write a compact profile covering: verified research direction, recent representative papers, activity level, and the evidence links behind each claim.
Use references/scoring-template.md as a soft rubric. Do not fake precision, but do make the ranking logic explicit.
If the user gave background info, compare: the user's stated background against each candidate's verified direction and recent work.
If the user gave no background, rank only by target-direction fit and activity.
Produce a shortlist with explicit ranking bands: e.g. contact first, worth contacting, backup, and not recommended.
Use soft scoring, not fake precision. Judge each dimension as High / Medium / Low or 0-5.
Suggested dimensions: research-direction match, recent activity, evidence strength, and, if the user gave background, fit with the user's profile.
Do not hide uncertainty. If evidence is weak, say so.
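A soft-scoring sketch that turns High / Medium / Low judgments into the ranking bands above; band names and cutoffs are illustrative, and the point is explicit logic rather than fake precision:

```python
LEVELS = {"High": 2, "Medium": 1, "Low": 0}

def ranking_band(scores: dict[str, str]) -> str:
    # scores maps dimension -> "High"/"Medium"/"Low", e.g.
    # {"direction_match": "High", "activity": "Medium", "evidence": "High"}
    total = sum(LEVELS[v] for v in scores.values())
    best = 2 * len(scores)
    if total >= best - 1:
        return "contact first"
    if total >= best // 2:
        return "worth contacting"
    return "backup"
```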
Start with a brief scope line: which university, school, and direction were searched, and how complete the candidate pool is.
Then output a ranked table or bullet list.
For each faculty member include: name and title, verified research direction, one or two pieces of recent evidence with years, the ranking band, and a one-line reason.
Then end with: a short recommendation of who to contact first, plus anything that remains uncertain and should be verified manually.
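A sketch of one output line in the required direct Chinese style; the field layout, separators, and the sample values are assumptions, since the skill only fixes what must be covered:

```python
def render_entry(name, title, direction, evidence, band, reason):
    # One compact line per faculty member, readable at a glance.
    return f"{name}（{title}）｜方向：{direction}｜近期证据：{evidence}｜档位：{band}｜理由：{reason}"

# e.g. render_entry("张三", "教授", "大模型推理", "2024 两篇一作", "优先联系", "方向高度匹配且近期活跃")
```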
Do not: fabricate papers, titles, or affiliations; present one old paper as a current direction; or hide weak evidence behind confident language.
If the user asks for deeper analysis of one professor: switch to per-person depth, read the key papers with paper-parse, and expand that profile with concrete evidence.
If the user asks for many schools: run the same pipeline per school and keep a separate scope line and pool completeness label for each.
If a school website is especially bad: follow references/site-failure-playbook.md and state plainly which fields could not be verified.
Write in direct Chinese. Prefer short, useful judgments over inflated academic prose. The user should be able to read the result and immediately know who is worth contacting first and why.