Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Rental Helper

v1.3.0

Rental Helper (租房助手) - helps users record listing details, calculate rental budgets, generate comparison tables, provide a pitfall-avoidance guide, recommend listings, batch-import data, parse web pages, recognize images, and scrape listing sites. Use cases: (1) record and filter listings - say "record a new listing" or "show my listing list"; (2) calculate a rental budget - say "help me calculate my rental budget" or "how much will this place cost per month"; (3) generate...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for sosshuai/rental-helper.

Prompt Preview: Install & Setup
Install the skill "Rental Helper" (sosshuai/rental-helper) from ClawHub.
Skill page: https://clawhub.ai/sosshuai/rental-helper
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install rental-helper

ClawHub CLI


npx clawhub@latest install rental-helper
Security Scan
VirusTotal
Suspicious
OpenClaw
Benign
medium confidence
Purpose & Capability
The name/description match the included scripts: listing add/list, recommendation, budget calc, OCR, URL parsing and site scraping. All declared capabilities are implemented by the scripts and no unrelated services or credentials are required.
Instruction Scope
SKILL.md instructs the agent to call local scripts, accept user-provided URLs/images/CSV files, and save data; it also explains interactive browser login for scraping. The instructions do not ask for extra environment secrets or to read unrelated system files. Note: the scripts perform network requests to target listing sites and may prompt the user to login in a real browser (the browser login happens on the user's machine, not inside the skill).
Install Mechanism
There is no formal install spec (instruction-only), but several scripts recommend installing dependencies (selenium, webdriver-manager, pytesseract/easyocr). At runtime webdriver-manager will download browser driver binaries; crawl_listings.py disables SSL certificate verification for fetching pages (see next dimension). These are common for scraping but increase runtime network activity and should be reviewed by the user.
Credentials
The skill declares no required environment variables, credentials, or external config paths. Scripts operate on local files under ~/.openclaw/workspace/rental-data which is proportional to the stated purpose of storing listings and viewings.
Persistence & Privilege
always:false and the skill writes only to its own data directory (~/.openclaw/workspace/rental-data). It does not modify other skills or system-wide agent settings. Autonomous invocation is allowed (platform default) and appropriate for a user-invoked assistant.
Assessment
This skill is internally coherent for managing and scraping rental listings, but review the following before installing/using:

  1. Local storage: scraped data, contacts and phone numbers are saved to ~/.openclaw/workspace/rental-data; treat that as potentially sensitive personal data and back it up or secure it appropriately.
  2. SSL verification: crawl_listings.py intentionally disables SSL certificate verification (ssl._create_default_https_context = ssl._create_unverified_context), which weakens network security and can expose you to MITM attacks when fetching pages; consider removing that line or running the script only on trusted networks.
  3. Webdriver/runtime downloads: selenium + webdriver-manager will download browser drivers at runtime; verify these components and run in a trusted environment.
  4. Interactive login: crawl_interactive.py asks you to log in in a real browser (QR/code). Do not paste credentials into the skill; perform the login in the browser as instructed.
  5. Dependencies: OCR and scraping require external packages (pytesseract, easyocr, tesseract, selenium); install them from trusted package sources and verify versions.
  6. Legal/ToS: scraping sites may violate target sites' terms of service; use responsibly and respect robots.txt.

If you want higher assurance, ask for the omitted files to be reviewed, or run the code in an isolated environment (container/VM) before giving it regular access to your data.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97cmf2eex5vj50wsgmqs85gw184e7vy
101 downloads
0 stars
4 versions
Updated 2w ago
v1.3.0
MIT-0

Rental Helper (租房助手)

Helps users manage the entire rental process efficiently, from discovering and recording listings to comparing them and making a final decision.

Features

1. Listing Recording & Filtering

Record a new listing:

The user says: "Record a new listing"
Information to collect:
- Address / complex name
- Monthly rent
- Deposit scheme (e.g. one month's deposit plus one month's rent, or plus three months' rent)
- Layout (single room / whole unit / number of bedrooms and living rooms)
- Area
- Floor / elevator access
- Orientation
- Decoration / cleanliness
- Transport (distance to subway or bus stops)
- Nearby amenities
- Commute time to the user's workplace or target location
- Viewing time / contact info
- Pros and cons notes
- Listing source (Beike / Lianjia / Douban / Xianyu, etc.)
- Listing link / photos (to review the room's condition)
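A record built from the fields above might look like the following. This is only a sketch: the actual schema `scripts/add_listing.py` writes is not shown on this page, so the field names here are assumptions borrowed from the batch-import columns (`name`, `rent`, `deposit`, and so on).

```python
import json

# Hypothetical listing record; the real schema of listings.json is not
# documented here, so these field names are assumptions.
listing = {
    "id": 1,
    "name": "Sunshine Garden",   # complex name (required)
    "rent": 4500,                # monthly rent in yuan (required)
    "deposit": "one month's deposit, one month's rent",
    "room_type": "single room",
    "area": 25,                  # square meters
    "floor": "12/20, elevator",
    "orientation": "south",
    "transport": "800 m to subway",
    "status": "considering",
    "source": "ke.com",
    "url": "https://example.com/listing/1",
}

print(json.dumps(listing, ensure_ascii=False, indent=2))
```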

View the listing list:

  • List all recorded listings
  • Filter by rent, area, location, layout, decoration, etc.

View a single listing's details:

  • Show all information for that listing

Update a listing's status:

  • Mark as "considering" / "viewed" / "interested" / "dropped" / "signed"

2. Smart Listing Recommendations

Location-based recommendations:

The user says:
- "Recommend a few listings for me"
- "My office is at xxx; recommend listings within a 10-minute walk, priced under xx"
- "I'm looking to rent near xx; recommend listings within 3 km that are close to a subway or bus stop"
- "I want a single room or a whole unit, and it must be clean"

Workflow:
1. Ask for the user's requirements (if not already provided)
   - Target location (workplace / school / business district)
   - Budget range
   - Commute mode (walking / subway / bus)
   - Commute time limit
   - Layout requirements (single room / whole unit / shared)
   - Other requirements (decoration, floor, orientation, etc.)

2. Filter matching items from the recorded listings

3. Rank recommendations by match quality
   - Commute convenience
   - Price reasonableness
   - Overall rating

Recommendation algorithm:

  • Distance: filter by commute mode and time
  • Price: within the budget
  • Layout: matches the user's requirement
  • Overall score: factors in decoration, amenities, etc.
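The filter-then-rank logic above can be sketched in a few lines. This is an illustration only, not the actual `scripts/recommend_listings.py`; the field names and the 0.5/0.3/0.2 weights are assumptions.

```python
def recommend(listings, budget, room_type, max_commute_min):
    """Filter by hard constraints, then rank by a weighted match score."""
    candidates = [
        l for l in listings
        if l["rent"] <= budget
        and l["room_type"] == room_type
        and l["commute_min"] <= max_commute_min
    ]

    def score(l):
        # Cheaper rent and shorter commute score higher; weights are illustrative.
        price_score = 1 - l["rent"] / budget
        commute_score = 1 - l["commute_min"] / max_commute_min
        rating_score = l.get("rating", 3) / 5
        return 0.5 * commute_score + 0.3 * price_score + 0.2 * rating_score

    return sorted(candidates, key=score, reverse=True)

listings = [
    {"name": "A", "rent": 4000, "room_type": "single", "commute_min": 10, "rating": 4},
    {"name": "B", "rent": 4800, "room_type": "single", "commute_min": 25, "rating": 5},
    {"name": "C", "rent": 5200, "room_type": "single", "commute_min": 5},  # over budget
]
top = recommend(listings, budget=5000, room_type="single", max_commute_min=30)
print([l["name"] for l in top])  # C is filtered out; A outranks B
```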

3. Viewing Records & Ratings

Record while viewing:

The user says: "I'm viewing apartments and want to record each one's pros and cons"

What to record:
- Listing ID
- Viewing time
- Whether reality matches the listing description
- Lighting (1-5)
- Noise (1-5)
- Cleanliness (1-5)
- Transport convenience (1-5)
- Nearby amenities (1-5)
- Landlord/agent attitude
- Detailed pros and cons
- Overall score (1-10)
- Whether to consider signing
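One simple way to relate the five 1-5 sub-scores to the 1-10 overall score is to average and rescale them. Whether `scripts/add_viewing.py` derives the overall score this way (or at all) is not documented, so this is an assumed convention for illustration.

```python
def viewing_summary(scores):
    """Average the 1-5 sub-scores and rescale to the 1-10 overall range.

    Assumption: the overall score is a plain rescaled average; the real
    add_viewing.py may instead take the overall score directly from the user.
    """
    avg = sum(scores.values()) / len(scores)  # 1..5
    return round(avg * 2, 1)                  # 1..10

scores = {"lighting": 4, "noise": 3, "cleanliness": 5, "transport": 4, "amenities": 3}
overall = viewing_summary(scores)
print(overall)  # 7.6
```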

4. Rental Budget Calculation

Calculate the total cost:

Input: rent, deposit scheme, agent fee, other fees (property management, internet, etc.)
Output:
- First-month outlay (deposit + first month's rent + agent fee + other fees)
- Average monthly cost (rent + property management + internet + estimated utilities)
- Annual total cost

Budget advice:

  • Suggest a rent-to-income ratio based on monthly income (usually no more than 30%)
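The cost formulas above reduce to simple arithmetic. A minimal sketch (not the actual `scripts/calculate_budget.py`; parameter names and the half-month agent fee in the example are assumptions):

```python
def rental_budget(rent, deposit_months, agent_fee, monthly_extras, monthly_income=None):
    """Compute first-month outlay, average monthly cost, and annual total.

    monthly_extras bundles property management, internet, and estimated utilities.
    """
    first_month = rent * deposit_months + rent + agent_fee + monthly_extras
    monthly_avg = rent + monthly_extras
    result = {
        "first_month": first_month,
        "monthly_avg": monthly_avg,
        "annual": monthly_avg * 12,
    }
    if monthly_income:
        # Guideline from the section above: keep rent under ~30% of income.
        result["rent_ratio"] = round(rent / monthly_income, 2)
    return result

# "One month's deposit, one month's rent" on a 4500-yuan room,
# with an assumed half-month agent fee and 350 yuan of monthly extras:
budget = rental_budget(rent=4500, deposit_months=1, agent_fee=2250,
                       monthly_extras=350, monthly_income=15000)
print(budget)
```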

5. Listing Comparison Table

Generate a comparison table:

  • Select 2-5 listings to compare
  • Dimensions: rent, area, price per square meter, transport, amenities, pros/cons, viewing scores, etc.
  • Output format: Markdown table or Feishu spreadsheet
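The Markdown output could be built as below. This is a sketch of the general technique, not the actual `scripts/compare_listings.py`; the column names are assumptions.

```python
def compare_table(listings, columns):
    """Render selected listings as a Markdown comparison table."""
    header = "| " + " | ".join(columns) + " |"
    separator = "|" + "|".join(["---"] * len(columns)) + "|"
    rows = [
        "| " + " | ".join(str(l.get(c, "-")) for c in columns) + " |"
        for l in listings
    ]
    return "\n".join([header, separator] + rows)

listings = [
    {"name": "Sunshine Garden", "rent": 4500, "area": 25, "score": 7.6},
    {"name": "River View", "rent": 5200, "area": 32, "score": 8.1},
]
table = compare_table(listings, ["name", "rent", "area", "score"])
print(table)
```

Missing values render as "-", so partially recorded listings still produce a well-formed table.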

6. Rental Pitfall Guide

View the guide:

  • Pre-rental precautions
  • Viewing checklist
  • Contract signing essentials
  • Identifying common traps

See references/pitfall-guide.md

7. Batch Listing Import

Import from CSV/Excel:

The user says: "Batch-import listings"
Supported formats:
- CSV files (comma-separated)
- Excel files (.xlsx/.xls)

Required fields: name (complex name), rent
Optional fields: deposit, room_type, area, floor, orientation, decoration, transport, facilities, contact, pros, cons, source, url
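The required/optional field split above implies validation on import. A sketch of what that check could look like using the standard library (the actual behavior of `scripts/import_listings.py` is an assumption):

```python
import csv
import io

REQUIRED = {"name", "rent"}  # per the required fields above

def import_csv(text):
    """Parse listing rows from CSV text, enforcing the required columns."""
    reader = csv.DictReader(io.StringIO(text))
    missing = REQUIRED - set(reader.fieldnames or [])
    if missing:
        raise ValueError(f"missing required columns: {sorted(missing)}")
    rows = []
    for row in reader:
        row["rent"] = int(row["rent"])  # rent should be numeric
        rows.append(row)
    return rows

sample = (
    "name,rent,room_type\n"
    "Sunshine Garden,4500,single\n"
    "River View,5200,whole\n"
)
rows = import_csv(sample)
print(rows)
```

Excel support would additionally need a third-party reader such as openpyxl; the CSV path shown here uses only the standard library.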

8. Web Link Parsing

Paste a link to auto-parse it:

The user says: "Parse this link for me" + pastes a URL
Supported platforms:
- Beike (ke.com)
- Lianjia (lianjia.com)
- Douban rental groups (douban.com)
- 58.com
- Anjuke (anjuke.com)
- Other generic pages

Auto-extracts: complex name, rent, layout, area, description, etc.
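Extraction from a generic page can be sketched with simple patterns. This is a toy illustration only: each supported platform has its own markup, and the selectors `scripts/parse_url.py` actually uses are not shown on this page.

```python
import re

def parse_listing_html(html):
    """Pull a name and rent out of page HTML with naive patterns.

    Assumption: the page title holds the listing name and the rent appears
    as a figure like "4500元/月" ("4500 yuan/month") somewhere in the body.
    """
    title = re.search(r"<title>(.*?)</title>", html, re.S)
    rent = re.search(r"(\d{3,6})\s*元/月", html)
    return {
        "name": title.group(1).strip() if title else None,
        "rent": int(rent.group(1)) if rent else None,
    }

page = "<html><title>Sunshine Garden single room</title><p>4500元/月</p></html>"
info = parse_listing_html(page)
print(info)
```

A real parser would fetch the URL first and fall back to platform-specific rules; this sketch works offline on an HTML string.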

9. Image Recognition (OCR)

Upload a listing screenshot:

The user says: "Extract the listing info from this image" + uploads a screenshot
Auto-recognizes:
- Complex name
- Rent
- Layout
- Area
- Contact info
- Transport info
- Listing description

Requires an OCR toolchain:
- Option 1: pip install pytesseract pillow + brew install tesseract tesseract-lang
- Option 2: pip install easyocr

10. Site Scraping

Auto-scrape rental sites:

The user says: "Scrape listings under 5000 yuan in Beijing's Chaoyang district from Beike"
Supported platforms:
- Beike (ke.com)
- Lianjia (lianjia.com)
- 58.com
- Anjuke (anjuke.com)

Scrape parameters:
- City (Beijing, Shanghai, Guangzhou, Shenzhen, etc.)
- District / business area
- Budget cap
- Number of listings to fetch

Interactive scraping (recommended): when a site requires login, the skill opens a browser and prompts the user to log in via QR code or verification code:

python scripts/crawl_interactive.py --platform 58 --city 成都 --area 春熙路

Flow:

  1. Open the browser and navigate to the site
  2. Detect whether login is required
  3. Prompt the user to scan a QR code or enter a verification code
  4. Once logged in, the user presses Enter to continue
  5. Listing data is scraped automatically

Install dependencies:

pip3 install selenium webdriver-manager

Listing data is stored by default in ~/.openclaw/workspace/rental-data/listings.json; viewing records are stored in ~/.openclaw/workspace/rental-data/viewings.json.
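The storage layer can be exercised as below. The on-disk structure of listings.json is not shown on this page, so the top-level JSON array of records is an assumption; a temporary directory stands in for ~/.openclaw/workspace/rental-data.

```python
import json
import pathlib
import tempfile

# Stand-in for ~/.openclaw/workspace/rental-data (structure is assumed).
data_dir = pathlib.Path(tempfile.mkdtemp())
listings_file = data_dir / "listings.json"

listings = [{"id": 1, "name": "Sunshine Garden", "rent": 4500}]
listings_file.write_text(json.dumps(listings, ensure_ascii=False, indent=2))

loaded = json.loads(listings_file.read_text())
print(loaded[0]["name"])
```

ensure_ascii=False keeps Chinese complex names readable in the file instead of escaping them to \uXXXX sequences.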

Scripts

  • scripts/add_listing.py - add a new listing
  • scripts/list_listings.py - list listings (with filtering)
  • scripts/recommend_listings.py - recommend listings
  • scripts/add_viewing.py - record viewing info
  • scripts/calculate_budget.py - calculate a budget
  • scripts/compare_listings.py - generate a comparison table
  • scripts/import_listings.py - batch-import listings (CSV/Excel)
  • scripts/parse_url.py - parse a listing from a web link
  • scripts/parse_image.py - extract listing info from an image (OCR)
  • scripts/crawl_listings.py - scrape listings from rental sites
  • scripts/crawl_interactive.py - interactive scraping (prompts the user when login is required)

Workflows

Record a new listing

  1. Ask the user for basic listing info
  2. Call scripts/add_listing.py to save the data
  3. Confirm the record was saved

Recommend listings

  1. Ask for the user's requirements (location, budget, commute, etc.)
  2. Call scripts/recommend_listings.py to find matches
  3. Present the recommendations with reasons

Viewing record

  1. Ask which listing was viewed
  2. Guide the user through the scores and notes, item by item
  3. Call scripts/add_viewing.py to save the record
  4. Generate a viewing summary

Calculate a rental budget

  1. Ask for the rent, deposit scheme, etc.
  2. Call scripts/calculate_budget.py
  3. Present the budget analysis

Generate a comparison table

  1. Ask which listing IDs or names to compare
  2. Call scripts/compare_listings.py
  3. Output the comparison table

View the pitfall guide

  1. Read references/pitfall-guide.md
  2. Present the relevant content based on the user's needs

Batch-import listings

  1. Ask the user for the file path
  2. Call scripts/import_listings.py to import the data
  3. Show the import results

Web link parsing

  1. Get the link from the user
  2. Call scripts/parse_url.py to parse the page
  3. Show the extracted info and confirm before saving

Image recognition

  1. Get the path of the uploaded image
  2. Call scripts/parse_image.py for OCR
  3. Show the extracted info and confirm before saving

Site scraping

  1. Ask for the target platform, city, area, and budget
  2. Call scripts/crawl_listings.py or scripts/crawl_selenium.py
  3. Show the scraped results and confirm before saving
